Dataset fields: entry_id (string, 33 chars), published (string, 14 chars), title (string, 17–188 chars), authors (sequence), primary_category (string, 5–18 chars), categories (sequence), text (string, 2–629k chars).
http://arxiv.org/abs/2307.03973v1
20230708131320
Autonomy 2.0: The Quest for Economies of Scale
[ "Shuang Wu", "Bo Yu", "Shaoshan Liu", "Yuhao Zhu" ]
cs.RO
[ "cs.RO", "cs.AI", "cs.CY" ]
Autonomy 2.0: The Quest for Economies of Scale
Shuang Wu, Bo Yu, Shaoshan Liu, Yuhao Zhu
August 12, 2023

§ INTRODUCTION

With the advancement of robotics and AI technologies in the past decade, we have now entered the age of autonomous machines. In this new age of information technology, autonomous machines such as service robots, autonomous drones, delivery robots, and autonomous vehicles, rather than humans, will provide services <cit.>. The rise of autonomous machines promises to completely transform our economy. However, after more than a decade of intense R&D investment, autonomy has yet to deliver on its promise <cit.>.

In this article, by examining the technical challenges and economic impact of the digital economy, we argue that scalability is both necessary from a technical perspective and advantageous from an economic perspective, and is therefore the key for the autonomy industry to achieve its full potential. Nonetheless, the current development paradigm, dubbed Autonomy 1.0, scales with the number of engineers rather than with the amount of data or compute resources, preventing the autonomy industry from fully benefiting from economies of scale, especially the exponentially falling cost of compute and the explosion of available data. We further analyze the key scalability blockers and explain how a new development paradigm, dubbed Autonomy 2.0, can address these problems and greatly boost the autonomy industry.

§ SCALABILITY OF THE DIGITAL ECONOMY

The digital economy refers to the use of information technology to create, market, distribute, and consume goods and services. It has been the key driving force of the world's economic growth in the past two decades. Consider the internet industry, for instance. The internet industry accounted for 21% of GDP growth in mature economies from 2005 to 2010 <cit.>. In 2019, it contributed $2.1 trillion to the U.S. economy, about 10% of U.S. GDP, making it the fourth largest industry in the U.S. economy (behind only real estate, government, and manufacturing) <cit.>. Along with its contribution to the economy, the internet industry provides nearly 6 million direct jobs, accounting for 4% of U.S. employment.

Two key forces fuel the continuous growth of the digital economy, both of which have to do with scalability:

* The commoditization of computing power, as exemplified by Moore's law <cit.>, is the greatest driving force behind the digital industry. The most successful digital economy companies have developed core technology stacks that scale with the available compute resources and data, not with the size of their engineering teams. One remarkable example is WhatsApp: when acquired by Facebook for $19 billion, WhatsApp had only 32 engineers serving over 450 million users.

* The breakthrough of artificial intelligence in the last decade has demonstrated that, in addition to many technical improvements and tuning, scaling neural network models and training datasets has been our most effective strategy for achieving continuous performance gains <cit.>.

Autonomy technologies, such as those found in autonomous driving, are widely seen as the pillar of the next era of the digital economy. However, today's autonomous machine technologies, dubbed Autonomy 1.0, represent everything a scalable industry should not do.
To illustrate the problem facing autonomous driving companies, Figure <ref> analyzes the R&D expenditure and revenue per employee of two leading public digital economy companies, Microsoft (representing the software industry) and Alphabet (representing the internet industry), and two public autonomous driving companies, TuSimple (representing the robot truck industry) and Aurora (representing the robotaxi industry). We selected these autonomous driving companies for the accessibility of their financial data. Both Alphabet and Microsoft spend less than 20% of their total operating expenditures on R&D. For instance, Google employs fewer than 30,000 engineers while serving over 4.3 billion users. Their scalability is constrained mainly by the available compute resources and data rather than by the number of engineers. In comparison, both TuSimple and Aurora spend more than 70% of their operating expenditures on R&D. Often, to reach new users or to deploy services in new locations, autonomous driving companies need to pour additional R&D resources into re-calibrating their existing technology stacks for the new environments. Hence, their scalability is constrained by R&D investment or, more directly, by the number of engineers. As a result, Alphabet and Microsoft generate $1.5 million and $0.8 million of revenue per employee, respectively, while maintaining high growth rates, whereas TuSimple and Aurora generate negligible revenue per employee and struggle to grow. For the autonomy industry to achieve economies of scale, we have to revolutionize the R&D paradigm. In the following sections, we describe the key scalability issues of Autonomy 1.0 and outline promising solutions, already on the horizon, for achieving scalability in Autonomy 2.0.

§ AUTONOMY 1.0: THE END OF THE ROAD FOR AN AGING ARCHITECTURE

Current commercial autonomous driving systems mostly inherited their software architecture from competitors in the DARPA Grand Challenges between 2005 and 2007 <cit.>. This architecture, while it represented a great leap in autonomy technology at the time, has shown its age and has become difficult to scale after more than a decade of intense industry efforts to improve and adapt it. Figure <ref> illustrates Autonomy 1.0's scalability problems using autonomous driving operation data from California from 2018 to 2022. Over these five years, although an enormous amount of investment has been poured into autonomous driving, we did not observe significant growth in the number of vehicles under operation, which increased only from 400 in 2018 to 1,500 in 2022. The operating mileage per year increased only from 2 million miles to 5 million miles. Most importantly, there are still over 2,000 disengagement incidents per year. Given this trend, Autonomy 1.0 remains years away from serious commercial operation of autonomous vehicles.

Autonomy 1.0 is modular and consists of functional modules such as sensing, perception, localization, high-definition maps, prediction, planning, and control <cit.>, each of which further consists of several functional sub-modules integrated by explicit, hand-crafted logic. Most decision-making tasks, such as planning, which is responsible for generating optimal and drivable paths, are solved with constraint optimization under a set of hand-tuned rules.
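To make the role of hand-tuned rules concrete, the toy snippet below sketches the kind of rule-based trajectory scoring an Autonomy 1.0 planner performs. It is purely illustrative: the rule names, thresholds, and weights are hypothetical and not taken from any particular system.

```python
# Illustrative only: a toy, hand-tuned rule-based trajectory scorer in the
# spirit of an Autonomy 1.0 planner. Rule names, thresholds, and weights are
# hypothetical; production stacks contain hundreds of such interacting rules.
from dataclasses import dataclass

@dataclass
class Trajectory:
    min_gap_m: float      # closest distance to any obstacle along the path
    lateral_jerk: float   # comfort proxy
    progress_m: float     # distance traveled toward the goal

# Hand-tuned weights: adapting to a new city or scenario typically means
# re-tuning these (and adding new rules), which is engineer-bound work.
WEIGHTS = {"safety": 50.0, "comfort": 1.5, "progress": 2.0}

def score(traj: Trajectory) -> float:
    safety = -WEIGHTS["safety"] * max(0.0, 2.0 - traj.min_gap_m)  # rule: keep a 2 m gap
    comfort = -WEIGHTS["comfort"] * traj.lateral_jerk
    progress = WEIGHTS["progress"] * traj.progress_m
    return safety + comfort + progress

candidates = [Trajectory(2.5, 0.2, 30.0), Trajectory(1.0, 0.05, 45.0)]
print(max(candidates, key=score))  # picks the path that respects the safety-gap rule
```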
When a disengagement incident happens, engineers usually have to go through a long debugging process to identify which specific module or rule was the root cause of the disengagement, and then optimize that module or develop logic changes to handle the specific problem. Often, due to intricate dependencies and coupling among modules or rules, the new software version leads to other problems that need to be addressed, greatly slowing down the development process. Over time, the Autonomy 1.0 software stack has become a complicated collection of ad-hoc rules and interdependent modules for handling various long-tail events, which is increasingly difficult to debug, maintain, and evolve for improved performance.

Taking the open-source project Apollo <cit.> as an example, its perception module alone consists of multiple individual learning-based sub-modules for object detection in 2D images, LiDAR point cloud segmentation, traffic light detection, lane detection, and other tasks. To integrate information from these perception sub-modules, a post-processing module then fuses 2D and 3D information and outputs an integrated representation of the environment to the downstream prediction module. The planning module makes decisions and plans routes based on data from the prediction, localization, and map modules. These modules have strong dependencies among themselves. Making changes to one module not only impacts overall system performance, possibly violating real-time constraints and resource allocations, but also impacts the algorithmic performance of downstream modules due to the distributional shift of their input data. The whole system has become complicated and even brittle, demanding an enormous amount of engineering resources to maintain, let alone to scale. We summarize Autonomy 1.0's three major scalability bottlenecks below.

* Complexity Bottleneck: The design of Autonomy 1.0 systems demands extensive engineering effort to define software interfaces, distribute data among modules, and map various workloads onto a heterogeneous computing system. Given this complexity, it is challenging to debug and continuously update the software stack. The myriad components also make it challenging to schedule tasks and optimize the latency of the unwieldy stack at run-time. As a result, typical Autonomy 1.0 systems exhibit large latency variations <cit.>, which can harm the reliability of the autonomous driving system.

* Human-Data Bottleneck: Autonomy 1.0 systems depend on fleets of physical vehicles operated by humans to collect data and perform system-level tests. This is a time-consuming and expensive process that is difficult to scale out. The scalability issue will only get worse as more modules of the autonomy stack adopt data-driven approaches, which require continuous data collection and labeling, because any specific instance of recorded data reflects only a particular subset of world states.

* Generalization Bottleneck: Autonomy 1.0 systems consist of rule-based processing logic and hand-crafted interfaces, which makes them difficult to generalize to new environments. The complexity and diversity of real-world environments make it difficult to design an autonomy system that anticipates all possible challenging scenarios, whether for perception or for planning. As a result, Autonomy 1.0 systems are often over-fitted to frequently operated regions and common situations.
To handle new environments and newly encountered rare cases, additional changes to the system are required, which becomes increasingly difficult and time-consuming.

§ AUTONOMY 2.0: SCALABILITY IS EVERYTHING

Recent research breakthroughs in artificial intelligence, such as the Transformer <cit.>, large language models (LLMs) <cit.>, and offline reinforcement learning <cit.>, have sparked new ideas in the architecture design, data and model infrastructure, and engineering practices of autonomous driving, leading to a new development paradigm, which we dub Autonomy 2.0. The key to Autonomy 2.0 is scalability, which is delivered through two ingredients: 1) a software stack that improves continuously with increasing amounts of data and compute resources; 2) a simulation paradigm based on digital twins for algorithmic exploration using large-scale, real-time, realistic data before deployment. Figure <ref> illustrates the differences between the Autonomy 1.0 and Autonomy 2.0 system architectures. Table <ref> summarizes how Autonomy 2.0 addresses the three bottlenecks of Autonomy 1.0.

§.§ Learning-Native Software Stack

Any autonomous machine performs two main tasks, perception and action, reflecting the natural dichotomy of the past and the future. The perception task observes the environment and infers its current state based on the observations so far. The action task, based on these observations, chooses an appropriate sequence of actions to achieve goals while considering how the environment may evolve in the near future. The software stack in Autonomy 2.0 thus naturally consists of a perception module and an action module. Unlike in Autonomy 1.0, where each module is implemented by a number of sub-modules, there is strong evidence that in Autonomy 2.0 the two modules will each be implemented as a single large deep learning model, likely based on the transformer or its variants because of their ability to generalize, as demonstrated by their recent successes in LLMs.

Benefits. Before describing what the two-model architecture will look like in Autonomy 2.0, we first discuss why such an architectural design choice is key to scalability. The two-model architecture addresses the Complexity Bottleneck by drastically reducing the amount of code that needs to be maintained and reasoned about. Figure <ref>a) compares the lines of code in the Apollo Perception module <cit.>, which represents the Autonomy 1.0 approach, with an example of an Autonomy 2.0 perception module, BEVFormer <cit.>. The Apollo Perception module is ten times larger than BEVFormer, while BEVFormer has achieved state-of-the-art perception results. The architecture also handles corner cases through data-driven model learning instead of hand-crafted logic, and thus addresses the Generalization Bottleneck of Autonomy 1.0. In Figure <ref>b), we analyze over 400 issues associated with the Apollo planning module: 47% of the issues are related to Apollo failing to handle a specific use case, and 30% are related to software engineering problems such as interfaces with other modules. In Autonomy 1.0, many hand-crafted rules are implemented to handle specific use cases. As the rules accumulate, software quality naturally becomes an issue.

Architectural Design. The perception and action modules have different goals and traditionally require distinct algorithmic approaches. The perception module is trained using supervised and self-supervised learning to infer the unique ground truth of world states.
In contrast, the action module needs to search over and choose from many acceptable action sequences while anticipating the behaviors of other agents. Therefore, the action module makes use of methods from reinforcement learning, imitation learning, and model predictive control. Interestingly, while the fundamental distinctions between the two modules have not changed in Autonomy 2.0, there is a growing convergence in their implementation: the recent success of large language models (LLMs) <cit.> in comprehending large amounts of information and performing multiple sub-tasks suggests that both modules can be implemented using a similar architecture based on the Transformer <cit.>. The transformer is a great algorithmic substrate for both the perception and action modules because of its ability to generalize. For perception, a transformer can effectively fuse perceptual data from multiple sensors and multiple moments in time into a unified representation, avoiding the information loss caused by sparsification and module serialization. For action, the sequential nature of the transformer makes it a perfect fit for processing and generating temporal data, especially for sampling multiple possible future paths.

Perception. In Autonomy 1.0, the perception module consists of multiple DNNs, each trained separately to support an individual task such as 2D/3D object detection, segmentation, or tracking. In contrast, the perception module in Autonomy 2.0 uses a single transformer backbone to provide a unified representation of the ego-vehicle's environment (e.g., a 2D Bird's Eye View (BEV) <cit.> or 3D occupancy <cit.>), to which a number of decoder "heads" are attached, each tuned for an individual task. This single-transformer approach to perception has been gaining popularity across the AV industry. For instance, it is the approach described by Tesla engineers in their "AI Day 2022" event <cit.>, and it has been deployed by XPENG, another leading intelligent electric vehicle company <cit.>.

Action. The action module anticipates a combinatorially large number of possible "world trajectories", hypothesizes multiple action sequences, and evaluates them to send the optimal one to the actuators. In Autonomy 1.0, the action module is implemented as a set of sub-modules for prediction, planning, and control. The action module in Autonomy 2.0 is learned end-to-end using transformer-inspired architectures for sequential decision making <cit.>. The action transformer incorporates two models: a policy model and a world model. First, the pre-trained, transformer-based policy model leverages large amounts of historical data for agent behavior prediction, ego-vehicle decision making, and trajectory planning <cit.>. Second, the world model is essentially a behaviorally realistic simulator of the world, validated against real-world data. The two models are connected in a closed loop so that the policies can be fine-tuned online <cit.>.
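To make the two-model idea concrete, the sketch below shows one possible shape of such a stack: a single transformer backbone that produces a unified BEV-style representation consumed by lightweight task heads, paired with an action module combining a policy network and a learned world model. This is a minimal illustrative sketch, not the architecture of any system cited above; all class names, layer choices, and dimensions are assumptions made for exposition.

```python
# Minimal sketch of the Autonomy 2.0 "two-model" idea (illustrative assumptions only).
import torch
import torch.nn as nn

class PerceptionModel(nn.Module):
    """One shared transformer backbone -> unified BEV tokens -> per-task heads."""
    def __init__(self, d_model=256, n_layers=4, n_bev_tokens=64):
        super().__init__()
        self.bev_queries = nn.Parameter(torch.randn(n_bev_tokens, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Task-specific decoder "heads" attached to the shared representation.
        self.heads = nn.ModuleDict({
            "detection": nn.Linear(d_model, 7),   # e.g. box parameters per BEV slot
            "occupancy": nn.Linear(d_model, 1),   # occupied / free logit per BEV slot
        })

    def forward(self, sensor_tokens):             # (batch, n_tokens, d_model)
        b = sensor_tokens.shape[0]
        queries = self.bev_queries.expand(b, -1, -1)
        fused = self.backbone(torch.cat([queries, sensor_tokens], dim=1))
        bev = fused[:, : self.bev_queries.shape[0]]   # unified BEV representation
        return bev, {name: head(bev) for name, head in self.heads.items()}

class ActionModel(nn.Module):
    """Policy proposes an action sequence; the world model rolls out imagined futures."""
    def __init__(self, d_model=256):
        super().__init__()
        self.policy = nn.GRU(d_model, d_model, batch_first=True)
        self.world_model = nn.GRU(d_model, d_model, batch_first=True)
        self.to_action = nn.Linear(d_model, 2)    # e.g. steering, acceleration

    def forward(self, scene_summary, horizon=8):
        x = scene_summary.unsqueeze(1).repeat(1, horizon, 1)
        plan, _ = self.policy(x)
        imagined, _ = self.world_model(plan)      # closed loop: futures to refine the policy
        return self.to_action(plan), imagined

# Toy usage with random stand-ins for fused camera/LiDAR features.
perception, action = PerceptionModel(), ActionModel()
sensor_tokens = torch.randn(2, 128, 256)
bev, task_outputs = perception(sensor_tokens)
plan, imagined_states = action(bev.mean(dim=1))
```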
§.§ Digital-Twin Based Development and Deployment

Autonomy 1.0 relies almost exclusively on human effort for tasks such as manual data labeling and physical testing, posing a scalability bottleneck. Autonomy 2.0 addresses the Human-Data Bottleneck using an emerging simulation technology called digital twins, in which a virtual representation acts as the counterpart of the physical world. As highlighted by the recent National Artificial Intelligence R&D Strategic Plan 2023 published by the White House <cit.>, digital twins have fueled many real-world applications (e.g., urban planning and management of smart cities, and additive manufacturing) and are a main strategy for sustaining AI technologies.

Under the digital-twin paradigm, one instruments the physical system to collect real-world, real-time data, which is then interactively shared with the digital counterpart. In the digital world, one can further synthesize scenarios (e.g., traffic) with statistically significant fidelity, i.e., with behavioral distributions similar to those of human driving. Developing and testing autonomous driving software using synthesized virtual scenarios accelerates the evaluation process by 10^3 to 10^5 times <cit.> and reduces testing costs by two orders of magnitude <cit.> compared to the physical-only approach of Autonomy 1.0. Figure <ref>c) illustrates the R&D cost efficiency of Autonomy 1.0, at $180/hr for physical testing, versus Autonomy 2.0, at $2/hr for virtual testing, a 100-fold improvement <cit.>. Figure <ref>d) illustrates the R&D efficiency of Autonomy 1.0, at around 3,000 miles per physical vehicle per year of physical testing <cit.>, versus Autonomy 2.0, at over 3 million miles per virtual vehicle per year of simulation, a 1,000-fold improvement <cit.>. Combining these two factors brings over a 10^5-fold improvement for the same engineering investment in Autonomy 2.0; scalability is then constrained only by the available compute resources rather than by the number of engineers, effectively eliminating the human-data bottleneck.

§ SUMMARY

The autonomy economy, i.e., the use of autonomous machines to provide goods and services, will fuel the world's economic growth in the coming decades. Huge investments are pouring into the autonomy economy. Such investments will only be justified if autonomous machines can reach, and provide utility for, every person on the planet. As in today's digital economy, scalability will necessarily be the winning formula in this process. The current practice of developing and deploying autonomous machines carries the historical baggage of the complexity bottleneck, the human-data bottleneck, and the generalization bottleneck, and is thus unscalable. We must start from a clean slate and rethink the architecture design of autonomous machines. We posit that Autonomy 2.0 will embrace a learning-native software stack, which addresses the complexity bottleneck through software simplicity and the generalization bottleneck through end-to-end learning. Digital-twin technologies will have to be integrated throughout the development, evaluation, and deployment cycle of Autonomy 2.0 to address the human-data bottleneck.
http://arxiv.org/abs/2307.04214v1
20230709160403
Non-invariance of Gaussian Measures under the 2D Euler Flow
[ "Jacob Bedrossian", "Mickaël Latocca" ]
math.AP
[ "math.AP", "math.PR" ]
In this article we consider the two-dimensional incompressible Euler equations and give a sufficient condition on Gaussian measures with jointly independent Fourier coefficients supported on H^σ(𝕋^2) (σ>3) such that these measures are not invariant (in vorticity form). We show that this condition holds on an open and dense set in suitable topologies (and so is generic in a Baire category sense) and give some explicit examples of Gaussian measures which are not invariant. We also pose a few related conjectures which we believe to be approachable.

Non-invariance of Gaussian Measures under the 2D Euler Flow
Jacob Bedrossian, Mickaël Latocca
August 12, 2023

§ INTRODUCTION

§.§ The 2D Euler equations and vorticity formulation

In this article we consider the two-dimensional incompressible Euler equations posed on 𝕋^2 in vorticity formulation, written here as

(E)  ∂_t Ω + u·∇Ω = 0 for (t,x) ∈ (0,∞) × 𝕋^2,
     Ω(0,x) = Ω_0(x) for x ∈ 𝕋^2,

where Ω : 𝕋^2 → ℝ is the scalar vorticity and u : 𝕋^2 → ℝ^2 is the velocity, which is related to Ω via the Biot-Savart law

     u = ∇^⊥ (-Δ)^-1 Ω,
     Ω = ∇∧ u = ∂_1 u_2 - ∂_2 u_1,

where ∇^⊥ = (-∂_2, ∂_1). For σ > 1, the vorticity equations (<ref>) are globally well-posed in 𝒞^0([0,∞),H^σ(𝕋^2)) (originally due to Kato <cit.>, following the older classical work in C^0,α Hölder spaces by Hölder and Wolibner holder,Wolibner33). The central question regarding the 2D incompressible Euler equations is to understand the long-time dynamics in various settings. One of the main questions is to determine whether one has the formation of small scales in the vorticity. It is widely expected that `generic' solutions to the 2D Euler equations create smaller scales, leading to unbounded growth in the H^s and C^0,α norms. Even more is expected: that generic orbits are not pre-compact in L^2 due to the loss of enstrophy (the L^2 mass of vorticity) to higher frequencies. The latter is known as Šverák's conjecture (see e.g. <cit.>*Conjecture 3). See the recent review article <cit.> and <ref> below for more discussion.

A natural question about the long-term dynamics of a PDE is to find measures which are left invariant under its flow. In this article we are specifically interested in the possible invariance of Gaussian measures under the flow Φ_t of (<ref>) (or quasi-invariance; see <ref> below). It is known that the Gibbs measure of (<ref>) – the spatial white noise, supported on vorticities which are H^-1-ε for all ε > 0 – is invariant under the flow in a suitable sense; see albeverioCruzeiro, flandoli. In the non-periodic setting <cit.>, Gaussian invariant measures have been constructed for the two-dimensional Euler equations at very low regularity (typically Ω ∈ H^β(ℝ^2), β < -2). In this article we show that for sufficiently regular initial data (σ > 3), Gaussian measures with independent Fourier coefficients are generically not invariant under Φ_t. See <ref> for related open directions and <ref> for more discussion of the motivations and existing literature.

§.§ Random initial data and invariant measures

Let (Ξ, 𝒜, ℙ) be a probability space. In this article we solve (<ref>) with Gaussian random initial data of the form Ω_0^ω(x) = ∑_n ∈ℤ^2 a_n g_n^ω e^i n· x, where the (g_n)_n∈ℤ^2 are independent and identically distributed standard complex Gaussian random variables, and (a_n)_n ∈ℤ^2 is a complex-valued sequence.
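As a concrete illustration (not part of the paper), the snippet below draws one sample of such random initial data with a frequency cutoff N, using the symmetry conventions detailed in the next paragraph (a_0 = 0, a_{-n} = a_n real, g_{-n} = conj(g_n), so that Ω_0 is real-valued); the coefficient choice follows the explicit example (ii) of the main results below.

```python
# A minimal sketch (not from the paper): sample a truncated version of the
# random initial vorticity Omega_0(x) = sum_n a_n g_n e^{i n.x} on the torus,
# with a_0 = 0, a_{-n} = a_n real, and g_{-n} = conj(g_n) so the field is real.
import numpy as np

def sample_vorticity(a, N, grid=64, seed=0):
    """a(n1, n2) -> real coefficient a_n; returns Omega_0 sampled on a grid x grid torus."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 2 * np.pi, grid, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    omega = np.zeros((grid, grid))
    for n1 in range(-N, N + 1):
        for n2 in range(-N, N + 1):
            # sum over Z^2_+ \ {0}; the n and -n terms combine into 2 Re(a_n g_n e^{i n.x})
            if not (n1 > 0 or (n1 == 0 and n2 > 0)):
                continue
            g = rng.standard_normal() + 1j * rng.standard_normal()   # g_n = r_n + i s_n
            omega += 2.0 * a(n1, n2) * np.real(g * np.exp(1j * (n1 * X + n2 * Y)))
    return omega

# Coefficients from explicit example (ii) below: a_n = <n>^-5 / log(3 + <n>^2).
a = lambda n1, n2: (1.0 + n1**2 + n2**2) ** -2.5 / np.log(3.0 + 1.0 + n1**2 + n2**2)
Omega0 = sample_vorticity(a, N=16)
print(Omega0.shape, float(Omega0.mean()))   # mean vorticity is ~0 since a_0 = 0
```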
Note that replacing a_n ↦ a_n e^i θ_n for any sequence of phases θ_n ∈ ℝ defines the same random initial data in law. For this reason we restrict to real-valued a_n without loss of generality. Let us be more precise and let ℤ^2_+ = {(n_1,n_2) ∈ ℤ^2 : n_1 > 0, or n_1 = 0 and n_2 ⩾ 0}. We will work under the following assumptions on (g_n)_n∈ℤ^2.

* For all n ∈ ℤ^2_+, g_n = r_n + i s_n, where {r_n, s_n}_n∈ℤ^2_+ are independent and identically distributed real standard Gaussians 𝒩(0,1).

* g_-n = g̅_n for all n ∈ ℤ^2_+.

Let us use the norm defined by (a_n)_h^σ = (∑_n ∈ℤ^2 ⟨ n ⟩^2σ |a_n|^2)^1/2, and make the following assumption on (a_n)_n∈ℤ^2, tailored to ensure that almost surely Ω_0^ω ∈ H^σ: we assume that (a_n)_n∈ℤ^2 ∈ h^σ, where h^σ = {(a_n)_n∈ℤ^2 : a_n ∈ ℝ, a_0 = 0, a_-n = a_n, (a_n)_h^σ < ∞}. We recall that h^σ is a Hilbert space equipped with the inner product associated with the complete norm ·_h^σ. Given the isotropic nature of (<ref>) we also consider the radially symmetric sequences belonging to the space h_rad^σ = {(a_n) ∈ h^σ : a_m = a_n for all m, n with |m| = |n|}.

<ref> and <ref> ensure that (<ref>) is real-valued. The condition a_0 = 0 ensures that ∫_𝕋^2 Ω_0(x) dx = 0, which should be the case since Ω_0 = ∇∧ u(0,·). Let us denote by μ_(a_n) the law that the random variable (<ref>) induces on L^2(𝕋^2,ℝ) (where we assume that (g_n)_n∈ℤ^2 satisfies <ref>). The set of corresponding measures is therefore defined by ℳ^σ = {μ_(a_n) : (a_n)_n∈ℤ^2 ∈ h^σ}, as well as its radial counterpart ℳ^σ_rad = {μ_(a_n) : (a_n)_n∈ℤ^2 ∈ h_rad^σ}. We remark that for any μ ∈ ℳ^σ there holds 𝔼_μ[Ω_H^σ^2] = ∫_L^2 Ω_H^σ^2 dμ(Ω) = 𝔼[Ω_0^ω^2_H^σ] = ∑_n∈ℤ^2 ⟨ n ⟩^2σ |a_n|^2 < ∞, which implies that supp μ ⊂ H^σ(𝕋^2).

Let us recall that a measure μ on H^σ is said to be invariant under Φ_t if for any t ∈ ℝ there holds (Φ_t)_*μ = μ, i.e. for all measurable A ⊂ H^σ, μ(Ω_0: Ω(t) ∈ A) = μ(Ω_0: Ω_0 ∈ A). A measure μ is said to be quasi-invariant under the flow Φ_t if for all t ∈ ℝ, μ_t = (Φ_t)_*μ is equivalent to μ, that is μ_t ≪ μ ≪ μ_t. In the following we will consider the following quantified version of quasi-invariance: there exist η > 2 and α > 0 such that for all measurable sets A ⊂ H^σ and all t ∈ ℝ, there holds μ_t(A) ⩽ μ(A)^(1+|t|^η)^-α. Observe that when t → 0 there holds μ(A)^(1+|t|^η)^-α → μ(A), and also that μ(A)^(1+|t|^η)^-α → 1 as t → ∞, which is consistent with what is expected of the evolution of a quasi-invariant measure, namely that the `information' deteriorates at large times. Applying (<ref>) at time -t instead of t and to the set Φ_-t A instead of A yields μ(A) ⩽ μ_t(A)^(1+|t|^η)^-α. Therefore, <ref> should be viewed as a quantitative version of μ_t ≪ μ ≪ μ_t. For non-time-reversible flows, one would need to explicitly state the reverse of (<ref>).

§.§ Main results

Our first main result is a criterion which detects measures in ℳ^σ that are not (quasi-)invariant under the flow of (<ref>). For any sequence (a_n) we introduce the number γ_s,(a_n) defined by γ_s,(a_n) = 1/(2π)^4 ∑_n,q ∈ℤ^2 |a_n|^2|a_q|^2 (q· n^⊥)^2/2 (1/|n|^2 - 1/|q|^2) β_n,q, where β_n,q = ⟨ n ⟩^2s(1/|q|^2-1/|q+n|^2) + ⟨ q ⟩^2s(1/|q+n|^2-1/|n|^2) + ⟨ n + q ⟩^2s(1/|n|^2-1/|q|^2). We prove the following result concerning γ_s,(a_n).

Let σ > 3 and μ = μ_(a_n) ∈ ℳ^σ, where ℳ^σ is defined by (<ref>). Let γ_s = γ_s,(a_n), defined by (<ref>). If γ_s ≠ 0 for some s ∈ (0,σ-3), then μ is not quasi-invariant in the sense of <ref> under the flow of (<ref>). In particular μ is not invariant under the flow of (<ref>).
The proof basically proceeds by showing that initial conditions sampled from any invariant measure μ which satisfies γ ≠ 0 will satisfy 𝔼[Ω^ω(t)_H^s^2] - 𝔼[Ω^ω_0_H^s^2] ≈ γ t^2 + 𝒪(t^3) as t → 0, leading to a contradiction. This is done by justifying a Taylor expansion in t for the evolution of the vorticity along the flow of (<ref>). It is important to note that the proof uses the assumption of (quasi-)invariance of the measure, and the a priori estimates it implies, in order to justify the approximation.

Our next task is to study the condition γ = 0 and prove that for a large class of (a_n)_n∈ℤ^2 belonging to h^σ, there holds γ_s,(a_n) ≠ 0 for some s ∈ (0,σ]. To this end, let us introduce the set A^σ = {(a_n)_n∈ℤ^2 ∈ h^σ : there exists s ∈ (0,σ-3) such that γ_s,(a_n) ≠ 0}. Denote by 𝒜^σ ⊂ ℳ^σ the set of corresponding measures. Similarly we also define A_rad^σ and 𝒜_rad^σ. <ref> guarantees that all of the measures in 𝒜^σ are not (quasi-)invariant under the two-dimensional Euler flow. Furthermore, we have the following:

[Genericity in the Baire sense] Let σ > 3. (i) The set A^σ (resp. A^σ_rad) is an open, dense set in h^σ (resp. h^σ_rad). (ii) Similarly, 𝒜^σ (resp. 𝒜^σ_rad) is an open, dense set of ℳ^σ (resp. ℳ^σ_rad) equipped with the topology induced by the Wasserstein W_1 distance associated with the H^σ metric.

We recall the Kantorovich-Rubinstein duality formulation of the Wasserstein distance: for φ : H^σ → ℝ, we define φ_Lip = sup_f ∈ H^σ |φ(f)| + sup_f,g ∈ H^σ, f ≠ g |φ(f) - φ(g)|/f - g_H^σ, and W_1(μ,ν) = sup_φ : H^σ→ℝ, φ_Lip ⩽ 1 |∫_H^σ φ(u) dν(u) - ∫_H^σ φ(u) dμ(u)|. It follows that the set of all measures in ℳ^σ which are invariant under the two-dimensional Euler flow is included in a closed, nowhere dense set of ℳ^σ (in the Wasserstein-1 topology), which is hence a meager set in the sense of Baire category. We point out that this notion of genericity is strong enough to ensure that many mutually singular measures are included. This will follow from the Feldman-Hájek theorem.

Let σ > 3. Every open set U ⊂ h^σ contains an uncountable set I ⊂ U such that the corresponding measures (μ_x)_x∈I ⊂ ℳ^σ are not invariant (nor quasi-invariant in the sense of <ref>) under the two-dimensional Euler equation and are pairwise mutually singular.

One obvious disadvantage of genericity results is that they provide no specific examples. However, we are able to provide some explicit examples of non-invariant (and invariant) measures.

[Examples of non-invariant measures] (i) Let μ = μ_(a_n), where the sequence (a_n)_n∈ℤ^2 has compact support and satisfies <ref>. Then μ is invariant under the flow of (<ref>) if and only if supp(a_n) is included in a line of ℤ^2 passing through the origin, or included in a circle centered at the origin. In the case where μ is not invariant, it is also non-quasi-invariant in the sense of <ref>. (ii) Let (a_n)_n∈ℤ^2 be defined by a_0 = 0 and a_n = 1/(⟨ n⟩^5 log(3+⟨ n⟩^2)) for n ≠ 0. Then the measure μ = μ_(a_n) ∈ ℳ_rad^4 satisfies γ_s,(a_n) > 0 for some s ∈ (0,1) and is therefore non-quasi-invariant (in the sense of <ref>) under the flow of (<ref>). In particular μ is not invariant under the flow of (<ref>).

It was proven in <cit.> that the only stationary solutions of (<ref>) with compactly supported Fourier transforms are those whose Fourier transform is supported in a line in ℤ^2 passing through the origin or in a circle centered at the origin. Part (i) of <ref> can be considered an extension of this result to invariant measures.
The results we prove suggest a difference from the nonlinear dispersive equation setting, for which many quasi-invariance results are known; see e.g. forlanoSeong22,planchonTzvetkovVisciglia,tzvetkov,OhSosoeTzvetkov18. In our setting, we conjecture stronger non-quasi-invariance results below (<ref> and <ref>). To the best of the authors' knowledge, non-quasi-invariance of (sufficiently regular) Gaussian measures has only been proven for one example of a non-dispersive ODE, namely ∂_t u = |u|^2 u posed on the torus, which is explicitly solvable; see <cit.>*Theorem 1.6. It would be interesting to further investigate the possible non-quasi-invariance of Gaussian measures. It seems, however, that the method used in <cit.> is difficult to implement for the Euler equations.

We end with a few conjectures. The first conjecture is that Part (i) of <ref> already contains essentially all of the counterexamples to non-invariance. Notice that <ref> specifically concerns Gaussian measures for which the Fourier modes are mutually independent. It is possible that one could construct a variety of invariant Gaussian measures of a very different nature (e.g. supported on quasi-periodic motions). These should also be `non-generic' in some sense, but it is less clear how to characterize this at the moment.

Let σ > 0 (or possibly even σ > -1 if a suitable local well-posedness theory can be established). A measure μ = μ_(a_n) ∈ ℳ^σ is quasi-invariant in the sense of <ref> under the flow of (<ref>) if and only if the sequence (a_n)_n ∈ℤ^2 is supported in a line of ℤ^2 passing through the origin or in a circle centered at the origin.

Proving non-invariance (rather than non-quasi-invariance) might be a much easier task. Note that for σ ∈ (0,1), the random initial data (<ref>) is almost surely in W^σ,p for all p < ∞, and in particular, the initial conditions are almost surely C^0,α for some α > 0 (see e.g. <cit.>). Hence, these initial data lie in a classical local well-posedness space. For σ = 0 the initial conditions are almost surely in L^2 ∩ BMO ∖ L^∞ <cit.>*Chapter 5; however, we are unsure whether this lies in a local well-posedness class (though we believe it likely does; see <cit.> for well-posedness results in borderline regularity spaces). For σ < 0, the initial conditions are almost surely distribution-valued and we are not aware of any theory that would even provide local existence of solutions.

In the recent work <cit.>, the author constructs a large set of smooth initial data (random Gaussian initial data of the form (<ref>)) which gives rise to global solutions to the Euler equations posed on the torus of size L. It is proven that in the large-box limit L ≫ 1, the Wick formula remains approximately true. The setting considered in <cit.> differs from ours and is closely connected to a large-box effect. Note also that it could be the case that the evolution of a Gaussian measure under the Euler flow remains Gaussian, but that this evolution is not quasi-invariant, therefore not contradicting <ref>.

As our article will make clear, the proof of non-invariance does not require γ_s,(a_n) > 0, but only that γ_s,(a_n) ≠ 0. The proof suggests (but does not quite prove) that the H^s norm of the randomly initialized data grows on average for short times if γ_s,(a_n) > 0, and actually decays if γ_s,(a_n) < 0. We are not aware of any example of a sequence (a_n)_n∈ℤ^2 such that γ_s,(a_n) < 0.
Specifically, we conjecture that such examples do not exist in ℳ^σ and that data randomly initialized in this manner will grow in H^s on average. We are unsure under what conditions one could conjecture that short-time norm growth will happen almost-surely. Let σ > 0. If (a_n)_ n ∈ℤ^2∈ h^σ is not exclusively supported in a line passing through the origin or a circle centered at the origin, then for initial data sampled using (<ref>) there exists C(s)>0 such that there holds Ω^ω (t)_H^s⩾Ω_0^ω_H^s + Ct^2. for all t ∈ [0,t_0) and all s∈(0,σ], where t_0=t_0((a_n),s). Of course, we also believe the weaker version of Šverák's conjecture: lim_t →∞Ω^ω(t)_H^s = +∞, and even the original version: that the random orbits Ω^ω(t)_t ∈ℝ are almost-surely not pre-compact in L^2. However, these stronger conjectures seem very far out of reach at the current time, whereas <ref> and <ref> seem likely to be manageable. §.§ Related literature and motivation for the results and conjectures §.§.§ Creation of small scales in 2D Euler As mentioned in the introduction, it is classical that (<ref>) is well-posed in 𝒞^0([0,∞),H^σ(𝕋^2)) as soon as σ>1. Moreover, in <cit.> it was proven that for all t ⩾ 0: Ω(t)_H^σ⩽ C(Ω_0)e^e^Ct (a similar double-exponential bound for C^0,α was proved earlier by Wolibner <cit.>). Optimality of the bound (<ref>) in generality remains an open question. On 𝕋^2, the examples of gradient growth are generally based on the Bahouri-Chemin cross <cit.>, which uses a hyperbolic point in the flow to drive rapid growth of the gradient at a single point. Exponential growth can be obtained <cit.>, and double exponential growth can be maintained on finite time <cit.>. If one replaces 𝕋^2 with the unit disk of ℝ^2, Kiselev and Šverák showed that the double exponential growth (<ref>) is sharp <cit.> (see <cit.> for the extension to any symmetrical smooth domain). In some sense, these examples are also based on the Bahouri-Chemin cross, but also inspired by the computations of finite-time singularity of the 3D Euler equations in a cylinder <cit.> (which has recently been made mathematically rigorous <cit.>). Another class of norm growth results is based on shearing. The first example of “generic” norm growth was demonstrated by Nadirashvili <cit.>, who demonstrated examples of small perturbations of the Couette flow in a periodic channel and the corresponding Taylor-Couette in an annulus which lead to unbounded norm growth like ω(t)_L^∞≳ t. The recent results of <cit.> greatly expand on this idea to produce many more examples of `wandering' solutions and unbounded gradient growth on open sets of initial data in a variety of settings (again near stable stationary states). Using the mechanism of inviscid damping, a stronger characterization of small-scale creation – the loss of L^2 pre-compactness of vorticity – has been proved in a few examples for open sets of small perturbations of special stationary states in sufficiently smooth topologies. This was first done for the Couette flow in 𝕋×ℝ <cit.> followed by results near a point vortex in the plane in <cit.>, and strictly monotone shear flows in a channel satisfying a suitable spectral condition (no embedded eigenvalues) IJ20,IJ_MS_20,MZ20. 
§.§.§ Invariant measures of Euler If a Gaussian measure μ on H^σ is known to be invariant for the dynamics of a partial differential equations, for which a standard local Cauchy theory is known, then the global solutions Ω^ω(t) starting at time zero from Ω_0^ω (which has law μ) almost-surely satisfy the bound Ω^ω(t)_H^σ⩽ C_ω√(log (t+2)) , where the random constant C_ω satisfies 𝔼[e^cC_ω^2]<∞. This is known as the Bourgain globalization argument <cit.>. Such a bound for the growth of the Sobolev norms is in sharp contrast with (<ref>) and seems at odds with the existing norm growth results discussed in <ref> and the observed dynamics of generic initial conditions on the torus or the sphere (see numerical simulations in e.g. <cit.> and the references therein and the discussion in <ref> below). Note that if the invariant measure μ only has weaker moment bounds, or the measure μ is only quasi-invariant as in <ref> for example, then an upper bound for the growth of Sobolev norms similar to (<ref>) is known to hold with √(log(t+2)) replaced by a different function of time (usually a polynomial) depending on the moments, the PDE, or the quasi-invariance parameters. For all σ⩾ 1, in kuksin,latocca, invariant measures of the 2D Euler equations were constructed in H^σ from suitable limits of regularized problems using compactness methods. See also foldesSy,sy21 for the SQG model or the septic Schrödinger equations cases. On the support of these measures, one can show growth estimates of the type Ω(t)_H^σ⩽ C_ω (1+t)^α , but an important question remains: what is the support of such invariant measures? Very little is currently known (for example, we do not know if all such measures are in fact supported only on stationary solutions). The identification and characterization of `fluctuation-dissipation measures' (i.e. limits obtained by adding stochastic noise balanced against viscosity and taking inviscid limits, as in for example kuksin,foldesSy) and similar limiting measures is mostly open for nonlinear problems. See BCZGH16,BBPS18,BCZ15 for the much simpler problem of passive scalars where the corresponding measures can be classified, sometimes exactly. §.§.§ Random initial data and 2D turbulence The direct enstrophy cascade, i.e. the tendency for enstrophy to be transferred from large to small scales, is central to the modern understanding of 2D turbulence. Turbulence primarily concerns the inviscid limit of the Navier-Stokes equations (rather than directly on Euler) and the 2D turbulence problem is greatly complicated by the presence of the inverse energy cascade. See Fjortoft53,Kraichnan67,Leith1968,Batchelor1969 where predictions about this dual cascade were first made and MontKraichnan1980,tabeling2002two,BE12,Biskamp2003,BCZPSW20,Dudley2023 for more discussion. For empirical evidence of 2D turbulence, see the surveys BE12,Kellay2002,CerbusPhD2015, Lindborg99,Charney1971 for observations from atmospheric data, and e.g. Gharib1989,Rutgers2001,Kellay2017,Rivera2014,Paret1999 and the references therein for a few of the many experiments. We remark that physicists and engineers often consider turbulence in a statistically stationary state, i.e. where external driving produces a continuous input of enstrophy, either from the boundaries or through body forcing, and the t →∞ limit is taken before the inviscid limit (see discussions in <cit.>), which is somewhat different from a random initial data problem. 
All classical predictions, such as the analogues of the Kolmogorov 4/5 law (see Bernard99,Lindborg99,Yakhot99,CP17,XB18,BCZPSW20,Eyink96) and the Batchelor-Kraichnan power spectrum are most easily interpreted in this regime. Nevertheless, the random initial data problem is observed to lead to closely related dynamics referred to as “decaying turbulence”, where one sees a transient enstrophy cascade, often generically with a decaying Batchelor-Kraichnan spectrum (see e.g. Rivera2003,Cardoso1994,Hansen1998 and the references therein). The inverse cascade, at least on 𝕋^2 or the disk, is observed to lead to behavior sometimes referred to “condensation” or “vortex crystallization” in which most of the energy congregates into certain large-scale coherent structures; see for example <cit.> and <cit.>. In any case, the existing norm growth results discussed above, and especially Šverák's Conjecture, is intimately connected to the ability of 2D Euler to send enstrophy to small scales, as observed in 2D turbulence (Shnirelman's Conjecture <cit.>*Conjecture 4 is also intimately related to the observed condensation effect). The observed turbulent dynamics strongly suggest that the Euler equations initialized from data where the spatial and frequency scales are sufficiently independent are likely to create smaller scales provided the initial conditions are more regular than at least H^1 (where the nonlinear interactions of small scales could formally start to dominate) but more likely the result holds at least all the way down to the regularity of the Batchelor-Kraichnan spectrum (corresponding to H^- regularity, i.e. a_n = n^-2). It even seems plausible that anything more regular than the Gibbs measure should not be invariant unless it is extremely degenerate. These ideas motivate both <ref>. §.§ Plan of the paper In <ref> we gather some standard fact about the bilinear form of the Euler equations for the convenience of the reader. In <ref> we provide the reader with the main idea underlying the proof of <ref>, which is then reduced to proving two key statements : <ref>, which is the goal of <ref> and <ref> which is the goal of <ref>. Then <ref> is devoted to proving <ref>, <ref> and <ref>. §.§ Acknowledgments M. L. thanks Nikolay Tzvetkov for interesting discussions, as well as Maxime Breden for suggesting the interval arithmetic package for python. The research of J .B. was supported by NSF Award DMS-2108633. § PRELIMINARIES §.§ Notations used We write ⟨ n ⟩ = √(1 + |n|^2), where |n|^2=n_1^2+n_2^2 for n=(n_1,n_2)∈ℤ^2 and n^⊥=(-n_2,n_1). We recall that the Fourier transform of a regular-enough f : 𝕋^2 →ℝ^N is the function f̂ : ℤ^2 →ℝ^N defined by f̂ (n)=∫_𝕋^2f(x)e^-in· x dx , which we simply refer to as f̂_n for notational simplicity. We also write ℱ(u) in place û. The Fourier inversion formula therefore reads f(x)= 1/(2π)^2∑_n∈ℤ^2f̂_n e^in· x. We define the space H^σ(𝕋^2) as the set of functions f such that f_H^σ<∞ where f_H^σ(∑_n∈ℤ^2⟨ n ⟩ ^2σ|f̂_n|^2)^1/2. §.§ The bilinear form Let us denote U[Ω] the vector field 𝕋^2 →ℝ^2 defined by U[Ω]=∇^⊥(-Δ)^-1Ω which is uniquely defined in the class of divergence-free vector fields. We recall that thanks to Calderón-Zygmund estimates, the linear operator a ↦∇ U[a] is continuous L^p → L^p for any p∈ (1,∞). We define the following bilinear form for any smooth enough functions a, b : 𝕋^2 →ℝ by B(a,b)=1/2U[a]·∇ b + 1/2U[b]·∇ a, and use the notation B_1(Ω) B(Ω, Ω) = U[Ω]·∇Ω. 
We will also use the following notation: B_2(Ω)= B(Ω,B_1(Ω)) , B_3(Ω)= B(B_1(Ω),B_1(Ω)) + 2B(Ω,B_2(Ω)) , B'_3(Ω)= B(B_1(Ω),B_2(Ω)) , B̃_3(Ω)= B(B_2(Ω),B_2(Ω)) . Let a, b, Ω : 𝕋^2 →ℝ. Then: (i) (U[b]·∇ a,a)_L^2=0. (ii) For any s>0, and any 0<ε < s there holds |⟨ B(a,a),a ⟩_H^s| ⩽ C(s,ε) a_H^1+εa_H^s^2 , |⟨ B(a,b),a ⟩_H^s| ⩽ C(s,ε) b_H^s+1+εa_H^s^2. (iii) Similarly for any 0<ε < s there holds B_1(Ω)_H^s⩽ C(s,ε) Ω^2_H^s+1 , B_2(Ω)_H^s⩽ C(s,ε) Ω^3_H^s+2 , B_3(Ω)_H^s⩽ C(s,ε) Ω^4_H^s+3 , B'_3(Ω)_H^s⩽ C(s,ε)Ω^5_H^s+3 and B̃_3(Ω)_H^s⩽ C(s,ε)Ω^6_H^s+3 , Part (i) is a consequence of the fact that U[b] is divergence-free. (ii): Estimate (<ref>) is a consequence of the Kato-Ponce commutator estimate, (see for instance <cit.>), [⟨∇⟩^s,U[a] ·∇ ]= 𝒪_H^s → L^2(∇ U[a]_L^∞) and (i), as well as the Sobolev embedding H^1+ε↪ L^∞ and the fact that a ↦∇ U[a] is continuous as a linear map H^s → H^s. Estimate (<ref>) follows from similar computations and product estimates on U[a]·∇ b in H^s. Finally, (iii) follows from product laws in Sobolev spaces. §.§ Invariant measures bounds We recall the following almost-sure upper bounds for the growth of Sobolev norms for solutions to (<ref>) with initial data sampled on the support of an invariant Gaussian measure. Let σ>2 and μ a Gaussian measure on H^σ(𝕋^2) which is either invariant under the flow of (<ref>) or satisfies the quantitative quasi-invariance property of <ref>. Then there exists a deterministic constant c >0 and a random constant C_ω such that for almost every ω there holds for any t⩾ 0, Ω^ω(t)_H^σ⩽{[ C_ω√(log(2+t)) in the invariant case,; C_ω(1+t^η)^α/2√(log(2+t)) in the quasi-invariant case, ]. and where 𝔼[e^c C_ω^2] < ∞. The proof in the invariant case can be found in <cit.>, so let us illustrate the argument by proving the very similar case of the quantitative quasi-invariance case. Step 1: Large measure, finite time. Let T>0, ε >0 and λ = λ (ε, T) >0 which will be chosen later. By standard local well-posedness of (<ref>) in H^σ, there exists c>0 such that if τ = c/λ and Ω(t_0)_H^σ⩽λ then Ω(t)_H^σ⩽ 2λ for all t_0 ⩽ t ⩽ t_0 +τ. Let B_λ{Ω_0 ∈ H^σ: Ω_0_H^σ⩽λ}, so that by the fact that μ is Gaussian there holds μ(H^σ∖ B_λ) ⩽ e^-cλ^2. Let us introduce G_λ⋂_n=0^⌊ T/τ⌋ (Φ_nτ)^-1(B_λ), where Φ_t is the flow of (<ref>). We have μ(H^σ∖ G_λ) ⩽∑_n=0^⌊ T/τ⌋μ(Φ_nτ)^-1(H^σ∖ B_λ)) ⩽∑_n=0^⌊ T/τ⌋μ(H^σ∖ B_λ)^(1+(nτ)^η)^-α⩽ Tτ^-1 e^-cλ^2(1+T^η)^-α, where we have used (<ref>) in the second inequality. Therefore there exists a numerical constant C>0 such that μ(H^σ∖ G_λ) ⩽ cTλ e^-cλ^2(1+T^η)^-α⩽ CTe^-c/2λ^2(1+T^η)^-α. Let us choose λ∼ (1+T^η)^α/2√(log(T)+log(ε^-1)) and let A_T,ε = {ω: Ω_0^ω∈ G_λ}. Then any ω∈ A_T,ε is such that Ω^ω(t)_H^σ⩽ C (1+T^η)^α/2√(log(T)+log(ε^-1)) for all 0⩽ t ⩽ T, and ℙ(ω∉ A_T,ε) ⩽ε. Step 2: Global in time almost-sure bounds. We proceed with a standard Borel-Cantelli argument. Let us consider the probability set A_ε⋂_n⩾ 1 A_2^n,2^-nε, which satisfies ℙ(ω∉ A_ε) ⩽∑_n⩾ 1ℙ(ω∉ A_2^n,ε 2^-n) ⩽ε. Let ω∈ A_ε and t ⩾ 0. Fix n such that t ∈ [2^n-1,2^n]. Since ω∈ A_2^n,ε 2^-n, we have Ω^ω(t)_H^σ⩽ C (1+(2^n)^η)^α/2√(log(2^n)+log(ε^-12^n))≲√(log(ε^-1))(1+t^η)^α/2√(log(2+t)), which therefore holds for all t ⩾ 0 by definition of n. Finally, the set A⋃_k⩾ 1⋂_j⩾ kA_2^-j satisfies ℙ(A)=1 and for any ω∈ A,  (<ref>) holds. Moreover, {C_ω>λ}⊂ A_2^-j where j is such that √(j)⩾λ, therefore, ℙ(C_ω>λ)⩽ 2^-j⩽ e^-cλ^2, which ensures 𝔼[e^-cC_ω^2] < ∞. 
§ PROOF OF <REF> The main idea in the proof of <ref> is that on short time we expect the following expansion: Ω^ω(t) ≃Ω^ω_0 - tB_1(Ω^ω_0) + t^2 B_2(Ω^ω_0) , and that this ansatz is responsible for some change of the Sobolev norms. More precisely the crucial claim is the following: Let s > 0. Then, there holds Ω_0^ω -t B_1(Ω_0^ω) +t^2B_2(Ω_0^ω)_L^2_ωH^s^2 = Ω_0^ω_L^2_ωH^s^2 + γ t^2 + δ t^4 , where δB_3(Ω_0^ω)^2_L^2_ωH^s and γB_1(Ω_0^ω)^2_L^2_ωH^s +2𝔼[⟨Ω_0^ω,B_2(Ω_0^ω)⟩_H^s] , is the constant given by (<ref>). The ansatz (<ref>) is not an equality and the remainder is given by w^ω(t) Ω^ω(t) - Ω^ω_0 + tB_1(Ω_0^ω) - t^2B_2(Ω^ω_0). We next aim to show that this remainder is small for s ∈ (0,σ-3) (the loss of three derivatives is coming from the number of derivatives in the ansatz, i.e. in B_2). Given the rapid nonlinear instabilities inherent in the Euler equations it seems difficult to directly justify this ansatz in H^s. However, the assumption of invariance or quasi-invariance implies significant a priori estimates coming from globalization (Proposition <ref>). These are leveraged in H^s energy estimates to yield the following stability estimate. Assume that the law of Ω^ω_0 is given by μ_(a_n)∈ℳ^σ for some σ >3, and let s ∈ (0,σ -3). Suppose that μ_(a_n) is invariant or quasi-invariant in the sense of Assumption <ref>. Then there exists a (deterministic) time t_1 t_1(s,σ, (a_n))>0 and a (deterministic) constant C C(s, σ,(a_n))>0 such that w^ω(t)_L^2_ωH^s⩽ Ct^3 for all t ∈ [0,t_1] , where w^ω is defined by (<ref>). Once this a priori estimate on the remainder is obtained, one can reach a contradiction with invariance using the fact that under the assumption of invariance Ω_0^ω_L^2_ωH^s^2 = Ω^ω(t)_L^2_ωH^s^2. In the quasi-invariant case we will use a similar result: Let σ >0 and μ∈ℳ^σ satisfying <ref>. Let s∈ (0,σ]. There exists a time t_1=t_1(s,(a_n),α,η)>0 and C=C(s,(a_n),α,η)>0 such that for all 0⩽ t ⩽ t_1 there holds |𝔼_μ_t[Ω^2_H^s] - 𝔼_μ[Ω^2_H^s] |⩽ Ct^η, where we recall that η >2 is the constant appearing in <ref>. Recall that there exists c_1>0 such that for all λ >0, there holds μ(Ω_H^s>λ) ⩽ e^-c_1λ^2. We will use also use two lower bounds: there exists λ _0 >0 such that for all λ⩾λ_0 we have μ(Ω_H^s>λ) ⩾ e^-c_2λ^2, and there exists c_3>0 such that for any λ∈ [0,λ_0] there holds μ(Ω_H^s>λ) ⩾ c_3λ^2. Let us start with explaining how to obtain (<ref>): assume that a_(1,0)≠ 0 (this can be done without loss of generality, up to considering some n∈ℤ^2 such that a_n ≠ 0). Write g_(1,0)^ω=g^ω + i h^ω where g^ω and h^ω are independent standard real-valued Gaussian random variables. Then a_(1,0)g^ω_(1,0) + a_(-1,0)g^ω_(-1,0)=2a_(1,0)g^ω so that we infer that {|g^ω|>λ/|a_1,0|}×{∑_n ∉{(1,0),(-1,0)} a_ng_n^ω_H^s⩽λ}⊂{Ω_H^s>λ}. Using independence we find μ(Ω_H^s>λ) ⩾ℙ(|g^ω|>λ/|a_1,0|)ℙ(∑_n ∉{(1,0),(0,1)} a_ng_n^ω_H^s⩽λ) ⩾ℙ(|g^ω|>λ/|a_1,0|) (1-e^-c_1λ^2). The conclusion now follows from the fact that for λ large enough there holds (1-e^-c_1λ^2)⩾1/2 for λ large enough and the fact that for Λ large enough there holds ℙ(|g^ω|>Λ) ⩾ e^-c_2Λ^2. This last inequality follows from the fact that: ∫_Λ^∞e^-u^2 du = e^-Λ^2/2Λ - ∫_Λ^∞e^-u^2/2u^2 du ∼e^-Λ^2/2Λ⩾ e^-2Λ^2, as Λ→∞, which results from the fact that the remainder integral is negligible with respect to the left-hand side integral. To prove (<ref>), we use (<ref>) and that λ⩽λ_0 to bound μ(Ω_H^s>λ) ⩾ℙ(|g_1,0^ω|>λ_0/|a_1,0|)(1-e^-c_1λ^2) ⩾ c_3λ^2 for a small enough c_3=c_3(λ_0)>0 for all λ∈[0,λ_0]. 
In order to make the following computations simpler in the proof of the lemma, let us observe that since (1+|t|^η)^-α=1-α |t|^η + 𝒪(|t|^2η) as t → 0, one can write (1+|t|^η)^-α=1-α |t|^η(1+K(t)), where K(t) satisfies 1/2⩽ 1 + K(t) ⩽ 2 for all t∈[-t_0,t_0], with t_0=t_0(α,η)>0. We turn to the proof of the lemma and start by writing 𝔼_μ_t[Ω^2_H^s] - 𝔼_μ[Ω^2_H^s] = 2∫_0^∞λ (μ_t(A_λ)-μ(A_λ)) λ where A_λ = {Ω∈ H^s: Ω_H^s>λ}. Next, we use (<ref>) as well as (<ref>), and obtain 𝔼_μ_t[Ω^2_H^s] - 𝔼_μ[Ω^2_H^s] ⩽ 2∫_0^∞λ (μ(A_λ)^1-α t^η(1+K(t))-μ(A_λ)) λ ⩽ 2∫_0^∞λ (μ(A_λ)^1-2α t^η-μ(A_λ)) λ = 2∫_0^∞μ(A_λ)(exp(2α t^ηlog(μ(A_λ)^-1)-1) λλ. To study the behavior of this integral as t → 0 we start with the contribution λ∈ [0,λ_0] for which (<ref>) implies: ∫_0^λ_0μ(A_λ)(exp(2α t^ηlog(μ(A_λ)^-1))-1) λλ⩽∫_0^λ_0(exp(2α t^ηlog(c_3^-1λ^-2))-1) λλ. Writing C=c_3^-1 which we can take larger than 1 and λ_0^2, we arrive at ∫_0^λ_0μ(A_λ)(exp(2α t^ηlog(μ(A_λ)^-1))-1) λλ⩽∫_0^λ_0(C^2α t^ηλ^-4α t^η-1)λλ. Note that we only consider small values of t, namely t⩽ t_1(α, η) to ensure the convergence of the integral. Let us now write ∫_0^λ_0(C^2α t^ηλ^-4α t^η-1)λλ⩽λ_0t^η∫_0^λ_0(Cλ^-2)^2α t^η-1/t^ηλ =: λ_0t^η H(t^η). We immediately compute H(s)=C^2α sλ_0^1-4α s/s-4α s^2 - λ_0/s which satisfies sup_s∈(0,1] |H(s)| < ∞, and therefore ∫_0^λ_0μ(A_λ)(exp(2α t^ηlog(μ(A_λ)^-1))-1) λλ⩽ C(η,α)t^η for 0⩽ t_1(α,η). We turn to the estimation of the other part of the integral using (<ref>) to obtain ∫_λ_0^∞μ(A_λ)(exp(2α t^ηlog(μ(A_λ)^-1)-1) λλ ⩽∫_λ_0^∞ e^-c_1λ^2(exp(2c_2α t^ηλ^2)-1) λλ ⩽∫_0^∞ e^-c_1λ^2(exp(2c_2α t^ηλ^2)-1) λλ, integral that we split depending on whether 2c_2α t^ηλ ^2>1 or not. Denoting C^2=2c_2α, we have ∫_0^∞ e^-c_1λ^2(exp(2c_2α t^ηλ^2)-1) λλ⩽∫_0^Ct^-η/2 e^-c_1λ^2(exp(2c_2α t^ηλ^2)-1) λλ +∫_Ct^-η /2^∞ e^-c_1λ^2(exp(2c_2α t^ηλ^2)-1) λλ ⩽ 2ec_2α t^η∫_0^Ct^-η/2e^-c_1λ_2λ^3λ + ∫_Ct^-η /2^∞ e^(2c_2α t^η-c_1)λ^2λλ, and we can now use 2ec_2α t^η∫_0^Ct^-η/2e^-c_1λ^2λ^3λ≲ t^η, and that since we can restrict to small t such that 2c_2α t^η⩽ c_1/2 we have ∫_Ct^-η /2^∞ e^(2c_2α t^η-c_1)λ^2λλ ⩽∫_Ct^-η /2^∞ e^-c_1/2λ^2λλ ⩽ e^-c_1C^2/4t^-η∫_Ct^-η /2^∞ e^-c_1/4λ^2λλ = 𝒪(e^-c_1C^2/4t^-η) = 𝒪(t^η). The estimate of _μ[Ω^2_H^s] - _μ_t[Ω^2_H^s] is performed using the same techniques as above. We postpone the proof of <ref> to <ref> and the proof of <ref> to <ref>, and first finish the proof of the main theorem. We first use <ref> and a Taylor expansion to write Ω^ω(t)-w^ω(t)_L^2_ωH^s = √(Ω_0^ω_L^2_ωH^s^2 + γ t^2 +δ t^4) =Ω_0^ω_L^2_ωH^s + γ/2Ω_0_L^2_ωH^st^2 + 𝒪(t^4) , so that by the triangle inequality and <ref> there holds: γ/2Ω_0^ω_L^2_ωH^st^2 + 𝒪(t^3) ⩽Ω^ω(t)_L^2_ωH^s - Ω_0^ω_L^2_ωH^s⩽γ/2Ω_0^ω_L^2_ωH^st^2 + 𝒪(t^3) for 0 ⩽ t ⩽ t_1, so that for times 0 ⩽ t ⩽ t_2=t_2(s,σ,(a_n)) ≪ t_1 we can write c_1γ/Ω_0^ω_L^2_ωH^st^2 ⩽Ω^ω(t)_L^2_ωH^s - Ω^ω_0_L^2_ωH^s⩽ c_2γ/Ω^ω_0_L^2_ωH^st^2 with (c_1,c_2)=(3/4,1/4) if γ <0 and (1/4,3/4) if γ >0. In the invariant case there holds Ω^ω(t)_L^2_ωH^s - Ω^ω_0_L^2_ωH^s = 0 which contradicts (<ref>). In the quasi-invariant case, replace (<ref>) with (<ref>) to obtain a contradiction. § THE ORTHOGONALITY ARGUMENT The goal of this section is to prove <ref>. For the sake of simplicity we write Ω in place of Ω_0^ω. This will not lead to any confusion since the arguments in this section don't involve time evolution. We write Ω = ∑_n∈ℤ^2a_ng_n^ωe^in· x = 1/(2π)^2∑_n ∈ℤ^2Ω̂_n e^in· x. Similarly, let us write U[Ω]=(U^1,U^2) where U^j=∑_n∈ℤ^2Û_n^je^in· x. 
With these notations, the relationship ∇∧ U[Ω] = Ω reads Ω̂_n= i(n_1Û_n^2 - n_2Û^1_n) and the relation ∇· U[Ω] = 0 reads n_1 Û^1_n + n_2 Û^2_n=0 so that we obtain Û^1_n = - n_2/i|n|^2Ω̂_n and Û^2_n = n_1/i|n|^2Ω̂_n. Writing that B_1(Ω)=U^1∂_1Ω + U^2∂_2Ω we obtain B_1(Ω)(n) = ∑_k+m=nm· k^⊥/|k|^2Ω̂_kΩ̂_m = 1/2∑_k+m=n m· k^⊥(1/|k|^2 - 1/|m|^2)Ω̂_kΩ̂_m , and more generally B(α, β)(n)=1/2∑_k+m=n m· k^⊥(1/|k|^2 - 1/|m|^2)α̂_kβ̂_m . The following computation is a key step in the proof of <ref>: Assume that (a_n)_n∈ℤ^2∈ h^s+1 for some s⩾ 0. Let us recall that B_2(Ω)=B(Ω,B_1(Ω,Ω)). Then there holds 𝔼[⟨Ω, B_2(Ω)⟩_H^s] = 1/2(2π)^4∑_n,q ∈ℤ^2⟨ n ⟩ ^2s|a_n|^2|a_q|^2 (q · n^⊥)^2 (1/|n|^2 - 1/|q|^2)(1/|q|^2-1/|q+n|^2) . We will decompose B_1(Ω) to obtain the claimed result. More precisely, we write: 𝔼[⟨Ω, B_2(Ω)⟩_H^s] = 1/2𝔼[⟨Ω, U[Ω]·∇ B(Ω,Ω)⟩_H^s] + 1/2𝔼[⟨Ω, U[B(Ω,Ω)]·∇Ω⟩_H^s] =: 1/2(E_1 + E_2). Writing U[Ω]·∇ B(Ω,Ω)(n) = ∑_k + m = nm· k^⊥/|k|^2 a_kg_k^ωB(Ω,Ω)(m) = ∑_k + m = n p + q = mm· k^⊥q· p^⊥/|k|^2|p|^2 g_k^ωg_p^ωg_q^ωa_ka_pa_q, and using Plancherel's theorem, we find (2π)^4E_1 = ∑_n ∈ℤ^2⟨ n ⟩^2s∑_k + m = n p + q = mm· k^⊥q· p^⊥/|k|^2|p|^2a̅_na_ka_pa_q𝔼[g̅_n^ωg_k^ωg_p^ωg_q^ω]. Similarly ℱ(U[B(Ω,Ω)]·∇Ω)(n) = ∑_k+m =nm· k^⊥/|k|^2B(Ω,Ω)(k)a_mg_m^ω = ∑_k + m = n p + q = km· k^⊥/|k|^2q · p^⊥/|p|^2g^ω_mg_p^ωg_q^ωa_ma_pa_q so that (2π)^4E_2 = ∑_n∈ℤ^2⟨ n ⟩^2s∑_k + m = n p + q = km· k^⊥/|k|^2q · p^⊥/|p|^2a̅_na_ma_pa_q𝔼[g̅_n^ωg^ω_mg_p^ωg_q^ω]. Now we make the following crucial observation: because the g_j^ω are complex Gaussian random variables, there holds (q· p^⊥)𝔼[g̅_n^ωg_k^ωg_p^ωg_q^ω] = 0 unless the elements of {n, k, p, q} are paired two-by-two (note that the cases |n|=|k|=|p|=|q| are such that q· p^⊥=0). There are two possible parings: the first case is (n,k)=(p,-q), n≠± k, and the second case is (n,k)=(q,-p), n≠± k. Remark that the pairing (n,p)=(k,-q) is such that q· p^⊥=0 therefore giving a zero contribution. Let us write (2π)^4E_1 = (2π)^4(∑_n∈ℤ^2⟨ n ⟩^2s(E_11(n)+E_12(n)) where E_11(n) (resp. E_12(n)) corresponds to the first (resp. second) paring case. For the first contribution we find E_11(n) = ∑_q ∈ℤ^2(q · n^⊥)^2/|n|^2|q|^2|a_n|^2|a_q|^2, where we have used that m· k^⊥=q · n^⊥. Similarly E_12(n)= -∑_p∈ℤ^2(p · n^⊥)^2/|p|^4|a_p|^2|a_n|^2, so that finally (re-labeling p to q and summing both contributions) we obtain (2π)^4E_1=-∑_n,q ∈ℤ^2⟨ n⟩^2s(q · n^⊥)^2|a_n|^2|a_q|^2 (1/|q|^4-1/|q|^2|n|^2). Writing similarly (2π)^4E_2 = (2π)^4(∑_n∈ℤ^2⟨ n ⟩^2s(E_21(n)+E_22(n)) we find A_21(n)=-∑_q∈ℤ^2(q · n ^⊥)^2/|n+q|^2|n|^2|a_n|^2|a_q|^2 and A_22(n)=∑_p∈ℤ^2(p· n^⊥)^2/|p+n|^2|p|^2|a_n|^2|a_p|^2 so that after the change of variables (n,p) ↦ (q,n) in A_22, we obtain A_2 = -∑_n,q ∈ℤ^2⟨ n⟩^2s(q· n^⊥)^2|a_n|^2|a_q|^2(1/|q+n|^2|n|^2 - 1/|q+n|^2|q|^2), which is enough to conclude the proof. We also need the simpler computation: Assuming (a_n)_n∈ℤ^2∈ h^s+1 for some s⩾ 0, there holds 𝔼[B_1(Ω)_H^s^2] = 1/(2π)^4∑_n,q ∈ℤ^2⟨ n + q ⟩^2s(q· n^⊥)^2/2(1/|q|^2-1/|n|^2)^2|a_q|^2|a_n|^2 . First we write B_1(Ω)_H^s^2 = 1/(2π)^4∑_n∈ℤ^2⟨ n⟩ ^2s∑_k+m=n k' + m' = n c_k,mc_k',m'a_ka_ma̅_k'a̅_m'g_k^ωg_m^ωg̅_k'^ωg̅_m'^ω , where c_k,m=m· k^⊥/2(1/|k|^2 - 1/|m|^2). 
Taking the expectation in the above and using that c_k,m=0 if |k|=|m|, there are two possible parings of {k,m,k',m'} to consider: (k,m)=(k',m') and (k,m)=(m',k') both leading to the same contribution, therefore (2π)^4𝔼[B_1(Ω)_H^s^2] = 2∑_n∈ℤ^2⟨ n ⟩ ^2s∑_k+m=n(k· m^⊥)^2/4(1/|k|^2-1/|m|^2)^2|a_m|^2|a_k|^2 = ∑_n,q ∈ℤ^2⟨ n + q ⟩^2s(q· n^⊥)^2/2(1/|q|^2-1/|n|^2)^2|a_q|^2|a_n|^2 , where in the last step we just re-labeled: (n,k,m) ↦ (n+q,q,n). We go back to the notation Ω_0^ω in the following proof. Let s>0. Expanding the squared H^s norm and taking the expectation we compute Ω_0^ω -tB_1(Ω_0^ω) + t^2B_2(Ω_0^ω)^2_L^2_ωH^s = Ω_0^ω^2_L^2_ωH^s + γ t^2 + δ t^4 -2t 𝔼[⟨Ω_0^ω,B_1(ω_0^ω)⟩_H^s] -2t^3 𝔼[⟨ B_1(Ω_0^ω),B_2(ω_0^ω)⟩_H^s] where δ = B_2(Ω_0^ω)^2_L^2_ωH^s and γ = 𝔼[B_1(Ω_0^ω)_H^s^2 + 2⟨Ω_0^ω, B_2(Ω_0^ω)⟩_H^s] Using <ref> we have γ = ∑_n,q ∈ℤ^2(|a_n||a_q| (q · n^⊥))^2/(2π)^4( ⟨ n ⟩ ^2s(1/|n|^2 - 1/|q|^2)(1/|q|^2-1/|q+n|^2) + ⟨ n + q ⟩^2s/2(1/|q|^2-1/|n|^2)^2) = ∑_n,q ∈ℤ^2 b_n,q(⟨ n ⟩ ^2s(1/|n|^2 - 1/|q|^2)(1/|q|^2-1/|q+n|^2) + ⟨ n + q ⟩^2s/2(1/|q|^2-1/|n|^2)^2), where b_n,q = 1/(2π)^4(|a_n||a_q| (q · n^⊥))^2(1/|n|^2-1/|q|^2). Observe now that b_q,n=-b_n,q so that using the symmetry between n and q, we finally find γ = ∑_n,q ∈ℤ^2 b_n,q( ⟨ n ⟩ ^2s/2(1/|q|^2-1/|q+n|^2) + ⟨ q ⟩ ^2s/2(1/|q+n|^2-1/|n|^2) + ⟨ n + q ⟩^2s/2(1/|n|^2-1/|q|^2)), which is (<ref>), as claimed. It remains to observe that 𝔼[⟨Ω_0^ω,B_1(ω_0^ω)⟩_H^s]=0 and 𝔼[⟨ B_1(Ω_0^ω),B_2(ω_0^ω)⟩_H^s] =0. In order to do so, observe that computations used in the proof of <ref> imply [⟨Ω_0^ω, B_1(Ω_0^ω)⟩_H^s] = ∑_n∈ℤ^2 k+m=n⟨ n⟩^2sm· k^⊥/|k|^2a_ka_ma̅_n [g^ω_kg^ω_mg̅^ω_n] = 0, since [g^ω_kg^ω_mg̅^ω_n]=0 for all values of k, m, n ∈ℤ^2. it follows that 𝔼[⟨Ω_0^ω, B_1(Ω_0^ω)⟩]=0. For the same reason, there also holds that [⟨ B_1(Ω_0^ω),B_2(Ω_0^ω) ⟩_H^s] =0. § THE PERTURBATIVE ARGUMENT Let us recall that w^ω(t), defined by Ω^ω(t)=Ω^ω_0 - tB_1(Ω^ω_0) + t^2B_2(Ω^ω_0) + w^ω(t) , is the remainder in the ansatz (<ref>). In particular w^ω(0)=0. First, by definition of w^ω(t), we have ∂_t Ω^ω(t)= - B_1(Ω^ω_0) + 2t B_2(Ω^ω_0) + ∂_tw^ω(t) . Expanding B(Ω^ω(t),Ω^ω(t)), using that Ω^ω(t) solves (<ref>) we find that w^ω(t) (which will be denoted w, omitting the ω superscripts) satisfies the following evolution equation: ∂_t w(t) = -B(w(t),w(t)) - 2B(Ω_0,w(t)) + 2t B(B_1(Ω_0),w(t)) - t^2 (B(B_1(Ω_0),B_1(Ω_0)) + 2B(Ω_0,B_2(Ω_0)) )_B_3(Ω_0) - 2t^2 B(B_2(Ω_0),w(t)) + 2t^3 B(B_1(Ω_0),B_2(Ω_0))_B'_3(Ω_0) - t^4B(B_2(Ω_0),B_2(Ω_0))_B̃_3(Ω_0). Let us make a few comments: * Our proof proceeds by performing energy estimates in H^s, targeting an estimate of the form w(t)_H^s⩽ C t^3e^ct on short time (the exponential accounts for the use of the Grönwall inequality). One major threat to obtain such estimates will come from the random constants appearing in the estimates. For example we need to avoid factors like e^Ω_0_H^σ^3 appearing in our final estimate (which would not be integrable with respect to μ). In particular, we have to pay attention when estimating (<ref>). We want an estimate of the form w(t)_H^s⩽ Ct^3, we expect that (<ref>) will be part of the main term in the estimate, but that (<ref>) will contribute negligibly. * We point out that the usual energy estimate on the term B(w(t),w(t)) would bring a potential double-exponential bound, which on short-time is bounded. In our case we must take advantage of w(0)=0, so that the nonlinear term B(w(t),w(t)) should be viewed as a small contribution. 
* Because the method we employ is to use the ansatz (<ref>) for Ω^ω(t) we will need to estimate B_3(Ω_0), B̃_3(Ω_0), B'_3(Ω_0), and it is the reason why we require σ >s+3. In this proof we only work with t ⩽ 1, and we will later restrict to shorter intervals of time. We start by deriving a first a priori bound for w^ω(t) by using the triangle inequality, <ref> and <ref>, w^ω(t)_H^s' ⩽Ω^ω(t)_H^s' + Ω^ω_0_H^s' + tB_1(Ω^ω_0)_H^s' + t^2B_2(Ω^ω_0)_H^s' ⩽ C_ω√(log(2+t)) + C(1+t+t^2) (Ω^ω_0_H^s' + Ω^ω_0^2_H^s'+1 + Ω^ω_0_H^s'+2^3) ⩽ C(C_ω + (1+Ω_0^ω_H^s'+2^3))=:D_ω for all 0 ⩽ t ⩽ 1 and all s'⩽σ - 2. We will later use that the random constant D_ω lies in L^p_ω for all 1⩽ p< ∞. However, note that it lacks exponential moments, therefore directly plugging this estimate in (<ref>) and performing usual energy estimates would yield a bound on w(t)_H^s which would not lie in L^2_ω. Let us now move to the energy estimate. We compute 1/2d/dtw(t)^2_H^s = (∂_t w(t),w(t))_H^s using (<ref>) to decompose ∂_t w(t). First, we use <ref> to estimate (<ref>) by |(B(w(t),w(t)),w(t))_H^s| ⩽ C(s,ε)w(t)_H^s^2w(t)_H^1+ε, where C(s,ε) is a global deterministic constant. For the linear terms in (<ref>) we rely on <ref> again to bound (B(Ω_0,w(t)),w(t))_H^s⩽ C(s,ε)Ω_0_H^s+1+εw(t)_H^s^2 , and (B(B_1(Ω_0),w(t)),w(t))_H^s⩽ C(s,ε)B_1(Ω_0)_H^s+1+εw(t)_H^s^2 , which together imply (-2B(Ω_0,w(t)) +2t B(B_1(Ω_0),w(t)), w(t))_H^s⩽ C(s,ε)(1+Ω_0_H^s+2+ε^2) w(t)^2_H^s. To estimate (<ref>) we use the Cauchy-Schwarz inequality and <ref> to obtain (B_3(Ω_0),w(t))_H^s ⩽B_3(Ω_0)_H^sw(t)_H^s⩽ C(s,ε) Ω_0_H^s+3^4 w(t)_H^s. Another application of <ref> yields (B(B_2(Ω_0),w(t)),w(t))_H^s ⩽B_2(Ω_0)_H^s+1+εw(t)_H^s ⩽ C(s,ε)Ω_0^3_H^s+3+εw(t)_H^s^2 ⩽ D_ωΩ_0^3_H^s+3+εw(t)_H^s where in the last inequality we have used (<ref>). Therefore (-t^2B_3(Ω_0)-2t^2B(B_2(Ω_0),w(t)),w(t))_H^s⩽ D_ωt^2(1+Ω_0^4_H^s+3+ε)w(t)_H^s . Finally, also using <ref> we can also bound the remainder terms (<ref>) and obtain (2t^3B'_3(Ω_0)-t^4B̃_3(Ω_0),w(t))_H^s⩽ C(s,ε)t^3(1+Ω_0^6_H^s+3)w(t)_H^s. Combining (<ref>), (<ref>) and (<ref>) we finally obtain d/dtw(t)^2_H^s ⩽w(t)^2_H^sw(t)_H^1+ε + C(s,ε)(1+Ω_0_H^s+2+ε^2) w(t)^2_H^s + D_ωt^2(1+Ω_0^4_H^s+3 + ε)w(t)_H^s + C(s,ε)t^3(1+Ω_0^6_H^s+3)w(t)_H^s. Using t⩽ 1, dividing by w(t)_H^s and collecting the constants we find d/dtw(t)_H^s⩽w(t)_H^sw(t)_H^1+ε + k_ωw(t)_H^s + K_ωt^2 where 𝔼[e^ck_ω]<∞ for c small enough and K_ω∈ L^p_ω for all finite p. Let us now use (<ref>) for 0 ⩽ t ⩽ 1 in  (<ref>) to bound d/dtw(t)_H^s⩽ D'_ω , where D'_ω∈ L^p_ω for all finite p. Note that we have used that 1+ε⩽σ -2 in order to use (<ref>). Integrating this differential bound yields w(t)_H^s⩽ D'_ωt for 0⩽ t ⩽ 1. Using the latter inequality (also used with s=1+ε) in (<ref>), we obtain the differential inequality d/dtw(t)_H^s⩽ k_ωw(t)_H^s + K'_ωt^2, where K'_ω (D'_ω)^2 + K_ω. Therefore the Grönwall inequality implies w(t)_H^s⩽1/3K'_ωt^3 exp(k_ωt) ⩽1/3K'_ωt^3 exp(c/2k_ω) if 0 ⩽ t ⩽c/2. Therefore w(t)_L^2_ωH^s⩽1/3𝔼[K_ω^2exp(ck_ω)]^1/2t^3, and the claim is proven with C=1/3𝔼[(K'_ω)^2exp(ck_ω)]^1/2<∞. § GENERICITY OF THE CONDITION GAMMA §.§ First examples of non-invariant Gaussian measures We start with providing a very simple example of (a_n)_n∈ℤ^2 for which γ_s,(a_n)≠ 0. Let a_n=1 if n∈{(1,0),(-1,0),(0,2),(0,-2)}, and a_n=0 otherwise. If s >1 then γ_s,(a_n)≠ 0. Therefore the associated measure is not-invariant under the flow of (<ref>) (nor quasi-invariant under <ref>.) We can compute explicitly γ_s,(a_n). 
Using the notation of (<ref>) we find: γ = 3(6^s/2 + 3 · 2^s/10 - 4 · 3^s/5)=3/20(15· 6^s - 16· 5^s + 2^s) , and we need to check that this does not vanish. Note that s > 1 we observe that (6/5)^s>6/5 > 16/15 which implies 15· 6^s - 16· 5^s>0 so that γ >0. As a corollary for all σ >3, one can construct a sequence (a_n)_n∈ℤ^2 such that the associated measure μ_(a_n) is `exactly' supported on H^σ and γ_s,(a_n)≠ 0 for all s∈ (0,σ]. Let σ > 3. There exists μ∈ℳ^σ which is not invariant under the flow of (<ref>) and which satisfies μ(H^σ) =1, and μ(H^σ +ε)=0 for all ε >0. We first remark that if (a_n)_n∈ℤ^2∈ h_rad^σ∖⋃_ε >0h_rad^σ +ε with non vanishing a_n, then the associate measure μ_(a_n) satisfies μ(H^σ) =1, and μ(H^σ +ε)=0. That μ(H^σ) =1 holds follows from (<ref>). The fact that for any ε >0 there holds μ(H^σ +ε)=0 is a more difficult but a standard fact, we refer to the Appendix of <cit.> for a detailed proof. Let b_n=1/⟨ n ⟩ ^σ +1log(⟨ n⟩), which belongs to h_rad^σ∖⋃_ε >0h_rad^σ +ε. Let also (a_n)_n∈ℤ^2 be the sequence from <ref> and let us consider c_n=a_n+ε b_n. Then by expanding γ_s,(c_n) we can check that γ_s,(c_n) = γ_s,(a_n) + 𝒪(ε), and since γs,(a_n) >0, we can take ε small enough so that γ_s,(c_n)>0. §.§ Proof of <ref> The proof essentially follows from the following continuity lemma. Let s ⩾ 0 and σ⩾ s+1. The map γ : (a_n)_n∈ℤ^2⟼γ_s,(a_n), where γ_s,(a_n) is defined by (<ref>) is continuous as an operator h^σ(ℤ^2) →ℝ. Let us prove the continuity using the sequential characterization: we take a^k=(a^k_n)_∈ℤ^2 converging to a=(a_n)_∈ℤ^2 in h^σ(ℤ^2). Then let us write γ_a^k = ∑_n,q ∈ℤ^2 |a^k_n|^2|a^k_q|^2 α_n,q where α_n,q=(q· n^⊥)^2(1/|n|^2 - 1/|q|^2)β_n,q = 𝒪(⟨ n⟩^2(s+1)⟨ q⟩^2(s+1)), where β_n,q is defined in (<ref>). This crude bound will be enough for our purpose. Using the identity |a_n^k|^2|a_q^k|^2-|a_n|^2|a_q|^2 = (|a_n^k||a_q^k| - |a_n||a_q|)(|a_n^k||a_q^k| + |a_n||a_q|) , and using the Cauchy-Schwarz inequality we infer |γ_a^k-γ_a|^2 ⩽∑_n,q ∈ℤ^2 (|a_n^k||a_q^k| - |a_n||a_q|)^2⟨ n⟩^2(s+1)⟨ q⟩^2(s+1) ×∑_n,q ∈ℤ^2 (|a_n^k||a_q^k| + |a_n||a_q|)^2 ⟨ n⟩^2(s+1)⟨ q⟩^2(s+1) . First, we bound ∑_n,q ∈ℤ^2 (|a_n^k||a_q^k| + |a_n||a_q|)^2 ⟨ n⟩^2(s+1)⟨ q⟩^2(s+1)⩽a^k_h^s+1^4 + a_h^s+1^4 which is bounded (in k) because s+1 ⩽σ. For the other term we write (|a_n^k||a_q^k| - |a_n||a_q|)^2 ≲ |a_q|^2|a_n^k-a_n|^2 + |a_n^k|^2|a_q^k-a_q|^2 , so that we obtain ∑_n,q ∈ℤ^2 (|a_n^k||a_q^k| - |a_n||a_q|)^2⟨ n⟩^2(s+1)⟨ q⟩^2(s+1) ≲a_h^s+1^2a^k-a_h^s+1^2 + a_k_h^s+1^2a^k-a_h^s+1^2 =𝒪(a^k-a_h^σ^2) which goes to zero because s+1⩽σ, therefore proving the continuity. The continuity in the first part of <ref> now follows from <ref> and the fact that A^σ=γ^-1(ℝ∖{0}), therefore open as the reciprocal image of an open set by a continuous function. Also, <ref> holds with h^σ_rad in place of h^σ. Finally let us explain why A^σ has empty interior in h^σ (from which the corresponding result on 𝒜^σ will follow). Let (a_n)_n∈ℤ^2 such that γ_(a_n)=γ_s,(a_n)= 0, and let (b_n)_n∈ℤ^2 radially symmetric such that γ_(b_n) = γ_s,(b_n)≠ 0, such as constructed in <ref>. One can therefore consider c_n=a_n+ε b_n for ε≪ 1. There holds c-a_h^σ = 𝒪(ε), and expanding γ_(c_n) in terms of ε_n we find γ_(c_n) = γ_(a_n) +κ_1 ε + κ_2 ε^2 + κ_3 ε^3 + ε^4 γ_(b_n) for some real constants κ_1, κ_2 and κ_3. By assumption, there holds γ_(a_n)=0. If κ_1 ≠ 0 then one just needs to take ε small enough so that |κ_2 ε^2 + κ_3 ε^3 + ε^4 γ_(b_n)| ⩽κ_1ε/2 and therefore γ_(c_n)≠ 0. 
If κ_1=0 we can do the same analysis on κ_2 and so on, so that the only remaining case is κ_1=κ_2=κ_3=0 in which case we use that γ_(b_n)≠ 0. Continuity in <ref> Part (ii) directly follows from (<ref>): let a^k=(a^k_n)_n∈ℤ^2 which converges to a=(a_n)_n∈ℤ^2 in h^σ (resp. h^σ_rad) and let μ_kμ_a^k, μμ_a. We can compute for any 1-Lipschitz function φ, we compute |∫_H^σφ(u)d(μ-μ_k)(u) | = |[φ(∑_n∈ℤ^2 a_n^kg_n^ωe^in· x) -φ(∑_n∈ℤ^2 a_ng_n^ωe^in· x)]| ⩽[∑_n∈ℤ^2 a_n^kg_n^ωe^in· x - ∑_n∈ℤ^2 a_ng_n^ωe^in· x_H^σ(𝕋^2)] ⩽∑_n∈ℤ^2 a_n^kg_n^ωe^in· x - ∑_n∈ℤ^2 a_ng_n^ωe^in· x_L^2_ωH^σ , where in the first line of the above computation we have used the push-forward definition of the measures μ_k and μ. Taking the supremum over such functions φ yields W_1(μ,μ_k) ⩽a^k-a_H^σ which goes to zero as k→∞. §.§ Proof of <ref> Our main tool is a generalization of the Kakutani criterion. Kakutani <cit.> characterized the equivalence between tensor products of equivalent measures. For Gaussian measures in Hilbert space, we use the following generalization due to Feldman and Hájek: Let μ and ν be Gaussian random variables on a Hilbert space X with means m_μ, m_ν and co-variance operators C_μ, C_ν. Then μ and ν are either equivalent or mutually singular, and equivalence holds if and only if the three following conditions are met: * μ and ν have the same Cameron-Martin space H=C_μ^1/2(X)=C_ν^1/2(X). * m_μ-m_ν∈ H. * (C_μ^-1/2C_ν^1/2)(C_μ^-1/2C_ν^1/2)^*- I is a Hilbert-Schmidt operator. We will only use criterion (3) to prove that two given Gaussian measures are mutually singular. In particular as we consider very simple Gaussian measures on the Hilbert space H^σ(𝕋^2) with diagonal co-variance operators, we obtain the following useful corollary. Write e_n(x)=e^in· x, and let us consider the two Gaussian measures μ = ∑_n ∈ℤ^2 h_ne_n and ν = ∑_n ∈ℤ^2h̃_ne_n on H^σ(𝕋^2), where h_k, h̃_k are centered Gaussian variables of variances σ_k, σ̃_k. If ∑_n∈ℤ^2(σ̃_n^2/σ_n^2-1)^2 = ∞ , then μ and ν are mutually singular. In order to prove <ref>, let us fix a sequence (a_n)_n∈ℤ^2 such that its associated measure belongs to ℳ^σ_rad and such that γ_s,(a_n)≠ 0. Let us consider a sequence (ε^ξ_n)_n∈ℤ^2 of independent and identically distributed Bernoulli random variables satisfying ℙ(ξ : ε^ξ_n = 1)=ℙ(ξ : ε^ξ_n = -1) = 1/2. We introduce b^ξ_n=a_n(1+ε^ξ_n/2⟨ n⟩)^1/2, and consider the associated measures μ_ξμ_(b^ξ_n). We check that there is a set Ξ_1 such that ℙ(ξ∈Ξ_1)>0 and for all ξ∈Ξ_1, there holds γ_(b^ξ_n)≠ 0. Indeed, there holds 𝔼[γ_s,(b^ξ_n)] = ∑_n, q ∈ℤ^2𝔼[|b_n^ξ|^2|b_q^ξ|^2] β_n,q. Since β_n,q=0 when n=q, it follows that 𝔼[|b_n^ξ|^2|b_q^ξ|^2] = |a_n|^2|a_q|^2𝔼[1+ε^ξ_n/⟨ n⟩]𝔼[1+ε^ξ_q/⟨ q⟩] = |a_n|^2|a_q|^2 so that 𝔼[γ_s,(b^ξ_n)] = γ_s,(a_n)≠ 0, by assumption, therefore γ_s,(b^ξ_n)≠ 0 for ξ∈Ξ_1 of positive measure. Let us fix ξ_1 ∈Ξ_1, and apply <ref> to μ_ξ_1 and μ_ξ for some ξ∈Ξ (the probability space). We write that: (1+ε_n^ξ_1/2⟨ n⟩/1+ε_n^ξ/2⟨ n⟩-1)^2 = 1-ε_n^ξ_1ε_n/2⟨ n ⟩ ^2 + 𝒪(|n|^-3) = 1/2⟨ n⟩^2 - ε_n^ξ_1ε_n^ξ/2⟨ n ⟩^2 + 𝒪(|n|^-3). The series ∑_n∈ℤ^21/⟨ n ⟩ ^2 diverges, and the series of terms 𝒪(|n|^-3) converge. Finally, the random (in ξ) series ∑_n∈ℤ^2ε_n^ξ_1ε_n^ξ/⟨ n⟩^2 is convergent for almost every ξ∈Ξ (let Ξ_* be this set of ξ) by application of Kolmogorov's three series Theorem (for instance we refer to <cit.>*Section 1.8). Therefore, we obtain that the series of general term (<ref>) is divergent for all ξ∈Ξ_*, which is of probability one. 
This yields uncountable many measures μ_ξ which are mutually absolutely singular with respect to μ_ξ_1. Note that this does not yield a set of measures which are pairwise mutually singular. The end of the proof proceeds by contradiction: assume that there are only a countable number of measures {μ_k}_k⩾ 1 which are pairwise mutually singulars and such that for each μ_k, γ≠ 0. Let Ξ_(k) be the set of ξ such that μ_ξ⊥μ_k. We have shown that ℙ(ξ∈Ξ_1 ∩Ξ_(k)) = ℙ(ξ∈Ξ_1)>0. Therefore, ℙ(ξ∈Ξ_1 ∩⋂_k⩾ 1Ξ_(k)) = ℙ(ξ∈Ξ_1) >0. Observe now that for all ξ∈Ξ_1 ∩⋂_k⩾ 1Ξ_(k) there holds μ_ξ⊥μ_k for all k⩾ 1 and all γ_(b_n^ξ)≠ 0, therefore a contradiction. §.§ Proof of <ref> (i) Let us assume that μ has a Fourier support S={n∈ℤ^2, a_n ≠ 0} which is finite. Let us follow the terminology in <cit.> and say that a pair (n,q) is degenerate if n and q are co-linear (that is the line passing through n and q is passing through the origin) or belong to the same circle centered at 0. In other words, (n,q) is degenerate if and only if (n· q^⊥)(1/|n|^2-1/|q|^2)=0. If there exists s∈ (0,σ] such that γ_s≠ 0, there is nothing to prove. Otherwise, let us assume that γ_s = 0 for all s ∈ (s_0-ε, s_0+ε) for some s_0∈ (0,σ). Differentiating the equality γ_s = 0 with respect to s and summing linear combinations of iterated derivatives of this equality, we find that for any polynomial P ∈ℝ[X] there holds 0=∑_|n|,|q| |a_n|^2|a_q|^2(n· q^⊥)^2(1/|n|^2-1/|q|^2) (P(2 log (⟨ n+q ⟩))⟨ n+q ⟩^2s(1/|n|^2-1/|q|^2) + P(2 log (⟨ n⟩))⟨ n⟩^2s(1/|q|^2-1/|n+q|^2) + P(2 log (⟨ q⟩))⟨ q ⟩^2s(1/|n+q|^2-1/|n|^2)) Let M=max{|n|, a_n≠ 0} and let C_M{n ∈ℤ^2, |n|=M and a_n≠ 0}. Let P be a polynomial such that P(2log(⟨ k⟩))=0 for any k⩽ M. and P(2log(⟨ k⟩))=1 if M<k⩽ 2M. Using this polynomial in (<ref>) yields ∑_n,q: |n+q|>M⟨ n+q⟩^2s|a_n|^2|a_q|^2(n· q^⊥)^2(1/|n|^2-1/|q|^2)^2 = 0, from which we infer that if |n+q|>M and a_n, a_q ≠ 0, then (n,q) is degenerate. We distinguish between the case |C_M|>2 and the case |C_M| ⩽ 2. Assume that |C_M|>2. We claim that all the modes n such that a_n≠ 0 are included in the circle {|n|=M}. In order to obtain this result, let us observe that since |C_M|>2 and that C_M is symmetric (by assumption on the a_n) it means that there exists at least p_1,p_2∈ C_M such that p_2 does not belong to the line passing through 0 and p_1. Let q ≠ p_1,p_2 such that a_q ≠ 0. Since a_-q=a_q ≠ 0, and that either |p_i+q|>M or |p_i-q|>M, there is no loss in generality in assuming |p_i+q|>M so that for i=1,2, (p_i,q) is degenerate in view of (<ref>). Since this holds for p_1 and p_2 not belonging to the same line passing through 0 this implies that q belongs to the circle of radius M centered at 0. Let us assume |C_M|⩽ 2, which actually means |C_M|=2 and C_M={p,-p}. Let us prove that any q such that a_q ≠ 0 belongs to the line passing through 0 and p. Let q such that |q|<M and a_q≠ 0. Without loss of generality we can assume |p+q|>M (otherwise there holds |-p+q|>M and replace p by -p). Using (<ref>) it follows that (p,q) is degenerate, and since |q|<M=|p| it follows that q belongs to the line passing through 0 and p as claimed. Combining this result with <ref>, it holds that for any sequence (a_n)_n∈ℤ^2 with compact support which is not degenerate, there exists an open neighborhood of μ_(a_n) in ℳ^σ such that any measure in this neighborhood is non-invariant under the flow of (<ref>) (nor quasi-invariant under <ref>). 
§.§ Proof of <ref> (ii) In this section, we prove that the Gaussian measure on H^4(𝕋^2) induced by ∑_n∈ℤ^2 ∖{0}g_n^ω/⟨ n⟩^5log(3+⟨ n ⟩^2)e^in· x , satisfies γ_1/2 = γ > 0 (in this section we are choosing s=1/2 and henceforth drop the subscript from γ). Such an example is the typical example one might consider to produce a measure whose support is in H^4 but not in ⋃_ε >0H^4 +ε. Our proof that γ > 0 is computer assisted using interval arithmetic (see <cit.> for an introduction to mathematically rigorous computation using interval arithmetic, which fully accounts for CPU rounding errors). We first compute for some N>0 the partial sum: γ_N ∑_|n|,|q|>Nα_n,q∑_|n|,|q|< N |a_n|^2|a_q|^2(q· n^⊥)^2(1/|n|^2-1/|q|^2) β_q,n, where β_q,n=β_q,n,s is given by (<ref>). Then we separately bound the error |γ - γ_N| analytically. More precisely, the numerical approximation of γ_N for N = 30 places it in an interval sufficiently far from zero: {[ 1/2γ _N ∈ [0.00011184535610465990373, 0.00011184535613147070557]; 1/2|γ - γ _N| ∈ [0,0.00010534979423897216787]. ]. The interval for γ_N is obtained using interval arithmetic implemented in Python with the package[<https://mpmath.org>]; See <ref> for the code used to carry out the computation. Computations were carried out with and the CPU is Next, let us consider the analytic estimate on γ - γ_N. First, note that as the summation in γ is symmetric in n and q, there holds γ_N=∑_|n|,|q|< Nα_n,q=2∑_|n|, |q|< Nα_n,q1_|n|<|q| , so that |γ - γ_N| ⩽ 2∑_|q|⩾ 1∑_|n|⩾ Nα_n,q1_|n|<|q| + 2∑_|n|⩾ 1∑_|q|⩾ Nα_n,q1_|n|<|q| . Using |n|<|q| we have ⟨ n⟩^2s⩽ 2^s |q|^2s and ⟨ q⟩ ^2s⩽ 2^s |q|^2s as well as ⟨ n+q⟩^2s⩽ 5^s |q|^2s. Therefore a crude bound on β_n,q reads α_n,q1_|n|<|q|⩽ |q|^2|n|^21/|n|^2(2^s+2+5^s)|q|^2s1/|n|^10|q|^10 = 2^3/2+3^1/2/|n|^10|q|^7, where we have used s=1/2>0. We have also used 1/log(3+⟨ n ⟩ ^2)⩽ 1. In the following we use the following bounds: * 2^5/2+5^1/2<8, which is obtained by direct computations (by hand). * For any τ⩾ 4 we bound ∑_n∈ℤ^2 ∖{0}1/|n|^τ⩽∑_n∈ℤ^2 ∖{0}1/|n|^4 = ∑_k⩾ 1 |{n∈ℤ^2, |n|=k}| 1/k^4. We use the very simple fact that {n∈ℤ^2, |n|=k}⊂{n∈ℤ^2:max_|n_1|,|n_2|⩽ k}∖{0}, so that |{n∈ℤ^2, |n|=k}| ⩽ (2k+1)^2 -1=4k^2+4k ⩽ 8k^2 and therefore ∑_n∈ℤ^2 ∖{0}1/|n|^τ⩽∑_k⩾ 18k^2/k^4⩽8π^2/6. We can also write: ∑_|q|⩾ N1/|q|^K = ∑_k⩾ 0∑_ 2^k(N+1)⩽ |q| < 2^k+1N1/|q|^K⩽ 12 N^-K+2∑_k⩾ 0 2^-(K-2)k therefore since ∑_k⩾ 0 2^-(K-2)k⩽ 2 we have the crude bounds ∑_|n|<|q||q|⩾ N, |n|⩾ 1α_n,q⩽ 8 ×8π^2/6×24/N^5, and ∑_|n|<|q||q|⩾ 1, |n|⩾ Nα_n,q⩽ 8 ×24^2/N^13 so that finally 1/2|γ -γ_N| ⩽1536/N^5(π^2/6 + 3/N^8) < 1536/N^5(10/6 + 3/N^8) =: ε, where we have used π^2 <10. An interval for the value of ε is also computed in <ref>. § PYTHON CODE [language=Python,breaklines]nume.py alpha
http://arxiv.org/abs/2307.07362v1
20230714140854
A scoping review on multimodal deep learning in biomedical images and texts
[ "Zhaoyi Sun", "Mingquan Lin", "Qingqing Zhu", "Qianqian Xie", "Fei Wang", "Zhiyong Lu", "Yifan Peng" ]
cs.CV
[ "cs.CV", "cs.CL" ]
Zhaoyi Sun^1 ([email protected]), Mingquan Lin^1 ([email protected]), Qingqing Zhu^2 ([email protected]), Qianqian Xie^1 ([email protected]), Fei Wang^1 ([email protected]), Zhiyong Lu^2 ([email protected]), Yifan Peng^1,* ([email protected]; *corresponding author). ^1Population Health Sciences, Weill Cornell Medicine, New York, NY 10016, USA. ^2National Center for Biotechnology Information (NCBI), National Library of Medicine (NLM), National Institutes of Health (NIH), Bethesda, MD 20894, USA. Objective: Computer-assisted diagnostic and prognostic systems of the future should be capable of simultaneously processing multimodal data. Multimodal deep learning (MDL), which involves the integration of multiple sources of data, such as images and text, has the potential to revolutionize the analysis and interpretation of biomedical data, yet it has only recently caught researchers' attention. To this end, there is a critical need to conduct a systematic review on this topic, identify the limitations of current work, and explore future directions. Methods: In this scoping review, we aim to provide a comprehensive overview of the current state of the field and identify key concepts, types of studies, and research gaps, with a focus on joint learning of biomedical images and texts, mainly because these two were the most commonly available data types in MDL research. Results: This study reviewed the current uses of multimodal deep learning on five tasks: (1) Report generation, (2) Visual question answering, (3) Cross-modal retrieval, (4) Computer-aided diagnosis, and (5) Semantic segmentation. Conclusion: Our results highlight the diverse applications and potential of MDL and suggest directions for future research in the field. We hope our review will facilitate collaboration between the natural language processing (NLP) and medical imaging communities and support the next generation of decision-making and computer-assisted diagnostic system development. Keywords: Multimodal learning; Medical images; Clinical notes; Scoping review § INTRODUCTION Multimodal deep learning (MDL), which involves the integration of multiple modalities, such as medical images, unstructured text, and structured Electronic Health Records (EHRs), has gained significant attention in biomedical research <cit.>. This approach has been proven to improve the accuracy and efficiency of various tasks in clinical decision-making with imaging and structured EHR (e.g., -omics data, lab test data, demographic data) <cit.>. The heterogeneous data available to clinicians allow multiple viewpoints to be considered when making decisions and constructing computer-aided diagnosis and prognosis systems. However, the application of MDL to medical imaging data and unstructured free-text data (i.e., clinical reports) is still in its infancy, and related research has only recently begun to emerge. For example, in the field of natural language processing (NLP), pre-trained models, such as Bidirectional Encoder Representations from Transformers (BERT) <cit.> and Generative Pre-trained Transformer 3 (GPT-3) <cit.>, have achieved widely recognized results in various downstream tasks. Furthermore, multimodal language models, including Contrastive Language Image Pretraining (CLIP) <cit.> and the more recent KOSMOS-1 <cit.>, have demonstrated remarkable performance in addressing general-domain tasks.
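For readers less familiar with these models, the sketch below illustrates the symmetric image-text contrastive (InfoNCE) objective that underlies CLIP-style pre-training; the encoder modules, embedding dimensionality, and temperature value are illustrative placeholders rather than the configuration of any specific model discussed in this review.

```python
# Minimal sketch of a CLIP-style symmetric image-text contrastive loss.
# The encoders are stand-ins; any image/text backbone producing a fixed-size
# embedding could be substituted.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImageTextContrastive(nn.Module):
    def __init__(self, image_encoder, text_encoder, temperature=0.07):
        super().__init__()
        self.image_encoder = image_encoder   # e.g., a CNN or ViT returning (B, D) features
        self.text_encoder = text_encoder     # e.g., a BERT-like model returning (B, D) features
        self.logit_scale = nn.Parameter(torch.tensor(1.0 / temperature).log())

    def forward(self, images, texts):
        # Encode and L2-normalize both modalities into a shared space.
        img = F.normalize(self.image_encoder(images), dim=-1)   # (B, D)
        txt = F.normalize(self.text_encoder(texts), dim=-1)     # (B, D)

        # Pairwise cosine similarities scaled by a learnable temperature.
        logits = self.logit_scale.exp() * img @ txt.t()          # (B, B)

        # Matched image-report pairs lie on the diagonal.
        targets = torch.arange(images.size(0), device=logits.device)
        loss_i2t = F.cross_entropy(logits, targets)
        loss_t2i = F.cross_entropy(logits.t(), targets)
        return 0.5 * (loss_i2t + loss_t2i)
```

At inference time, the same similarity matrix supports zero-shot classification (comparing an image against a set of text prompts) as well as the cross-modal retrieval tasks surveyed later in this review.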
This notable progress has simultaneously facilitated the models' applicability within the medical domain. As a result, we believe it is imperative to comprehensively synthesize the past five years' research on MDL in biomedical images and texts, including an overview of research objectives and methodologies, elucidating development trends, and exploring potential broader clinical applications in the future. Our review is inspired by several related review articles. Heiliger et al. <cit.> provided a comprehensive overview of existing multimodal learning methods and related databases in radiology, proposing a modality-based taxonomy based on the structural and design principles of the model. However, it was method-oriented, which might not facilitate clinicians' comprehension of the development of MDL in the medical field from the standpoint of specific applications. Cui et al. <cit.> explored the various fusion strategies employed in disease diagnosis and prognosis. However, the multimodal fusion discussed in these articles primarily included structured data from EHRs, with limited attention to unstructured text. Similarly, numerous systematic reviews have synthesized the employment of multimodal artificial intelligence (AI), machine learning, and the Internet of Medical Things (IoMT) within the realm of biomedicine <cit.>. Nonetheless, these investigations exhibited a notable absence of detailed discussions on implementing multimodal language models in the medical domain. Additionally, the outstanding achievements of deep learning are accompanied by increasing model complexity and a lack of interpretability of AI models that prevents their applicability to clinical scenarios <cit.>. Therefore, it becomes necessary to come up with solutions to address this challenge and move toward more transparent AI. Compared to single-model AI, MDL presents unique challenges as explanations of multimodal data are often separated. For example, there are SHAP values for the EHR and a heatmap for the brain images - a visualization of the brain areas affected. But few visualization/explanation methods integrate the data and results, especially with longitudinal data. While many review studies organize and report challenges and opportunities of explainable AI, however, they do not focus on MDL <cit.>. To our knowledge, our paper represents the first review of multimodal deep learning focusing on medical image and text data, explainability, and human evaluation. Our motivation is to foster the application of multimodal language models in the medical field in a more comprehensible manner. Our target readers include clinicians and computer scientists. Specifically, we aim to provide clinicians with insights into the current performance of various pre-training models on different clinical tasks, as well as opportunities to evaluate model interpretability and contribute to developing new public datasets. Meanwhile, we hope that computer scientists will advance the clinical translation of models by focusing on clinical tasks, recognizing the significance of external validation, and increasing model transparency in the clinical translation process. The review questions and objectives for this scoping review are as follows: The primary research question is: What is the current state of the literature on MDL in biomedical images and texts? This question will be addressed by exploring the following sub-questions: What databases were utilized in these studies? 
What were multimodal fusion techniques employed in these studies? Which image and text modalities were incorporated in these studies? What metrics were utilized to evaluate the model's performance in these studies? Did these studies employ external validation? Did these studies explicate the model's interpretability? The organization of the review is as follows: Section <ref> describes the protocol used in planning and executing this systematic review. Section <ref> discusses the research directions of five tasks: report generation, visual question answering, cross-modal retrieval, diagnostic classification, and semantic segmentation. Section <ref> summarizes the limitations and challenges of the current approaches and highlights future research directions. Lastly, Section <ref> concludes the final remarks. § METHODS Our scoping review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines <cit.>. §.§ Eligibility Criteria Our scoping review focused on research on multimodal deep learning techniques applied to medical images and unstructured text. The inclusion criteria for our review consisted of English-language articles published between 2018 and 2022, including both conference papers and journal articles. We chose this time frame to capture the most up-to-date research in this rapidly evolving field. Additionally, we refer to relevant preprint articles to ensure we can consider cutting-edge research that has yet to be published in peer-reviewed venues. §.§ Information Sources A search of multiple databases was carried out, including PubMed[<https://pubmed.ncbi.nlm.nih.gov/>], the Association for Computing Machinery (ACM) Digital Library[<https://dl.acm.org/>], the Institute of Electrical and Electronics Engineers (IEEE) Xplore Digital Library[<https://ieeexplore.ieee.org/Xplore/home.jsp>], Google Scholar[<https://scholar.google.com/>], and Semantic Scholar[<https://www.semanticscholar.org/>]. The most recent search was executed on January 8, 2023. §.§ Search Strategy All the studies collected in this research were confined to the medical field. Initially, our search comprised three keyword groups: image modality (e.g., medical images and radiology images), text modality (e.g., text and report), and multimodal fusion learning (e.g., multimodal learning, joint fusion, and contrastive learning). We combined these keywords to carry out the first round of collection across five databases. To ensure the comprehensiveness of the articles collected, we conducted a second round of collection on Google Scholar, by adding a fourth application-oriented keyword group (i.e., report generation, visual question answering, and cross-modal retrieval). §.§ Study Selection Title and abstract screening were conducted independently by two reviewers (ZS and ML). In cases of disagreement, studies were subjected to full-text review, and a consensus was reached through discussions. Subsequently, each article was reviewed and labeled according to the tasks. These tasks encompassed report generation, visual question answering, cross-modal retrieval, diagnostic classification, semantic segmentation, and other related tasks, with the possibility for a single article to correspond to multiple tasks. During the screening and the full-text review stages, we excluded review articles, non-medical articles, poor-quality articles, and unimodal studies (i.e., studies focusing solely on images or text). 
Articles containing modalities without images or text (e.g., omics data, lab test data, and demographic data) were also excluded. §.§ Data Extraction and Synthesis In our study, we undertook a systematic analysis of each downstream task. Firstly, we explored commonly used datasets for the task at hand, as well as their primary contents. Secondly, we expounded on the commonly employed multimodal frameworks and development trends of the methodology (e.g., fusion embedding, transformer-based attention models, and contrastive language-image pre-training). Subsequently, we summarized the specific image and text modalities covered in the articles, such as chest X-rays (CXR) and radiology reports. Lastly, we sorted out commonly used evaluation metrics for each downstream task, such as the area under the receiver operating characteristic curve (AUC), F1-score, and bilingual evaluation understudy (BLEU) <cit.>. Of particular note, we considered whether clinical experts were invited for external validation and explanation of the model's interpretability. We believe this has significant implications for enhancing the accuracy of computer-aided diagnosis and prognosis in the future. § RESULTS §.§ Included Studies and Datasets A total of 361 articles were retrieved from five databases, from which 77 articles were ultimately included in our review. Figure <ref> shows the flowchart of our article screening process. During the screening process, we excluded 137 articles based on their titles and abstracts, according to our predetermined exclusion criteria (Section <ref>). Subsequently, a full-text review was conducted on the remaining articles, which resulted in an additional 13 articles being excluded. Specifically, these articles were discarded based on evaluations of their full texts, including 3 non-medical articles, 6 articles that lacked a text modality, 1 article that lacked an imaging modality, 2 articles on unimodal learning, and 1 poor-quality article. Table <ref> encapsulates the medical multimodal datasets employed in the articles collected in this scoping review, encompassing the dataset name, image type, text type, and the corresponding website for each dataset. §.§ Report Generation Report generation aims at generating descriptives from EHR and medical images automatically. It could ease the work burden upon clinicians and improve the quality of the reports themselves. Since the training process of report generation typically requires both medical images and text reports written by clinicians, it can be naturally considered a multimodal learning process. Table <ref> provides an overview of the application of multimodal deep learning on report generation. Common image data used in the medical field include X-rays, computerized tomography (CT), magnetic resonance imaging (MRI), and pathological images. A common dataset for this task is the IU X-Ray <cit.> dataset, which comprises 7,470 frontal and lateral chest radiographs and 3,955 corresponding reports. Another widely-used dataset is the MIMIC-CXR <cit.> dataset, including 377,110 images and 227,827 reports. Furthermore, there exist datasets specifically designed for image classification and assistance in report generation, such as the CheXpert dataset <cit.>, which comprises 224,316 images and 14 labels marked as present, absent, or uncertain. Most studies employ convolutional neural networks (CNNs) to process medical images. Regarding text processing, Long Short-Term Memory (LSTM) was previously a popular method. For example, Yuan et al. 
<cit.> developed a CNN encoder and hierarchical LSTM decoder that utilized a visual attention mechanism based on multi-view in radiology. In the recent two years, the Transformer architecture has seen increasing use in report generation. Chen et al. <cit.> proposed the VMEKNet model, which combines the Transformer architecture with visual memory and external knowledge, resulting in improved performance in both qualitative and quantitative experiments and clinical diagnosis. Another notable contribution is the AlignTransformer proposed by You et al. <cit.>, which effectively addresses data bias and is particularly well-suited for long-sequence report generation. The use of self-supervised learning techniques, such as CLIP, has also garnered attention for its ability to retrieve reports for report generation purposes. The CXR-RePaiR model proposed by Endo et al. <cit.> employed the CLIP approach with retrieval-based mechanisms and achieved outstanding metrics in language generation tasks. Similarly, the RepsNet model proposed by Tanwani et al. <cit.> incorporates the principle of self-supervised contrastive alignment. Recent research has focused on improving the factual correctness and completeness of generated reports through reward mechanisms. Miura et al. <cit.> developed a model that applies a reward mechanism to reinforcement learning, resulting in significant improvements in clinical performance. This approach was further refined by Delbrouck et al. <cit.> and improved by 14.2% in factual correctness and 25.3% in completeness. Evaluation metrics for report generation can be classified into three categories: text quality, medical correctness, and explainability <cit.>. These metrics are typically intended to be generated automatically, rather than manually, to facilitate automation of the report generation process. The text quality is commonly evaluated using metrics such as BLEU <cit.>, METEOR <cit.>, and ROUGE-L <cit.>. Medical correctness is evaluated using metrics such as AUC, precision, recall, and F1 <cit.>. Yu et al. introduced a composite metric, RadCliQ, aimed at quantifying the similarity between model-generated reports and those produced by radiologists, and the percentage of decreased errors <cit.>. Additionally, the explainability-related metrics factENT and factENTNLI, proposed by Miura et al. <cit.>, have been shown to effectively evaluate the factual correctness and completeness of the model. In the reviewed literature, 10 articles sought external validation through the involvement of radiologists or other clinical experts. Furthermore, 14 articles provided validation of the interpretability of the models through various methods. §.§ Visual Question Answering In the clinical domain, Visual Question Answering (VQA) represents a computer-assisted diagnostic technique that offers clinical decision-making support for image analysis <cit.>. Table <ref> is an overview of the application of MDL on VQA. Commencing in 2018, ImageCLEF has been conducting an annual challenge for medical VQA, evaluating and ranking the performance of participating models. The mainstream VQA datasets in the medical domain include VQA-MED-2018 <cit.>, VQA-MED-2019 <cit.>, and VQA-MED-2020 <cit.>, which were proposed by the challenge tasks. These datasets encompass radiographic images along with corresponding question-answer pairs. For instance, VQA-MED-2020 comprises 4,500 radiographic images and 4,500 question-answer pairs <cit.>. 
Additionally, VQA-RAD consists of 315 radiological images and 3,500 question-answer pairs <cit.>. The PathVQA dataset contains 1,670 pathological images and 32,799 question-answer pairs <cit.>. Liu et al. <cit.> introduced the SLAKE, a bilingual dataset that encompasses semantic labels and structural medical knowledge, incorporating more modalities and body parts. The SLAKE includes 642 images, 14,028 question-answer pairs, and 5,232 medical knowledge triplets. A typical VQA model consists of four essential components: an image feature extractor, a question feature extractor, a multimodal fusion component, and a classifier or generator. For the image feature extractor, CNN-based pre-trained models such as ResNet <cit.> or VGGNet <cit.> are often employed to extract high-dimensional features from medical images. Liu et al. <cit.> introduced a bi-branch model that leverages both ResNet152 and VGG16 to extract sequence/spatial features and retrieve the similarity of image features, thereby enhancing the semantic understanding of images. For question feature extraction, recurrent neural networks (RNNs) such as Long-Short-Term Memory (LSTM) <cit.> and Gated Recurrent Unit (GRU) <cit.> are commonly utilized. Additionally, BERT-based models <cit.> have seen increasing use for extracting textual features. With regards to multimodal fusion, models from general domain VQA such as Stacked Attention Networks (SAN) <cit.>, Bilinear Attention Networks (BAN) <cit.>, Multimodal factorized bilinear (MFB) <cit.>, and Multimodal Factorized High-order (MFH) <cit.> are often adopted. Sharma et al. <cit.> utilized MFB as a feature fusion technique to design an attention-based model that maximizes learning while minimizing complexity. Liu et al. <cit.> proposed a pre-training model called the Contrastive Pre-training and Representation process (CPRD), which effectively resolves the issue of limited MED-VQA data and demonstrates excellent performance. The issue of data scarcity and lack of multilevel reasoning ability in Med-VQA has prompted the development of the Mixture of Enhanced Visual Features (MEVF) <cit.>. MEVF is a meta-learning-based approach that utilizes Model-Agnostic Meta-Learning (MAML) <cit.> and Convolutional Denoising Auto-Encoder (CDAE) <cit.> to effectively address the problem of insufficient data during image feature extraction. The proposed method has gained widespread use in subsequent studies and has been further improved by the introduction of the Question Conditioned Reasoning (QCR) and Type Conditioned Reasoning (TCR) modules by Zhan et al. <cit.>, which enhance the model's reasoning ability. Do et al. <cit.> have proposed a Multiple Meta-model Quantifying (MMQ) model that achieves remarkable accuracy with the addition of metadata. The latest trends indicate that BERT and attention-based models are currently the most effective and are expected to be the future of VQA models. The RespsNet-10 proposed by Tanwani et al. <cit.> achieved an accuracy of 0.804 on the ImageCLEF 2018 and ImageCLEF 2019 datasets. Meanwhile, the study by Zhan et al. investigated the contrastive representation learning model UnICLAM with adversarial masking and obtained an accuracy of 0.831 on the SLAKE dataset <cit.>. Accuracy is the most widely used evaluation metric for VQA, typically associated with classification models and closed-ended questions. Meanwhile, some generation models designed to tackle open-ended problems may also employ alternative metrics, such as BLEU or WBSS <cit.>, for evaluation purposes. 
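To make the four-component pipeline described above concrete, the following is a minimal sketch of a medical VQA classifier (visual encoder, question encoder, multimodal fusion, and answer classifier); the specific backbones, hidden sizes, and the simple Hadamard-product fusion are illustrative assumptions and are deliberately simpler than the SAN-, BAN-, or MFB-style fusion modules used in the surveyed systems.

```python
# Minimal sketch of the canonical four-component VQA architecture:
# image features + question features -> multimodal fusion -> answer classifier.
import torch
import torch.nn as nn
import torchvision.models as models


class SimpleMedVQA(nn.Module):
    def __init__(self, vocab_size, num_answers, hidden=1024, q_dim=512):
        super().__init__()
        # 1) Image feature extractor: a ResNet backbone without its classifier head
        #    (randomly initialized here; pre-trained weights would be used in practice).
        resnet = models.resnet50(weights=None)
        self.visual = nn.Sequential(*list(resnet.children())[:-1])   # -> (B, 2048, 1, 1)
        # 2) Question feature extractor: word embedding + LSTM over token indices.
        self.embed = nn.Embedding(vocab_size, 300, padding_idx=0)
        self.lstm = nn.LSTM(300, q_dim, batch_first=True)
        # 3) Multimodal fusion: project both modalities and combine elementwise.
        self.v_proj = nn.Linear(2048, hidden)
        self.q_proj = nn.Linear(q_dim, hidden)
        # 4) Classifier over a fixed answer set (closed-ended questions).
        self.classifier = nn.Sequential(nn.ReLU(), nn.Dropout(0.5),
                                        nn.Linear(hidden, num_answers))

    def forward(self, images, question_tokens):
        v = self.visual(images).flatten(1)              # (B, 2048)
        _, (h, _) = self.lstm(self.embed(question_tokens))
        q = h[-1]                                       # (B, q_dim)
        fused = self.v_proj(v) * self.q_proj(q)         # Hadamard-product fusion
        return self.classifier(fused)                   # (B, num_answers) logits
```

For the closed-ended questions discussed above, accuracy is then simply the exact-match rate between the argmax of these logits and the reference answer.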
While 12 articles have demonstrated the interpretability of the models, there has been a lack of studies that have sought to evaluate the results of VQA models from clinicians. §.§ Cross-modal Retrieval Cross-modal retrieval encompasses two primary types of retrieval: image-to-text retrieval, which involves retrieving associated text for a given image, and text-to-image retrieval, which involves retrieving the associated image for a given text. Table <ref> summarizes an overview of the application of MDL on cross-modal retrieval. In the medical field, cross-modal retrieval tasks frequently involve radiological images and reports, such as those found in MIMIC-CXR <cit.> and CheXpert <cit.> datasets. The ROCO dataset, comprising over 81,000 radiology image-text pairs, is also widely employed in cross-modal retrieval tasks <cit.>. In addition, a small number of pathological captioning datasets exist. One is the ARCH dataset proposed by Gamper et al <cit.>. It comprises 7,579 image and description pairs extracted from medical articles on PubMed and pathology textbooks. Most cross-modal retrieval tasks rely on matching image and text features through contrastive learning. This process involves both global and local feature matching, together with attention mechanisms. For example, Huang et al. <cit.> introduced GLoRIA which enables cross-modal retrieval through the averaging of global and local similarity metrics. In a separate study, Chen et al. <cit.> developed self-supervised multimodal masked autoencoders, achieving excellent performances for image-to-text retrieval and text-to-image retrieval on the ROCO dataset. Maleki et al. <cit.> proposed LILE, a dual attention network that uses Transformers and an additional self-attention loss term to enhance internal features for text retrieval and image retrieval on the ARCH dataset. Widely used measurements for assessing the performance of cross-modal retrieval are precision@K <cit.> and Recall@K <cit.>, which quantify the accuracy of the first K retrieval results. Another commonly used metric is the mean reciprocal rank (MRR) <cit.>. Out of the 12 studies in our collection, only 2 works incorporated external validation, while 6 studies assessed the interpretability of their model. §.§ Computer-aided diagnosis MDL-based computer-aided diagnosis (CAD) is the use of generated output from multimodal data as an assisting tool for a clinician to make a diagnosis. Incorporating text modality in this context has been shown to provide supplementary features that can enhance performance in image classification. Currently, research in CAD mainly focuses on utilizing chest X-ray images in conjunction with corresponding radiological reports. It is expected that future pathological datasets will expand this field of research. Table <ref> summarizes the application of multimodal deep learning on CAD. There exist several commonly employed multimodal fusion strategies, including image-text embedding and contrastive learning. Image-text embedding refers to merging image and text features, which are then trained using supervised learning. For example, Wang et al. <cit.> introduced a Text-Image Embedding network (TieNet), which utilized a multi-task CNN-RNN framework and achieved an AUC of over 0.9 in thorax disease classification. In contrast, contrastive learning often involves image-text alignment and self-supervised learning. Tiu et al. 
<cit.> proposed a self-supervised learning framework, CheXzero, which achieved expert-level performance in zero-shot thoracic disease classification without requiring manual labeling. Monajatipoor et al. <cit.> developed BERTHop, which leverages PixelHop++ <cit.> and VisualBERT <cit.> to enable the learning of associations between clinical images and notes. This model achieved an AUC of 0.98 on the IU X-Ray dataset <cit.>. Studies on COVID-19 diagnosis have recently been another popular trend. Zheng et al. <cit.> designed a multimodal knowledge graph attention embedding framework for diagnosing COVID-19, based on clinical images and doctor-patient dialogues. The proposed model performed better than single modality approaches, with an AUC of 0.99. In addition, the MedCLIP proposed by Wang et al. <cit.> achieved better performance than supervised models for the zero-shot classification task of COVID-related datasets. The metrics employed to assess the performance of diagnostic classification primarily comprise the AUC and the F1-score. Additionally, the Matthews correlation coefficient (MCC) is utilized to assess the dissimilarity between model and expert classifications <cit.>. Out of the 24 studies gathered, 4 incorporated external validation, while 11 studies focused on elucidating the interpretability of the model. §.§ Semantic Segmentation This group of studies investigates the effectiveness of image-text contrastive learning, which involves utilizing semantic segmentation to extract visual features that can be juxtaposed with textual features to facilitate the comprehension of the relationship between images and their corresponding textual descriptions (Table <ref>). Additionally, local alignment assessment in contrastive learning is evaluated using semantic segmentation techniques. Typical datasets employed for semantic segmentation include SIIM <cit.> and RNSA <cit.>. The SIIM dataset consists of 12,047 chest radiographs, along with corresponding manual annotations. Similarly, the RNSA dataset includes 29,700 frontal view radiographs for evaluating evidence of pneumonia. Boecking et al. have recently proposed the MS-CXR <cit.> dataset, which comprises 1153 image-sentence pairs with annotated bounding boxes and corresponding phrases validated by radiologists. This dataset covers eight distinct cardiopulmonary radiology findings. Image-text alignment and local representation learning are commonly used in MDL for semantic segmentation. These techniques can help improve the model's accuracy by enabling it to better understand the spatial relationships between different regions in the image and the relationship between visual and textual information <cit.>. Li et al. <cit.> proposed LViT, which used medical text annotations to improve the quality of image data and guide the generation of pseudo labels, leading to better segmentation performance. Müller et al. <cit.> devised a novel pre-training approach, LoVT, which aimed to specifically address localized medical imaging tasks. Their method exhibited superior performance on 10 out of 18 localized tasks in comparison to commonly employed pre-training techniques. In all the research studies that we have gathered, Dice <cit.> has been utilized as a metric for measuring the similarity between predicted segmentation and ground truth. Additionally, mean intersection over union (mIoU) and contrast-to-noise ratio (CNR) have also been employed. 
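As a reference for the segmentation metrics just mentioned, the sketch below shows one common way to compute the Dice coefficient and mean IoU from binary masks; the smoothing constant and the toy masks are illustrative choices rather than values prescribed by the surveyed papers.

```python
# Minimal sketch of Dice and mean IoU for binary segmentation masks.
import numpy as np


def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)


def iou(pred, target, eps=1e-7):
    """Intersection over union between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)


def mean_iou(preds, targets):
    """mIoU averaged over a list of (prediction, ground-truth) mask pairs."""
    return float(np.mean([iou(p, t) for p, t in zip(preds, targets)]))


# Toy example with a 4x4 mask pair.
pred = np.array([[0, 1, 1, 0]] * 4)
gt = np.array([[0, 1, 0, 0]] * 4)
print(dice(pred, gt), iou(pred, gt))   # ~0.667 and 0.5
```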
Out of the 5 studies in our collection, no work incorporated external validation, while 2 studies assessed the interpretability of their model. §.§ Other Related Tasks During our article collection, we identified several works that, while not fitting into the aforementioned categories, are of considerable importance. These works include studies centered on medical image generation, object detection, multimodal predictive modeling, MDL-related databases, and libraries of pre-training models. Chambon et al. <cit.> fine-tuned the Stable Diffusion model to generate CXR images with realistic-looking abnormalities by employing domain-specific text prompts. In a separate publication, they introduced RoentGen, a model adept at synthesizing CXR images predicated upon text prompts present in radiological reports, resulting in a 25% enhancement in the representation capabilities of pneumothorax <cit.>. Qin et al. <cit.> scrutinized the implementation of pre-trained vision language models (VLM) for medical object detection and devised an approach to incorporate expert medical knowledge and image-specific information within the prompt, thereby augmenting the performance of zero-shot learning. Lin et al. <cit.> developed a survival prediction model using radiation reports and images to forecast ICU mortality. This model outperformed traditional single-modal machine learning methods with a higher C-index. Bai et al. <cit.> designed an interactive VQA system that empowers patients to upload their own multimodal data, choose the appropriate model in the library, and communicate with an AI robot for model evaluation. Delbrouck et al. <cit.> presented ViLMedic, a Vision-and-Language medical library, consisting of over 20 pre-trained models for various downstream tasks. This resource facilitates the real-world clinical translation of these models. Kovaleva et al. <cit.> released the first publicly available visual dialog datasets for radiology, highlighting the belief that integrating patients' medical history information would enhance the performance of traditional VQA models. Li et al. <cit.> summarized the performance of four pre-trained models for multimodal vision-and-language feature learning and visualized their attention mechanism. Evidenced by these studies, we believe multimodal vision-and-language learning will continue to expand its range of applications in the future, with more related databases and model libraries being established to promote its clinical use. § DISCUSSION Our scoping review identifies research related to MDL in biomedical images and texts on different downstream tasks, with specific attention to the datasets employed, model methodology, evaluation metrics, external validation, and interpretability. Overall, the evidence suggests that deep learning models on multimodal medical image and text data can potentially improve diagnostic accuracy and clinical decision-making, showing promising results in several medical fields, including oncology, radiology, and pathology. However, our review also reveals challenges related to data imbalance, clinical knowledge, model fairness, and human evaluation. These findings are highly relevant to clinicians, researchers, and computer scientists interested in leveraging recent advances in artificial intelligence and deep learning to improve patient care and health outcomes. 
In the realm of MDL, acquiring high-quality annotated data is crucial for the development and evaluation of models, yet several challenges persist in obtaining datasets such as MIMIC-CXR. First, the annotation of medical data is a laborious and time-consuming task that requires domain expertise and specialized tools to ensure accuracy and consistency, particularly when annotating both image and text data. This can result in insufficient annotated samples for certain modalities, leading to imbalanced datasets that adversely affect model performance. To address data scarcity and reduce the burden of expert annotation, multimodal meta-learning and few-shot learning are poised to remain popular research topics in the medical field <cit.>. Second, the current trend in medical datasets predominantly features radiology images and their accompanying reports, with a limited representation of other imaging modalities such as pathological images, ultrasound, endoscopy, and text modalities such as clinical notes. This limits the broader clinical application of multimodal models. Future work should construct more multimodal datasets for different medical scenarios and integrate these heterogeneous data into a system to realize multimodal cross-scenario learning. Third, data privacy concerns are pronounced in the medical domain, necessitating the protection of sensitive patient information. However, this often leads to a lack of publicly available datasets, exacerbating the issue of insufficient and unbalanced data. Advocating for open-access initiatives can help address this challenge by enabling researchers to access larger and more diverse datasets for model training and evaluation. In addition, implementing advanced privacy-preserving techniques, such as differential privacy and federated learning, can further alleviate privacy concerns while allowing researchers to utilize medical data <cit.>. Incorporating clinical knowledge into medical NLP has been identified as a major research direction that can enhance the model's performance and broaden its application in clinical practice <cit.>. However, the current research is limited in terms of the integration of clinical knowledge into MDL models. Incorporating clinical knowledge into the encoding stage can help learn useful visual features, leading to more accurate predictions. Specifically, clinical knowledge can provide insights into specific image features that are more clinically relevant, such as lesions or abnormalities, and guide the model to focus on these features during the encoding process. Chen et al. <cit.> integrated external knowledge into TF-IDF features and achieved improved performance in both report generation and diagnostic tasks. Furthermore, clinical knowledge can be particularly beneficial in scenarios with limited or new data, such as COVID-19-related datasets, where overfitting is more likely to occur. Liu et al. <cit.> incorporated external knowledge into the COVID-19 CT report generation task, generating fewer irrelevant words and achieving higher BLEU scores. In addition, Chen et al. <cit.> demonstrated that aligning, reasoning, and learning using clinical knowledge could achieve higher accuracy in VQA than each approach individually. Future research could explore more sophisticated ways to integrate clinical knowledge into models, such as knowledge graphs and ontologies.
Moreover, researchers could examine how clinical knowledge from diverse sources, such as electronic health records, medical literature, or expert opinions, can be integrated to enhance the models' performance and adaptability. It is also important to assess the clinical relevance and impact of models in real-world clinical settings by conducting clinical trials and involving clinicians and patients in the development and validation process. Human evaluation is essential for assessing the practicality of the model in real-world clinical scenarios and providing insights into the model's decision-making process. However, human evaluation was not widely employed in the studies we collected. Out of the five downstream tasks covered in this review, report generation incorporated more external validation, as observed in 10 of 25 articles. Notably, no studies were found to introduce external validation for VQA or semantic segmentation tasks. The observed phenomenon could be attributed to the fact that human evaluation is time-consuming and costly <cit.>. Additionally, the absence of standardized protocols for human evaluation of MDL models in medical settings poses a significant challenge to the comparison and generalization of findings across studies <cit.>. Furthermore, the interdisciplinary collaboration between clinicians and computer scientists can be a formidable obstacle, owing to differences in their respective backgrounds and training that can hinder effective communication and seamless teamwork. Besides, clinicians often have limited availability to engage in such collaborative efforts, while computer scientists may face stringent deadlines for developing and testing models. In the future, there is a need to develop and adopt standardized protocols for the human evaluation of MDL models in medical applications. Moreover, interdisciplinary workshops can help bridge the gap between clinicians and computer scientists and facilitate effective collaboration. Finally, effective automated metrics could provide a more objective and efficient approach to evaluating MDL models. The fairness and explainability of MDL models also exhibit deficiencies. The absence of interpretability of the models engenders challenges in fostering trust in their predictions, thereby limiting their adoption in clinical practice. The lack of transparency in these“black boxes” further compounds the issue as it hinders the detection of errors and biases, thereby resulting in potential harm to patients <cit.>. Out of the 77 articles we collected, only 35 provided an exposition of the interpretability of the model, leveraging techniques such as heat maps and factual metrics. Among them, the visual interpretation of CNN models, which are based on attention mechanisms, has gained increasing traction in the medical field <cit.>. However, it is worth noting that a significant number of articles do not explicitly consider the inclusion of interpretability as an improvement, and only a few employ a formal counterfactual evaluation [49]. Future MDL research endeavors must prioritize the development of interpretable models. Standardized methods are needed for evaluating and quantifying the interpretability of these models. Additionally, it is essential to engage in a continuous dialogue between clinicians, researchers, and computer scientists to ensure that the development of MDL models aligns with the values and needs of the medical community. 
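As one concrete example of the heat-map style explanations mentioned above, the following is a minimal Grad-CAM-style sketch for a CNN image classifier; the choice of backbone and target layer is an illustrative assumption and this is not the explanation method of any particular study in this review.

```python
# Minimal Grad-CAM-style sketch: a class-activation heat map from the last
# convolutional block of a CNN classifier.
import torch
import torch.nn.functional as F
import torchvision.models as models


def grad_cam(model, target_layer, image, class_idx=None):
    """Return an (H, W) relevance map in [0, 1] for a single image tensor."""
    store = {}

    def fwd_hook(module, inputs, output):
        store["act"] = output                                   # feature maps (1, C, h, w)
        output.register_hook(lambda g: store.update(grad=g))    # their gradient

    handle = target_layer.register_forward_hook(fwd_hook)
    logits = model(image.unsqueeze(0))                          # (1, num_classes)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()
    handle.remove()

    act, grad = store["act"][0], store["grad"][0]               # (C, h, w) each
    weights = grad.mean(dim=(1, 2))                             # channel weights from pooled gradients
    cam = torch.relu((weights[:, None, None] * act).sum(0)).detach()
    cam = F.interpolate(cam[None, None], size=image.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)


# Usage with an (illustrative, randomly initialized) ResNet-50 classifier:
model = models.resnet50(weights=None).eval()
heatmap = grad_cam(model, model.layer4[-1], torch.randn(3, 224, 224))
```

In a multimodal setting, an analogous relevance map over the text tokens (for example, attention weights from the fusion module) can be reported alongside the image heat map, which is the kind of joint explanation the preceding paragraph calls for.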
While our scoping review provides a comprehensive overview of the current state of MDL in biomedical images and texts, several limitations must be considered. First, our search strategy may have missed some relevant studies, as we focused on a limited set of databases and search terms. Second, we characterized the current state of the literature through downstream tasks and applications, but did not provide a systematic summary of the methodology, particularly regarding multimodal fusion strategies. Third, the heterogeneity of the included studies makes it difficult to compare and synthesize the evidence across different domains and contexts. Finally, our scoping review did not include a formal quality assessment of the studies, which may have affected the reliability and validity of the evidence. However, we believe the breadth and depth of the evidence we gathered will provide a robust foundation for future research and improvement. § CONCLUSION In this scoping review, we systematically examined the current state of research on MDL in biomedical images and texts based on various downstream tasks, including report generation, visual question answering, cross-modal retrieval, computer-aided diagnosis, and semantic segmentation. Our findings suggest that MDL can potentially improve diagnostic accuracy and clinical decision-making, but it also poses challenges related to data imbalance, clinical knowledge, human evaluation, and model fairness. We also discussed several areas for further investigation and improvement, such as developing more robust evaluation standards, collaborating with interdisciplinary institutions or individuals, and exploring new data sources and modalities. Our review has important implications for clinicians, researchers, and computer scientists interested in leveraging the latest advances in MDL to improve patient care and health outcomes. § ACKNOWLEDGMENTS This work was supported by the National Library of Medicine under Award No. 4R00LM013001, NSF CAREER Award No. 2145640, and an Amazon Research Award. This work is also supported by the Intramural Research Program of the National Institutes of Health, National Library of Medicine.
http://arxiv.org/abs/2307.05963v1
20230712071220
GVCCI: Lifelong Learning of Visual Grounding for Language-Guided Robotic Manipulation
[ "Junghyun Kim", "Gi-Cheon Kang", "Jaein Kim", "Suyeon Shin", "Byoung-Tak Zhang" ]
cs.RO
[ "cs.RO", "cs.CV" ]
GVCCI: Lifelong Learning of Visual Grounding for Language-Guided Robotic Manipulation Junghyun Kim, Gi-Cheon Kang, Jaein Kim, Suyeon Shin, Byoung-Tak Zhang August 12, 2023 ======================================================================================== Language-Guided Robotic Manipulation (LGRM) is a challenging task as it requires a robot to understand human instructions to manipulate everyday objects. Recent approaches in LGRM rely on pre-trained Visual Grounding (VG) models to detect objects without adapting to manipulation environments. This results in a performance drop due to a substantial domain gap between the pre-training and real-world data. A straightforward solution is to collect additional training data, but the cost of human annotation is prohibitive. In this paper, we propose Grounding Vision to Ceaselessly Created Instructions (GVCCI), a lifelong learning framework for LGRM, which continuously learns VG without human supervision. GVCCI iteratively generates synthetic instructions via object detection and trains the VG model with the generated data. We validate our framework in offline and online settings across diverse environments on different VG models. Experimental results show that accumulating synthetic data from GVCCI leads to a steady improvement in VG by up to 56.7% and improves resultant LGRM by up to 29.4%. Furthermore, the qualitative analysis shows that the unadapted VG model often fails to find correct objects due to a strong bias learned from the pre-training data. Finally, we introduce a novel VG dataset for LGRM, consisting of nearly 252k triplets of image-object-instruction from diverse manipulation environments. § INTRODUCTION Integrating natural language understanding and robotic manipulation is a longstanding goal in the fields of robotics and artificial intelligence (AI). Developing intelligent robots that can comprehend and execute natural language instructions has numerous practical applications, such as assisting users with household chores, bringing objects, and providing care for elderly or disabled people. In this context, Language-Guided Robotic Manipulation (LGRM) has been studied to develop robots that can manipulate everyday objects based on human language instructions. One of the critical components of LGRM is to pinpoint the visual objects that a human instruction describes, commonly called Visual Grounding (VG) <cit.>. Since real-world environments often contain numerous objects with similar characteristics, LGRM focuses on instructions that include diverse visual information such as object categories, attributes, and spatial relationships. As an illustration, robots should execute instructions like “please pick up the horse-shaped toy and place it in the green box on the shelf” or “could you bring me the white mug next to the vase?”. Thus, the process of VG in LGRM requires a deep understanding of diverse information in both visual perception and the semantics of natural language expressions. Many researchers in the computer vision and natural language processing communities have made remarkable progress <cit.> with diverse benchmark datasets <cit.>. Studies on LGRM <cit.> utilize deep learning-based VG models trained on large-scale datasets <cit.>. However, they do not adapt the models to the real-world robotic environment, as seen in the dotted lines in Fig. <ref>, assuming that the datasets for VG are a good proxy for the real world.
We argue that applying the VG model to the real world without adaptation severely limits the manipulation ability due to the domain gap between the training datasets and data observed in the real-world environment: the properties and quantity of objects and the environment itself affect the inconsistency of the domain. Consequently, the VG model remains poorly fitted to the robotic environment. One immediate solution for adapting the VG model is to retrain it using VG data collected through extensive human annotation. However, as demonstrated by the work <cit.>, manually constructing an environment-specific dataset is prohibitively expensive and arduous. Above all, a new dataset should be collected whenever the environment changes. To this end, we propose Grounding Vision to Ceaselessly Created Instructions (GVCCI), a lifelong learning framework for LGRM where a robot continuously learns visual grounding without human supervision. The core idea of our approach is automatically producing natural language instructions for manipulation via a pseudo-instruction generation method. Our framework first extracts the objects and their corresponding location, category, and attributes in the visual input with the off-the-shelf object detectors <cit.> and extracts relationships between the detected objects with the proposed heuristic algorithm. Then pseudo-instruction is generated with object features via predefined templates. A self-generated triplet consisting of the given image, the target object's coordinates, and instructions is stored in the buffer that stochastically forgets earlier collected data. Finally, GVCCI updates the VG model using the stored data, enabling the model to adapt to the real-world environment without requiring manual annotations. To validate the robustness and effectiveness of our approach, we collected 150 images containing numerous objects and 528 human-annotated instructions from two different environments, resulting in three distinct test sets with varying properties. We conduct offline (VG) and online (LGRM) experiments on these datasets. In the offline experiment, we identify the efficacy of lifelong learning on two state-of-the-art VG models. Regardless of the specific VG model or environment, GVCCI monotonically enhances the performance of state-of-the-art VG models through lifelong learning, with up to 56.7% compared to the Zero-Shot (i.e., not adapted) VG model as synthetic VG data accumulates. Furthermore, we conduct the online experiment using a real-world arm robot as depicted in Fig. <ref>. Results from the experiment show that our framework increases the manipulation performance by up to 29.4%. To summarize, our contributions are mainly three-fold: * We introduce GVCCI, a lifelong learning framework that enables intelligent robots to continuously learn visual grounding by generating pseudo-instructions, thereby circumventing human annotation. * We demonstrate the efficacy of GVCCI through experiments conducted in both online and diverse offline settings and highlight the importance of real-world adaptation for a state-of-the-art visual grounding model. * We propose a novel visual grounding dataset for a pick-and-place task, VGPI, comprising 825 images collected from two robotic environments, including 528 human instructions and 252,420 self-generated instructions. § RELATED WORK §.§ Visual Grounding Visual grounding (VG) aims to localize objects described in the referring expression. 
Various deep learning-based approaches <cit.> have been proposed to tackle VG, but vision-language pre-training (VLP) <cit.> has emerged as the most dominant approach. Based on the transformer architecture <cit.>, VLP models are first pre-trained on a large-scale vision-language dataset to learn cross-modal semantics and then fine-tuned on a VG dataset. While these approaches utilize predetermined and static datasets, GVCCI stands apart by performing lifelong learning through generating synthetic expressions from continually perceived images. To generate synthetic expressions for VG, our method uses various object information, such as category, attribute, location, and spatial relationships. Especially, referring expressions for robotic manipulation often need spatial relationships (e.g., in front of, next to) to disambiguate the objects with a similar appearance. Thus it focuses more on generating expressions that include relational information than existing query generation methods <cit.>. §.§ Language-guided Robotic Manipulation Extensive works of Language-Guided Robotic Manipulation (LGRM) have been studied in desire of the natural interaction with robots. We categorize these works into two streams: grounding language to action <cit.> and grounding language to vision <cit.>. We focus on the latter stream of works that aims to ground language to objects in an image to successfully manipulate referred objects. Although purposeful approaches have grounded language to vision in robotics, earlier works studied strong constraints. A work <cit.> used pre-defined object categories which limit the supported instruction. Some studies classified manipulation targets from instructions but assumed to know the information of the objects, such as location, beforehand <cit.>. Thus many studies have aimed to guide a real-world manipulation task with unconstrained language instruction and unknown object information, taking different perspectives to amortize such complexity. A few studies <cit.> tried to learn spatial information in a given scene and human instructions, and <cit.> tried to handle ambiguous human instructions with human-robot interaction. Other methods, such as <cit.>, took advantage of additional non-linguistic modalities such as gesture and audio. These works tackled an essential problem to conduct LGRM, but most of the works took advantage of a pre-trained model on existing VG datasets without any adaptation to the environment, performing manipulation tasks with zero-shot predictions of VG (See Fig. <ref>). This discrepancy between the training and the inference domain is mainly responsible for frequent failures of VG and subsequent failures in LGRM in the complex real world. A work <cit.> relieved this problem by adapting the VG model with the labeled dataset of their environment. Nevertheless, labeling VG data for every shifted domain the robot encounters is impractical. A work <cit.> has the most analogous motivation as ours, raising a problem of domain discrepancy of VG when conducting LGRM. They alleviated this problem by learning VG with automatically sampled referring expressions in simulation and transferring the knowledge to the real world. However, object categories need to be defined in simulation, and additional human labor is needed when transferring the knowledge to the real world, making robots hard to adapt in a lifelong manner. 
§ METHOD GVCCI consists of 5 modules: (1) visual feature extraction module, (2) instruction generation module, (3) probabilistic volatile buffer, (4) visual grounding model, and (5) manipulation module. The overall framework is shown in Fig. <ref>. Notations. The robot's visual perception at time step t consists of I^t, the 2D RGB input, and the corresponding 3D point cloud. The visual perception contains K objects O^t = {o_k^t}_k=1^K. We also define O_rel(k)^t⊂ O^t as a set of objects that are associated with o_k^t. Objects in O_rel(k)^t is either an object nearby o_k^t or has the same object category as o_k^t. Examples of objects in O_rel(k)^t are shown with the dotted bounding boxes in Fig. <ref> where o_k^t is depicted with the solid bounding box. Our approach generates the instructions for picking U_k^t={u_k,l^t}_l=1^L and placing V_k^t={v_k,l^t}_l=1^L regarding o_k^t. In the inference phase, the robot grounds the location of the target object b̂_pick^t and target place b̂_place^t utilizing visual information I^t and human instructions u^t, v^t. §.§ Visual Feature Extraction Module The visual feature extraction module takes I^t from the visual perception and extracts three visual features for each object in O^t: category features C^t, location features B^t, and attribute features A^t. First, Faster RCNN <cit.> is used to extract the category features C^t={c_k^t}_k=1^K and the location features B^t={b_k^t}_k=1^K. Each element in the location features (i.e., b^t_k) consists of the top-left and bottom-right coordinates of the object bounding box. Meanwhile, bottom-up-attention <cit.> extracts object attribute features A^t={a_k^t}_k=1^K. We follow the type of category and attribute features defined in the Visual Genome dataset <cit.>, consisting of 1600 object categories and 400 attributes (e.g., color and material). §.§ Instruction Generation Module Inspired by the success of pseudo-supervision generation methods <cit.>, our module aims to generate plausible pick-and-place instructions with the features obtained from the visual feature extraction module. When generating referring expressions for o_k^t, visual features regarding o_k^t and O_rel(k)^t are used, denoted as c_k^t,b_k^t,a_k^t and C_rel(k)^t,B_rel(k)^t,A_rel(k)^t, respectively. Additionally, using the location features b_k^t and B_rel(k)^t, a set of spatial relational features R_k^t with related objects are extracted with the proposed heuristic algorithm, f. The algorithm compares b_k^t and B_rel(k)^t and draws three types of relational features R_k^t: (1) relationship with objects of the same category (e.g., u^t_k,1: “cup in behind" in Fig. <ref>), (2) relationship with objects of the same category and attribute (e.g., u^t_k,2: “yellow cup in behind" in Fig. <ref>), and (3) relationship with objects nearby the referred object (e.g., v^t_k,3: “yellow cup next to the blue ball" in Fig. <ref>). Using the features c_k^t,b_k^t,a_k^t,R_k^t,C_rel(k)^t,A_rel(k)^t, referring expressions for each object are generated through the pre-defined templates in Tab. <ref>. The full instruction is built with command terms and expressions. For example, pick instruction u^t_k,2∈ U_k^t in Fig. <ref> is generated by < “Pick”+Template 3 >. For generating place instructions V_k^t, prepositions (e.g., in front of) are also attached to the expression. For example, v^t_k,3∈ V^t_k in Fig. 
<ref> is generated by < “Place”+{preposition}+Template 4 >[Note that the place instruction generated here is not for placing the object o_k^t but for placing the target object of the pick instruction, o_i^t ∈(O^t ∖{ o_k^t }), at the destination nearby o_k^t. See the example instruction v^t_k,3 in Fig. <ref>.]. In summary, the generated referring expressions are used to build two separate kinds of instructions: pick instructions that target the object o_k^t and place instructions that target a destination position nearby o_k^t. For pick instructions, referring expressions are concatenated with command terms (e.g., pick up, grasp the), building the whole pick instruction queries U_k^t. For place instructions V_k^t, generated referring expressions are concatenated not only with command terms (e.g., place, put) but also with prepositions (e.g., in front of, on the right side of). Our approach produces synthetic instructions for all K objects in the given image, constructing U^t={U_k^t}_k=1^K and V^t={V_k^t}_k=1^K. §.§ Probabilistic Volatile Buffer An image I^t, the collected instructions U^t and V^t, and the bounding box features B^t are saved to the buffer and later used to train the VG model. Ideally, all data collected at τ=0,1,...,t should remain and be used for training. However, since our method assumes learning in a lifelong manner (t →∞) with a finite hardware memory size, our buffer has a mechanism that forgets formerly generated data when the buffer overflows. We implement the buffer to forget earlier data with the following exponential forgetting probability: p_forget(τ, t)=min( (e^-γ(τ-t)-1) / (e^γ M-1), 1 ), where M indicates the maximum number of images the buffer can hold and t is the current time step. At time step t, the forgetting probability of data collected before time step τ=t-M is 1, which implies that data before this time step are deleted. For data collected after time step τ=t-M, the probability decreases exponentially as τ increases and reaches 0 when τ=t. γ≥ 0 is a hyperparameter that controls how fast the forgetting probability decays. To summarize, the buffer preferentially forgets the earliest collected data, maintaining the maximum size of the buffer at M. §.§ Visual Grounding Model The Visual Grounding (VG) module utilizes pick-and-place instructions to infer the target object to pick and the destination to place. Thus, we train two separate models, the Grounding Entity Transformer (Get) and the Setting Entity Transformer (Set). Get is a Transformer <cit.> based model that grounds the object entity to pick, and Set is another Transformer based model that infers the destination to place the object. As in Sec. <ref>, the instruction generation module produces the place instructions using prepositions. Accordingly, the bounding box coordinates B^t should be shifted according to the preposition. For example, for the place instruction “place it in front of the red cup", the corresponding bounding box still refers to the red cup. We thus shift the bounding box to the front of the red cup. Likewise, all bounding boxes B^t for place instructions are turned into B̃^t.
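To illustrate the flavour of the template-based instruction generation described above, the toy sketch below composes pick and place instructions from detected object features. The command terms, prepositions, and template wording here are illustrative stand-ins (the paper's actual templates in the referenced table are not reproduced), and the feature strings would come from the visual feature extraction module.
```python
import random

# Illustrative stand-ins for the paper's templates; category/attribute/relation
# strings come from the visual feature extraction module in the real pipeline.
PICK_COMMANDS = ["pick up", "grasp"]
PLACE_COMMANDS = ["place it", "put it"]
PREPOSITIONS = ["in front of", "behind", "on the left side of", "on the right side of"]

def referring_expression(category, attribute=None, relation=None):
    """Compose a referring expression from detected object features."""
    phrase = f"{attribute} {category}" if attribute else category
    return f"{phrase} {relation}" if relation else phrase

def make_instructions(category, attribute=None, relation=None):
    expr = referring_expression(category, attribute, relation)
    pick = f"{random.choice(PICK_COMMANDS)} the {expr}"
    place = f"{random.choice(PLACE_COMMANDS)} {random.choice(PREPOSITIONS)} the {expr}"
    return pick, place

print(make_instructions("cup", attribute="yellow", relation="next to the blue ball"))
# e.g. ('pick up the yellow cup next to the blue ball',
#       'put it in front of the yellow cup next to the blue ball')
```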
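The exponential forgetting rule of the probabilistic volatile buffer can likewise be sketched directly from the equation above. The buffer layout, the choice of γ, and the per-step pruning strategy below are assumptions made for illustration, not the authors' implementation.
```python
import math
import random

def p_forget(tau, t, M, gamma):
    """Exponential forgetting probability for a record collected at time tau."""
    if gamma == 0:                        # assumed degenerate case: hard cutoff at t - M
        return 1.0 if tau <= t - M else 0.0
    return min((math.exp(-gamma * (tau - t)) - 1.0) / (math.exp(gamma * M) - 1.0), 1.0)

class VolatileBuffer:
    def __init__(self, max_images, gamma=0.05):
        self.M, self.gamma, self.items = max_images, gamma, []   # items: (tau, record)

    def add(self, t, record):
        self.items.append((t, record))
        # stochastically drop older records so the buffer stays near size M
        self.items = [(tau, r) for tau, r in self.items
                      if random.random() > p_forget(tau, t, self.M, self.gamma)]
```
Records older than t - M are dropped with probability 1, while the most recently collected records are almost always kept.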
Based on the backbone models <cit.>, we separately train Get and Set with (I^t, U^t,B^t) and (I^t, V^t,B̃^t), respectively. Note that the remaining data in the probabilistic volatile buffer is used for training. Finally, we optimize Get and Set by minimizing the negative log-likelihood of the bounding box as follows: ℒ_Get=-∑_d ∈ D^∑_k=1^K∑_l=1^LlogP_θ_1(b^d_k|I^d,u^d_k,l); ℒ_Set=-∑_d ∈ D^∑_k=1^K∑_l=1^LlogP_θ_2(b̃^d_k|I^d,v^d_k,l). At the inference phase, Get and Set infer the location of the referred object b̂^t_pick and the destination b̂^t_place, given I^t and pick and place instructions (u^t,v^t). §.§ Manipulation Module The manipulation module takes predicted 2D bounding boxes b̂^t_pick and b̂^t_place from the visual grounding model and computes the 3D pick-and-place coordinates. It first matches the 2D bounding boxes to the point cloud with the identical resolution as I^t and segments points of the object inside the boxes using RANSAC<cit.>. Computing the target coordinates by averaging the estimated coordinates of segmented points, the module plans the actual manipulation trajectory of the robotic arm using the MoveIt<cit.> software. § EXPERIMENTS Our method is evaluated in offline and online settings and compared to various baseline methods. In offline settings, we conduct a comprehensive analysis to demonstrate the proficiency of the VG model (i.e., Get). In online settings, we validate GVCCI on language-guided robotic manipulation (LGRM) in Env2 with a real-world arm robot. §.§ Experiment Setup Environment setup. We built two environments, Env1 and Env2, scattering arbitrary everyday objects on the table. These environments were designed to imitate real-world scenarios, such as a table, kitchen, or room, intentionally populating with objects of similar categories and attributes. In most cases in such environments, a simple referring expression is insufficient for the unique identification of an object. For example, in Fig. <ref>, “green cup” is not enough to identify the cup to which the expression refers. Two environments are different in many ways, such as objects, background, robotic hardware, and robots' viewpoint, including the angle and distance from the objects. Particularly in Env2, the portion of the object area is about three times larger than those in Env1 on average (see Sec. <ref>). Dataset. We construct a novel VG dataset VGPI (Visual Grounding on Pick-and-place Instruction), which contains images from both Env1 and Env2. First, Env1 data consists of two test sets (i.e., Test-H and Test-R) and a train set. Test-H includes 60 images and corresponding 212 instructions and the target object's bounding box coordinates, and Test-R has 60 images with 180 instructions and coordinates. The images in Test-H contain a relatively Higher number of objects, with an average of 11.4 objects, while the images in Test-R contain a Reduced number of objects, with an average of 7.4. We construct the train set with 540 images and the generated instructions, 97,448 for Get and 114,422 for Set. Env2 data consists of Test-E (E stands for Enlarged object size in Env2) and a train set. Test-E includes 30 images containing 7.5 objects on average, along with 68 instructions and the bounding box coordinates. The train set for Env2 has 135 images and corresponding synthetic instructions; 19,511 for Get and 20,999 for Set. 
In total, VGPI consists of 825 images, 460 human instructions for pick, 68 human instructions for place, 116,999 self-generated instructions for pick, and 135,421 self-generated instructions for place. Robotic system setup. Our robotic platform consists of a Kinova Gen3 lite for manipulation, an Intel RealSense Depth Camera D435, and an Azure Kinect DK for RGB images. The platform communicates with the remote workspace conducting VG and computes the manipulation planning locally. §.§ Offline Experiment Evaluation protocol. We evaluate Get with the Intersection over Union (IoU) score, which is a dominant metric in visual grounding (VG). The IoU compares the ground-truth and predicted bounding boxes and divides their overlapping region by their union. We report [email protected] scores, which measure the percentage of predicted regions with an IoU score greater than 0.5. IoU = Area of overlap/Area of union = |b ∩b̂|/|b ∪b̂| Compared methods. We report results along a combination of two different dimensions: (1) backbone VG models (i.e., OFA <cit.> and MDETR <cit.>) and (2) three methods (i.e., Zero-Shot (Z-S in Tab. <ref>), PseudoQ, and GVCCI). The Zero-Shot method evaluates the backbone VG models (i.e., OFA <cit.> and MDETR <cit.>) on the real-world environment without any adaptation. The PseudoQ approach evaluates Get adapted to the real-world environment using the instructions generated by PseudoQ. To ensure a fair comparison, all modules and settings, except for instruction generation, were kept identical to those used in our framework. All of the methods are evaluated on the Test-H, Test-R, and Test-E datasets, and with four different time steps in log scale τ∈{8,33,135,540}[At each chosen time step, the model is updated for an epoch with the received images on top of the former model (e.g., at time step 33, the model is updated on top of the model from time step 8).] except for the Zero-Shot method[The time step for the Zero-Shot method is 0.]. Results. We present our results for the offline setting in Tab. <ref>. The results indicate that GVCCI outperforms the baseline methods in locating target objects across all test sets (Test-H, Test-R, and Test-E) and backbone models (OFA and MDETR). It is worth noting that GVCCI with τ=8 in Env1 (i.e., Test-H and Test-R) yields dramatic performance gains after learning only eight scenes, compared with the Zero-Shot method. The performance monotonically increases as the observed scenes accumulate in the buffer. On the other hand, the PseudoQ method fails to demonstrate incremental improvements, indicating an early overfitting of the model. This indicates that the instructions generated by PseudoQ cause the VG model to overfit to a small amount of data. In Env2 (i.e., Test-E), we observed relatively modest improvements in performance and even a performance degradation at τ=8. We noticed that Faster RCNN <cit.> performs poorly in the presence of the noisy backgrounds in Env2, occasionally generating obstructive pseudo instructions. However, when compared to PseudoQ, GVCCI exhibits superior performance, which can be attributed to its robustness to errors from Faster RCNN while generating higher-quality pseudo instructions. §.§ Qualitative Analysis We visualize examples of VG results from both Get and OFA <cit.> in Fig. <ref> to analyze the cause of the considerable performance gain in Sec. <ref>. We categorize the cause into three main reasons. First, we observed that Get performs high-dimensional inference.
In example C, the model has to deduce that the green cup in the instruction means the left one. Exploring these examples, we observe out-of-the-box reasoning capabilities from Get trained on self-generated instructions. Second, we observe that OFA results in undesirable grounding due to some biases induced by the VG dataset <cit.>. For example, A and D in Fig. <ref> show the bias of relational terms in OFA. Specifically, example D shows the extreme case of the bias, finding an utterly irrelevant object fooled by the term “rightmost.” Furthermore, as seen in example B, OFA seems to ground phrases that occupy a longer portion of the instruction, i.e., grounding “the middle green bear” rather than “Cup.” Meanwhile, our method seems robust to these biases by learning with plentifully generated relational expressions from diverse object combinations. Finally, we discovered the effect of the domain discrepancy between RefCOCO and our environment on the model's prediction. Since the objects queried in RefCOCO occupy larger portions of the image than those in our environment, OFA tends to ground objects that occupy a larger portion of the image, as seen in examples A, B, C, E, F, and G. In other words, OFA tends to choose the most probable object that ‘is to be’ queried. Example E shows that OFA neglects all the information except for the “yellow cup”, choosing the largest object. On the other hand, our method seems to relieve this problem by adapting to all the detected objects from our environment. In addition, example A shows a bottleneck of inference in a robotic manipulation environment. Since OFA tends to ground larger objects, the model occasionally grounds instructions to the robot arm that appears in the robot's visual perception, which is inevitable in robotic manipulation tasks. Adaptation with its own perception helps a robot to learn that a part of its body is “not” a desired object to be chosen. To delve into OFA's tendency to choose larger objects, we visualize three probability density functions[Probability densities are approximated with Gaussian Kernel Density Estimation with a smoothness factor of 3.] in Fig. <ref>: ℙ_Get and ℙ_OFA, regarding the size of the object bounding boxes predicted by Get and OFA, respectively, and ℙ_all, regarding the size of all the objects. In addition, we visualize the density of objects that are predicted without instructional conditions to clearly observe the bias exhibited by OFA in Fig. <ref>-(b). It is worth noting that the values of ℙ_OFA extend beyond the maximum object size, indicating that when not conditioned by instructions, OFA is prone to infer large boxes that do not correspond to any objects. Get, on the other hand, seems to choose relatively diverse objects in the image regardless of size. In Fig. <ref>-(a), despite being conditioned by the instruction, OFA still tends to ground larger objects and in some cases even predicts boxes over areas containing no objects. We also compare the similarity of the distributions with the scaled Wasserstein distance, W. For (a), W(ℙ_OFA, ℙ_all)=47.73 and W(ℙ_Get, ℙ_all)=4.69. For (b), W(ℙ_OFA, ℙ_all)=95.21 and W(ℙ_Get, ℙ_all)=3.44. The distances W(ℙ_OFA, ℙ_all) and W(ℙ_Get, ℙ_all) in both (a) and (b) numerically indicate that ℙ_Get is more similar to ℙ_all than ℙ_OFA is, providing further evidence that Get is well adapted to the environment. §.§ Online Experiment We conduct Language-Guided Robotic Manipulation (LGRM) with a real-world arm robot loaded with Get and Set.
The task is divided into four consecutive procedures: (1) inferring the accurate object location referred to by the instruction, (2) grasping the target object with its bounding box coordinates predicted by Get, (3) reasoning about the target location from the place instruction with Set, and (4) placing the target object at the target location. We perform the online experiment by reconstructing the scenes in the Env2 test data. Evaluation protocol. LGRM was evaluated on four checkpoints: the success rates of pick inference, pick manipulation, and place inference, and the overall pick-and-place accuracy. Evaluating pick manipulation is straightforward; we regard an attempt as correct if the robot picks up the target object referred to in the instruction. However, assessing the accuracy of pick-and-place operations can be subjective for several reasons. For instance, objects like a cup may remain lying down at the target location, leading to ambiguity about whether the attempt succeeded. Additionally, people may have different interpretations of the correct location for placing the object according to the given instruction. Therefore, we assess the accuracy of pick-and-place via human evaluation. Specifically, three participants made decisions based on (1) the pick-and-place instruction and (2) images before and after the manipulation was performed. Finally, the participants also evaluated the pick inference and place inference scores, since the success criteria can differ from person to person. We report the average scores from all the participants. Compared methods. Similar to the offline experiment, we compare GVCCI with two methods: (1) Zero-Shot and (2) PseudoQ <cit.>. However, as discussed in <ref>, the Zero-Shot method cannot infer the target location for placing, since the target location is a vacant region in most cases. Thus, we built a simple rule-based framework for the place operation. The framework first splits the instruction into three parts: a command term (e.g., “place it"), a relation term (e.g., “in behind"), and a referring expression term (e.g., “the yellow cup")[Annotators were guided to use relation terms among {in front of, in behind, on the left/right side of} when annotating place instructions for a valid comparison, so that the rule-based parser works at inference.]. The VG model then infers the bounding box of the reference object using the referring expression term and shifts the bounding box coordinates by a fixed extent according to the relation term. Consequently, we compare GVCCI with two methods: (1) a Zero-Shot method using OFA for pick and OFA with the rule-based method for place, and (2) Get and Set trained with synthetic instructions from PseudoQ. Additionally, we benchmarked a variant of GVCCI that uses Get for pick and Get with the rule-based method for the place operation. Results. As shown in Tab. <ref>, Get and Set trained with GVCCI remarkably outperform both OFA and Get and Set trained with PseudoQ <cit.> on all checkpoints. In particular, our method showed an absolute gain of 29.41 percentage points in overall Pick&Place accuracy compared to OFA. The robot using OFA achieved only a 35.29% success rate in pick manipulation (16.18 absolute percentage points below its Pick Inference accuracy), often failing to grasp the correct object even when the model inferred it successfully. On the other hand, the robot using Get achieved a 73.53% success rate in pick manipulation on average (8.13 absolute percentage points below its average Pick Inference accuracy), grasping objects more reliably from Get's predictions than from OFA's.
We found that the main reason is the tighter bounding boxes that OFA yielded, as shown in Fig. <ref>-G. Since leveraging all of the object's point cloud information plays a dominant role in manipulating the object, information loss due to a tighter bounding box can lead to failure in manipulation. On the other hand, Get inferred looser bounding boxes, fully exploiting the object point cloud information and leading to fewer failures in pick manipulation. Inferring the target destination point from place instructions was most accurate when using Get with the rule-based method. However, Set gave comparable performance, nearly matching Get with the rule-based method. Since the rule-based method can only handle place instructions that use the exact relation terms provided, Set is expected to give better results when handling free-form place instructions. To summarize the results, our proposed approach enhances the capability to comprehend human instructions and perform manipulations with greater precision, compared to both trained methods (i.e., PseudoQ) and non-trained methods (i.e., Zero-Shot OFA). § CONCLUSION This paper presents a novel lifelong learning framework for visual grounding in object-centric robotic manipulation tasks without human supervision. Our framework significantly enhances visual grounding in three real-world pick-and-place scenarios, surpassing the performance of state-of-the-art visual grounding methods and enabling robots to follow human instructions more accurately. Although our work does not claim to capture all possible human instructions, we believe it represents a valuable step towards making intelligent robots capable of understanding natural language instructions and becoming omnipresent in our daily lives. § ACKNOWLEDGEMENTS We extend our gratitude to Jungmin Lee for her invaluable contribution to video editing, Suhyung Choi for assistance in data collection, and Minjoon Jung for valuable suggestions on the writing. We also express our appreciation to all the reviewers for their insightful comments and feedback.
http://arxiv.org/abs/2307.04341v1
20230710045017
Stroke Extraction of Chinese Character Based on Deep Structure Deformable Image Registration
[ "Meng Li", "Yahan Yu", "Yi Yang", "Guanghao Ren", "Jian Wang" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Stroke Extraction of Chinese Character Based on Deep Structure Deformable Image Registration Meng Li, Yahan Yu, Yi Yang, Guanghao Ren, Jian Wang August 12, 2023 ================================================================================ Stroke extraction of Chinese characters plays an important role in the field of character recognition and generation. Most existing character stroke extraction methods focus on image morphological features. These methods usually lead to errors in cross-stroke extraction and stroke matching because they rarely use stroke semantics and prior information. In this paper, we propose a deep learning-based character stroke extraction method that takes semantic features and prior information of strokes into consideration. This method consists of three parts: image registration-based stroke registration that establishes the rough registration of the reference strokes and the target as prior information; image semantic segmentation-based stroke segmentation that preliminarily separates target strokes into seven categories; and high-precision extraction of single strokes. In the stroke registration, we propose a structure deformable image registration network to achieve structure-deformable transformation while maintaining the stable morphology of single strokes for character images with complex structures. In order to verify the effectiveness of the method, we construct two datasets respectively for calligraphy characters and regular handwriting characters. The experimental results show that our method strongly outperforms the baselines. Code is available at https://github.com/MengLi-l1/StrokeExtraction. § INTRODUCTION Stroke extraction of Chinese characters refers to extracting every single stroke of a character based on matching with templates consisting of standard ordered strokes. Stroke extraction is important for certain research on Chinese characters. In the field of character recognition, the experiments in <cit.> show that further disassembly of the stroke structure of characters can significantly improve the accuracy of character recognition. In other research fields, such as the evaluation of calligraphy, an important part of traditional Chinese culture <cit.>, and character generation <cit.>, stroke extraction is also of great significance. Analyzing the characteristics of Chinese characters, the difficulties of Chinese character stroke extraction mainly include the following three aspects. First, there are more than 7000 commonly used Chinese characters, and most of them have a complex structure. Second, the shapes of character strokes are simple and differ only structurally. These indistinguishable features make direct recognition of cross strokes within a character difficult. Third, the unfixed number of strokes in different Chinese characters makes it difficult to build a stroke extraction model. Existing research on stroke extraction of Chinese characters, including some deep learning-based methods, mostly focuses on image morphological features of strokes and radicals <cit.>. Although these methods can achieve remarkable results, their core is only morphological analysis, which has two drawbacks or constraints: (1) the prior information of the character strokes is rarely used, and (2) the semantic analysis of the strokes is lacking. Rarely using prior information and stroke semantics can lead to errors in the separation of cross strokes and the matching of strokes with the template in the stroke extraction of complex characters.
Inspired by the stroke extraction process of humans, we propose an efficient stroke extraction method of Chinese characters which not only separates strokes but also establishes the matching with the reference template. This method takes semantic features and prior information of strokes into consideration and mainly includes three steps: (1) stroke registration based on the Structure Deformable Image Registration Network (SDNet); (2) stroke segmentation based on the Image Semantic Segmentation Network (SegNet); (3) single stroke high-precision extraction based on the Single Stroke Extraction Network (ExtractNet). For human cognition, the prior information of Chinese character images refers to the basic knowledge of the position and shape of the strokes. To obtain the prior information, we use the image registration to establish a rough mapping relationship between the reference character strokes and the target character. The transformed reference strokes based on the mapping relationship are used as prior information of the target stroke positions and shapes. The semantic features of the strokes need to be stable during the registration-based transformation for effective prior information. However, the existing image registration methods <cit.> usually cause the Chinese stroke to be severely distorted when characters have complex structures. To solve the problem, we propose SDNet using multiple linear mapping planes to replace the native single mapping surface. SDNet can maintain the stability of stroke semantic features while transforming deformably stroke structure. Our main contributions are summarized as follows. * We propose a novel deep learning-based stroke extraction method, which takes semantic features and prior information of strokes into consideration more adequately and achieves significant improvement in the separation of cross strokes and matching of strokes with the template. * We propose a structure deformable image registration method, which performs better in the registration of image structure. § RELATED WORK §.§ Image Registration Image registration is a process of establishing pixel-level correspondences between different images with matched image contents. In the past decades, image registration usually extracted and matched feature regions first, such as closed-boundary regions, edges, corners, and other shape features <cit.>. By evaluating the transformation model, like elastic model <cit.> and discrete model <cit.>, the transformation relationship of these feature regions and entire image is established. Later, some researchers began to use deep learning techniques to enhance the feature extraction and matching, based on an iterative framework or reinforcement learning <cit.>. Recently, with the proposal of Spatial Transformer Networks (STN) <cit.>, the grid sample block with gradient backpropagation has facilitated the direct application of deep learning <cit.>, which effectively promotes the improvement of the image registration. However, the existing deep learning-based image registration methods cannot maintain the local stability while the whole structure is freely transformed. It is not suitable for stroke registration of Chinese characters with complex structures. §.§ Stroke Extraction for Chinese Character Image For most existing stroke extraction methods of Chinese character, analyzing the ambiguous cross region is the core and primary task. 
By detecting the corners caused by interlaced strokes, <cit.> disassembled Kaiti characters into simple strokes for calligraphy robots. <cit.> used chained line segments to represent the boundaries of characters and separated interlaced strokes by detecting whether these boundaries are regular. To further improve the accuracy of stroke extraction, some template-based methods have been proposed <cit.>. These methods use stroke structure matching to establish the correspondence between sub-strokes and template strokes, which is used to merge sub-strokes created by separating strokes at cross regions. However, in existing template-based methods the template is not used in the earlier steps of cross-region detection and separation. In this case, the reference template information (prior information) is insufficiently used. In addition, character structure matching is mostly based on shape analysis and lacks the use of stroke semantic information. § METHOD The proposed stroke extraction method, shown in Figure 1, mainly includes the following three modules. (1) The prior information acquisition module, which establishes the registration between the reference strokes and the target through SDNet and uses the transformed reference strokes as prior information. (2) The stroke separation module, which separates the target character preliminarily by SegNet with the guidance of prior information. (3) The single stroke extraction module, which uses ExtractNet to extract every single stroke image of the target one by one according to the order of the reference strokes. §.§ SDNet for Prior Information Due to the complex stroke structure and various writing styles of Chinese characters, the position and shape of a stroke in the same character may be very different, which is why a highly deformable registration model is needed. However, highly distorted strokes can destroy their own shapes and reduce the validity of the prior information, which requires the registration model to ensure that the shape of a single stroke remains stable before and after transformation. To address this problem, we propose a local linear stroke spatial transformation method that constrains the transformation of a single stroke to be linear. SDNet uses a UNet as the main frame of the registration network, similar to <cit.>. The main frame convolves down from size 256×256 to 8×8 and then convolves up to size 256×256 for the output. To improve the analysis of Chinese character features, we add Chinese character recognition features of the input characters to the last four stages of the encoder in the UNet. The Chinese character recognition model is a simple convolutional network like VGG <cit.>. The input of SDNet consists of two parts: the target character image, input as Target Data t, and the reference character image, marked with different values for different stroke labels and input as Reference Data. The output of the model is a prediction of the offset coordinate vector for each pixel, which can easily be constructed as a registration field. The structure of SDNet is shown in Figure 2. Based on the existing output registration field Φ_d, as shown in Figure 2, we add a branch in the up-convolution process. This branch upsamples the data at size 32×32 by a factor of 4 and then upsamples again after one convolution to obtain the registration field Φ_e with the same size as Φ_d. Φ_e is a fine-tuning of the original registration field, and its output weight is only 0.5 that of Φ_d.
The registration field used to calculate the linear transformation is represented as Φ_s (Φ_s=Φ_d+0.5Φ_e). Due to learning from a smaller size, Φ_e is biased towards the prediction of the overall offset of a single stroke, which is suitable for the estimation of local linear transformations. During model training, both Φ_d and Φ_s are involved in the loss computation. Φ_s tends to learn the local registration of strokes under the constraint, which weakens the learning of global registration. Therefore, Φ_d is used to realize the global registration of character images to make up for this deficiency of Φ_s. In actual training, the position and shape of the reference strokes are more stable, so we use the reference strokes to mark the local regions in Φ_s and calculate the linear transformation estimation of these regions. Therefore, during model training, what we actually learn is the transformation from target to reference. This operation can reduce the noise caused by errors in the linear estimation part and improve the stability and efficiency of training. During inference, the transformation from reference to target can be obtained by calculating the inverse spatial transformation for every single stroke. §.§.§ Linear Estimation of Single Stroke Spatial Transformation Existing linear fitting methods cannot be embedded in deep networks to achieve gradient backpropagation. Inspired by the Taylor series, we construct a linear estimation method that can be used in deep neural networks: Φ_s_linear = mean(Φ_s_local)+( X-P_x) × mean(∂Φ_s_local/∂ X) + ( Y-P_y) × mean(∂Φ_s_local/∂ Y), where Φ_s_local represents the local region of Φ_s used for linear estimation. X and Y denote coordinate matrices. P_x and P_y denote the coordinates of the centroid of the local region, which can be calculated from the reference stroke. Φ_s_linear denotes the linear estimation result. Equation 1 is similar to the least squares linear fitting method, except that the slope is directly estimated as the average of the gradients to simplify the calculation. Due to the strong learning ability of deep learning, this simplification will not affect the final estimation result, but it can effectively reduce the computing workload. §.§.§ Loss for Training The loss of SDNet consists of two parts: the global registration loss L_global and the single-stroke registration loss L_single_linear. L_global includes the similarity loss L_sim_global and the smoothing loss L_smooth of Φ_d. L_single_linear is the average of all single-stroke similarity losses. Owing to the linear transformation estimation, we do not need to calculate a smoothing loss for Φ_s. Traditional image registration methods usually use Normalized Cross-Correlation (NCC) to measure the similarity of two images. However, NCC is not suitable for data with simple shapes such as stroke images. To solve this, we build a ContentNet that is trained to auto-encode stroke images with a simple Encoder-Decoder structure. The similarity loss of two stroke images is defined as the Euclidean distance of their encoding results with l_2-normalization: S_c(a,b) = Dis[norm_l_2(E(a)),norm_l_2(E(b))], where Dis denotes the Euclidean distance and E denotes the encoding function of ContentNet.
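A rough, differentiable rendering of Equation 1 is sketched below using finite differences in PyTorch. The tensor layout, the binary stroke mask, and the mask-weighted averaging are assumptions made for illustration; the authors' exact definition of the local region and its mean may differ.
```python
import torch

def linear_estimate(phi_s, mask):
    """Local linear estimate of a registration field over one stroke (cf. Eq. 1).

    phi_s: (2, H, W) offset field; mask: (H, W) binary mask of one reference stroke.
    Returns a (2, H, W) field that is linear in the x/y coordinates.
    """
    H, W = mask.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    m = mask.float()
    area = m.sum().clamp(min=1.0)
    p_y, p_x = (ys * m).sum() / area, (xs * m).sum() / area        # stroke centroid

    # finite-difference gradients of the field along x and y
    d_dx = phi_s[:, :, 1:] - phi_s[:, :, :-1]
    d_dy = phi_s[:, 1:, :] - phi_s[:, :-1, :]
    mean_val = (phi_s * m).sum(dim=(1, 2)) / area                  # mean over the stroke
    mean_dx = (d_dx * m[:, 1:]).sum(dim=(1, 2)) / area             # approximate means
    mean_dy = (d_dy * m[1:, :]).sum(dim=(1, 2)) / area

    return (mean_val[:, None, None]
            + (xs - p_x) * mean_dx[:, None, None]
            + (ys - p_y) * mean_dy[:, None, None])
```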
The final loss in the training process is defined as: L_sum(t,r,t_s,r_s,Φ_d,Φ_s) = λ L_single_linear(t_s,r_s,Φ_s)+ L_sim_global(r,t,Φ_d )+γ L_smooth(Φ_d), L_single_linear(t_s,r_s,Φ_s ) = 1/stroke_num∑_i = 1^stroke_num S_c(r^i_s,Φ_s_linear^i ∘ t^i_s ), L_sim_global(r,t,Φ_d) =S_c(r,Φ_d ∘ t), L_smooth(Φ_d)=mean ((∂Φ_d/∂ X)^2 + (∂Φ_d/∂ Y)^2), where Φ_s_linear^i is the linear transformation estimation result corresponding to the reference single stroke r_s^i and the target single stroke t_s^i. Considering that the global registration result will have a greater impact on Φ_s, we apply a larger weight to Φ_d, especially to L_smooth, and set λ and γ to be 0.5 and 5, respectively, in order to ensure a stable and better registration result of Φ_d. §.§ SegNet for Separating Strokes Roughly There are 32 basic strokes for Chinese characters. However, there is a high similarity between these basic strokes, and the number of strokes is seriously unbalanced. Therefore, we divide the 32 basic strokes into 7 categories manually based on the following three rules: (1) The number of strokes used in common Chinese characters within a category is as balanced as possible. (2) The similarity of stroke shape is as high as possible within a category and as low as possible between categories. (3) The probability of stroke crossing within a category is as low as possible. We use a network architecture adapted from the Deeplabv3 model <cit.> as the main frame of SegNet to segment strokes guided by prior information. Considering cross strokes, we construct SegNet as a multi-label model. The loss of SegNet is the average of the binary cross-entropy of output and label. The input of SegNet consists of two parts: Target Data and Prior Data. Prior Data is composed of reference single strokes that are linearly transformed by SDNet. Strokes in Prior Data with different categories are marked with different values. In the training process, in order to improve the generalization of SegNet, we apply a random position offset of at most 5 pixels to every single stroke in Prior Data. §.§ ExtractNet for Single Stroke Extraction §.§.§ Input Data As shown in Figure 3, ExtractNet has five inputs to provide sufficient prior information and stroke semantic information. Table 1 shows the details of these inputs. For ExtractNet, Segment Data and Reference Stroke Transformation Data provide the major information required for stroke extraction. Considering the possible segmentation errors of SegNet, we add SegNet Feature and Target Data to supplement the information not included in the Segment Data. The Reference Stroke Transformation Data can only roughly mark the location and shape of the target stroke. For high-precision stroke extraction, we add Reference Segment Transformation Data to provide relative positional relationship information together with the Reference Stroke Transformation Data. In this way, through the spatial transformation of the STN Block, the transformed reference information can be further registered to the target data, which provides more accurate prior information of stroke position and shape. Furthermore, within a Chinese character, the size of different strokes may vary greatly, which weakens the learning of small-sized strokes. Therefore, we adaptively scale and crop the images of the input data and labels to eliminate the size difference between strokes.
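For reference, the composite training objective in Equations 3–6 above can be sketched as follows. Here `S_c` (the ContentNet similarity) and `warp` (a spatial-transformer-style resampling of an image by a registration field) are assumed to be provided elsewhere, and the tensor shapes are illustrative assumptions.
```python
import torch

def sdnet_loss(S_c, warp, t, r, t_strokes, r_strokes, phi_d, phi_s_linear_list,
               lam=0.5, gamma=5.0):
    """Composite SDNet training loss following Eqs. 3-6.

    S_c(a, b): ContentNet similarity; warp(img, phi): spatial transform of an image
    by a registration field (e.g. STN grid sampling). phi_d: (N, 2, H, W) field.
    """
    # Eq. 5: global similarity between reference and warped target
    l_sim_global = S_c(r, warp(t, phi_d))

    # Eq. 4: average single-stroke similarity under the per-stroke linear fields
    l_single = torch.stack([
        S_c(r_s, warp(t_s, phi_lin))
        for r_s, t_s, phi_lin in zip(r_strokes, t_strokes, phi_s_linear_list)
    ]).mean()

    # Eq. 6: smoothness of the dense field (mean squared spatial gradients)
    d_dx = phi_d[:, :, :, 1:] - phi_d[:, :, :, :-1]
    d_dy = phi_d[:, :, 1:, :] - phi_d[:, :, :-1, :]
    l_smooth = (d_dx ** 2).mean() + (d_dy ** 2).mean()

    return lam * l_single + l_sim_global + gamma * l_smooth        # Eq. 3
```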
§.§.§ Structure and Loss The structure of ExtractNet is shown in Figure 3, which mainly includes two parts: the STN Block and a simple convolutional network used to extract strokes. In the beginning, we quickly compress the input to 1/4 of the original size by two layers of convolution. The STN Block is used to further register the reference information to the target. The output is a single-channel stroke image. We use binary cross-entropy to calculate the loss between the output and the label. § EXPERIMENTS §.§ Datasets and Reference Data To evaluate our method, we construct two stroke extraction datasets, for calligraphy and regular handwriting, which basically cover the main application fields of stroke extraction. * The Chinese Calligraphy Character Stroke Extraction Dataset (CCSEDB): CCSEDB has a total of 5000 character samples, consisting of calligraphy characters and printed calligraphy characters. We carefully select the characters of CCSEDB to maintain a balance between the number of strokes and the stroke structure. Each record in CCSEDB contains a target image and single-stroke label images of the target image arranged in reference stroke order. * The Regular Handwriting Character Stroke Extraction Dataset (RHSEDB): We construct RHSEDB referring to <cit.> based on the online handwriting dataset CASIA-OLHWDB <cit.>, which contains 28,080 pieces of handwriting data written by 17 writers in total. The format of each piece of data in RHSEDB is the same as in CCSEDB, while the images of the writing track of each stroke are normalized to a width of 6 pixels (the size of the stroke image is 256×256 pixels). * Reference data: We construct reference data for CCSEDB and RHSEDB, respectively. For CCSEDB, due to the large stroke area, we use character images and the corresponding single stroke images of the Kaiti font as reference data. For RHSEDB, due to the thin stroke width, we use the skeleton images and the corresponding single stroke skeleton images of the Kaiti font as reference data, which are also normalized to a width of 6 pixels. §.§ Implementation Detail In the experiments, for both CCSEDB and RHSEDB, we use 90% of the data for training and 10% for testing. The training images for SDNet, SegNet, and ExtractNet have a resolution of 256×256. The training data are binarized, except for a few that need to be marked with three-channel values. The three models are trained progressively with a batch size of 8 for 40, 10, and 20 epochs, respectively. Their learning rates are initialized to 0.0001 and decrease by a factor of 0.5 every 10, 2, and 5 epochs, respectively. §.§ Stroke Registration Results In this task, we build experiments on two representative image registration methods: TpsStn <cit.> and VoxelMorph <cit.>. TpsStn uses the idea of STN to realize TPS registration of two images. Due to the small number of control points, this method maintains local stability well but lacks sufficient deformability. In order to fully verify the registration effect of TpsStn, we evaluate TpsStn with two different control point grids, 4×4 and 8×8. VoxelMorph is a typical image registration method that can theoretically achieve maximally deformable transformation by predicting the offset of each pixel. To quantify the effect of the prior information constructed from the registration results, we evaluate both stroke position and stroke shape. For the stroke position, we estimate the quantitative result with the centroid pixel distance mDis of the single strokes.
For the stroke shape, which is mainly reflected in the size, we estimate the quantitative result with the IOU mBIou of the bounding boxes of the single strokes. mDis = 1/n∑_i=0^n Dis(centroid(rt_s^i),centroid(t_s^i)), mBIou = 1/n∑_i=0^n IOU(box(rt_s^i),box(t_s^i)), In Equation 7 and Equation 8, t_s^i denotes a single stroke of the target character and rt_s^i denotes the corresponding transformed reference stroke in the prior information produced by SDNet. Dis refers to the Euclidean distance. A smaller mDis and a larger mBIou indicate more accurate prior information. As we can see from Table 2, SDNet performs much better than the other baseline methods in the quantitative results of prior information. This means that our method provides more precise prior information for the target character. As shown in Figure 4, SDNet has the best results, which is manifested in higher structural deformability and more stable single-stroke morphology. Especially in Figure 4 (d) and (e), this advantage is more prominent when the target structure is quite different from the reference structure. We believe that this is mainly due to the registration branch Φ_e and the linear estimation of single stroke spatial transformation. Local linear estimation can construct different transformations for every single reference stroke, even for cross strokes, while providing a linear constraint. This is the source of the structural deformability, which is enhanced greatly by the registration branch Φ_e. However, TpsStn and VoxelMorph need to balance deformation and smoothness because each of them has only one registration field. §.§ Stroke Extraction Results We find that morphological analysis methods based on deep learning improve greatly over traditional methods for stroke extraction. Therefore, we only compare against the best recent deep learning-based stroke extraction methods, <cit.> named Path-MatchNet and <cit.> named PathNet. PathNet separates strokes by predicting the probability that two pixels belong to the same stroke using a deep learning-based image semantic segmentation method. Path-MatchNet adds stroke matching on the basis of PathNet to further improve the accuracy of stroke extraction. Referring to <cit.>, we construct two evaluation methods that employ the same evaluation strategy but differ slightly based on whether matching with the reference stroke is required. The evaluation considering matching is defined as: mIOU_m = ∑_i=0^n IOU(rt_s^i,t_s^i), The evaluation without considering matching is defined as: mIOU_um = ∑_i=0^n IOU(rt_s^i,maxCross(rt_s^i,t_s)), In Equation 9 and Equation 10, t_s^i denotes a single stroke, t_s denotes all of the single strokes of the target character, and rt_s^i denotes the single transformed reference stroke in the prior information from SDNet. maxCross refers to obtaining the single stroke image of the target character with the largest intersection area with rt_s^i in t_s. As shown in Table 3, our method performs much better in mIOU_um and mIOU_m than the baseline methods. As shown in Figure 5, our stroke extraction results have higher precision and are almost 100% accurate in matching with the reference strokes, which benefits from the use of prior information and stroke semantic information. As shown in Figure 5 (a) and (d), PathNet and PathMatchNet have lower stroke extraction precision in cross-stroke areas. That is because they only use deep learning for morphological stroke trajectory analysis and lack analysis of the semantics.
For stroke matching, the simple position and morphological feature similarity calculation in PathMatchNet cannot guarantee the accuracy of matching, especially for Chinese characters with a large number of strokes, as shown in Figure 5 (a) and (b). §.§ Ablation Study To evaluate the effect of prior information and stroke semantic information, we design ablation experiments for SegNet and ExtractNet. As shown in Table 4 and Figure 6, the prior information has a significant effect on SegNet because of the high similarity between strokes. Compared to SegNet, the prior information has an even greater effect on ExtractNet, as shown in Table 5. This is because the prior information provides the main cues for analyzing the position and shape of the current single stroke in ExtractNet. Semantic information has a smaller effect on ExtractNet, because the Target Data can supplement the missing information when the Segment Data is insufficient. Nevertheless, using semantic information further improves the precision for ambiguous strokes, as shown in Figure 7. §.§ Limitations Finally, as shown in Figure 8, we show typical errors in the results of our method. * Scribbled strokes: Scribbled strokes often lead to densely intersected strokes and indistinguishable stroke morphology, which usually lead to failure of stroke segmentation and single-stroke extraction, as in Figure 8 (a). * Excessive difference in local stroke structure: As shown in Figure 8 (b), excessive local structure differences between reference and target usually lead to large registration errors, which make the prior information of these local strokes unusable. § CONCLUSION In this paper, we propose an efficient stroke extraction model for Chinese characters which takes the semantic features and prior information of strokes into consideration. In our method, SDNet establishes the registration relationship between reference strokes and target strokes to provide the prior information of stroke position and shape for the target character. The prior information can guide SegNet to roughly segment the strokes of the target character. With the prior information and the segmentation results, every stroke is extracted with high precision by ExtractNet. Furthermore, to solve the registration problem of characters with complex stroke structures, we propose a new method for Chinese character image registration called SDNet. The use of the local linear stroke spatial transformation method in SDNet ensures the deformability of the stroke structure while maintaining the stability of the single-stroke shape during the transformation. Experiments show that our method performs better in stroke extraction than the baseline methods and can be used for a wide range of characters, including calligraphic characters and handwritten characters. In addition, SDNet performs better in the registration of image structure. We believe that prior information and stroke semantic information are the keys to the stroke extraction of Chinese characters. In our future work, we will pay more attention to studying new image registration methods based on SDNet and stroke segmentation methods for irregular characters.
http://arxiv.org/abs/2307.06255v1
20230712155038
Machine learning and Topological data analysis identify unique features of human papillae in 3D scans
[ "Rayna Andreeva", "Anwesha Sarkar", "Rik Sarkar" ]
cs.LG
[ "cs.LG", "math.AT", "math.DG" ]
Machine learning and Topological data analysis identify unique features of human papillae in 3D scans Rayna Andreeva, Anwesha Sarkar, Rik Sarkar August 12, 2023 =============================================== § INTRODUCTION The tongue is a highly sophisticated, heterogeneous anatomical structure and its operation is fundamental to speech, friction regulation and oral processing of food. The surface of the tongue is covered with tiny projections known as papillae, which enable the perception of taste, texture and oral mechanics. Of these numerous anatomical projections, fungiform papillae are considered phenotypic markers of the chemosensation of taste as they house the taste buds <cit.>, whereas filiform papillae, which are devoid of taste buds, are considered to be regulators of mechanoreception <cit.> for textural perception. Women are believed to have more fungiform papillae and are classed more frequently as supertasters <cit.>. On the other hand, an increased number of papillae has been found to be associated with enhanced fat perception <cit.>. In addition to taste perception, papillae on the tongue are responsible for mechano-sensing. Mechano-sensing refers to our ability to sense texture, friction, lubrication and touch on the tongue surface, and is carried out mainly by the numerous filiform papillae that act as fine strain-amplified sensors on the tongue surface. These sensory functions are critical for the manipulation and transport of food and liquids in the mouth <cit.>. Such textural properties also influence our psychological reaction to food. For example, feelings such as satiety and therefore hunger are influenced by the perception of friction and lubrication <cit.>. It has recently been shown that our preference for certain foods such as chocolate is driven by surface lubrication that can be measured by artificial tongue-like surfaces <cit.>. Besides food preferences, there is burgeoning interest in understanding the complex morphology of the tongue due to its involvement in various age-related oral conditions <cit.>, mucosal degeneration and systemic diseases <cit.>. Certain medical conditions <cit.> and inter-individual differences are known to be associated specifically with the morphology of the papillae and the tongue. Understanding the finer details of papillae morphology and the differences in papillae structures can thus inform the fabrication of novel bio-inspired artificial surfaces in biomedical engineering, food engineering and therapeutics <cit.>. The intricate geometry of the tongue at a microscopic scale can be appreciated in 3D scans (see Figure <ref>). These images are obtained via surface reconstruction of 3D optical scans of a silicone-polymer mask of a human tongue. Fungiform papillae (Figure <ref>(b)) are larger, sparsely distributed over the surface, and have a simple hemisphere-like shape. The average diameter of a fungiform papilla is about 878μm <cit.>, and they are clearly visible in larger images (Figure <ref>(a)). The filiform papillae show a more intricate crown shape (Figure <ref>(c)). They are smaller (about 355μm in diameter) and substantially more numerous. A square centimeter of human tongue surface is estimated to contain between 100 and 200 filiform papillae <cit.>. Although there has been significant research on the importance of papillae density, our understanding of the papillae shapes and surface properties of the tongue suffers from the difficulty of extracting and analysing the geometry of papillae at microscopic scales.
Previous studies have thus focused on manually localising papillae from 2D images <cit.>, primarily focusing on fungiform papillae <cit.>. Other works on biological surface data have used conformal geometry and computational topology at larger scales. Examples of such techniques include shape registration <cit.>, segmentation and topological data analysis<cit.>. Machine learning has recently emerged as a powerful technique for diagnosis where large volumes of medical data or images are available <cit.>. These approaches have largely focused on computing global functions such as a medical diagnosis from an image. However, to date there is no machine learning model that has classified microscopic tongue papillae based on 3D tongue scans. Herein, we present the first study of the 3D shapes of filiform and fungiform papillae in humans, with an emphasis on the variations in the microscopic geometry seen in Figure <ref>. We develop a machine learning based framework applied to custom designed topological and geometric properties – called features – to understand one fundamental issue: What separates one type of papillae from another? We also ask the questions whether papillae are unique across and within individuals based on finer geometric details. Instead of applying machine learning as a black box application, we use statistics and explainable machine learning <cit.> to differentiate one type of papillae from another and identify the most distinctive features. We follow the process of Topological Data analysis, where implicit shapes in data are extracted as topological features that form the basis of machine learning models. However, in addition to topological features, we also make use of geometric features computed from discrete curvatures to understand the uniqueness of tongue papillae. These features together are seen to have a high accuracy of 85% in correctly identifying the papilla type (filiform or fungiform) in a small segment of a surface. As a result, we can now map the papillae arrangement for the first time – including filiform papillae that are critical for developing biorelevant tribological surface and unravelling mechano-sensing – as seen in Figure <ref>. Unprecedented analysis from our model reveals differences in papillae shapes across gender, age and individuals. We find that given a papilla, the age group and gender of the participant can be predicted to moderate accuracy, and even the exact individual from among 15 participants can be identified with approximately 48% accuracy, showing the first evidence for papillae to act as a unique identifier. This study demonstrating the uniqueness of papillae geometry at microscopic length scales using discrete differential geometry and computational topology stands to benefit future development of 3D tongue models for enabling rational food design diagnosis of oral medical conditions. § RESULTS Our analytic framework processes the data, computes the features, and then applies machine learning driven analysis. We briefly explain the data processing and feature extraction. Then we proceed with a machine learning driven analysis of the feature set, prediction of gender, age and papillae type that reveals insights about papillae. The data is obtained as 3D digital scans. The process starts with taking masks of the dorsal area of tongue of participants on silicone polymers. These masks are scanned using a 3D scanner, which yields a set of 3D points. 
These points are then passed through a surface reconstruction algorithm <cit.> implemented in Meshlab <cit.>, which yields a mesh and a corresponding surface (see Figure <ref>). This process was developed by Andablo et al <cit.>. From this mesh data, we extract segments that are candidates for papillae. The extraction process is as follows. Around a point P on the surface, select the set B of points within a radius r + δ, where r=max(r_fungiform, r_filiform) μ m and δ = 100μ m, which we find to work well in practice. A plane fit to B based on the RANSAC algorithm <cit.> represents our best approximation of the plane of the segment base. The local maximum m in the segment is defined as the point M furthest away from the plane. This point is assumed to be the peak of a papilla, if present. Finally, we cut a region of radius r around m representing a candidate mesh for a papilla. Figure <ref>(a) and Figure <ref>(b) show such extracted segments for a fungiform and filiform papilla, while Figure <ref>(c) shows general surface area without any papilla. These three kinds of elements are the basis of our study. A total of 2092 segments extracted from scans of 15 participants were labeled manually as Fungiform, Filiform or None. In the statistical workflow, a random subset of the segments (called the training set) is used to develop statistical models, while the remaining (the test set) – whose labels are unknown to the model – are used to test the accuracy of the models in a task of correctly predicting the label class (called classification). All accuracies reported in this paper are accuracy on the test set of unseen data. The analysis and machine learning are carried out on a large set of features (Table <ref>). In past work<cit.>, height and radii have been found to be distinctive between papillae types. Our more comprehensive segment dataset and computational models show that these baseline features are insufficient for high accuracy automated detection of papillae type and other tasks. §.§ Features and feature visualisation Features can be considered at different scales. At the global scale, a topological invariant of the entire papilla may be a distinctive feature. At the local scale of the neighborhood of a point on the surface, geometric properties – in particular, curvature of points in the neighborhood – best characterise the local shape of the surface. Local properties can be aggregated over the entire papilla to obtain a global feature. We describe below the significance of topological and geometric quantities in this context. Topological features. In this work, topological properties are computed via persistent homology. In this approach, each vertex (for us, a point on the reconstructed surface) is treated as the center of a growing ball, and the union of these balls is observed for changing topology. One way to interpret computational persistent homology is that it monitors connected components of loops of different dimensions as they are born and die with the growth of the balls. For a comprehensive introduction see the text by Edelsbrunner and Harer <cit.>. Figures <ref>(d – i) show the persistent topological components for the three types of segments, where the scale is measured in μ m. Figures <ref>(d – f) show the persistent diagram view, where each component manifests as a point indexed by its birth and death time. The difference in distribution of the points across plots suggests that there are variations in topological features for different segments. 
Figures <ref>(g – i) show an alternative view of the same data, called the barcode view – where each bar shows the life duration of a topological component. From these sets of bars we can derive statistical features based on the distribution of bar lengths. The distribution of bars of different lengths for H_0 (connected components) is shown in Figure <ref>(j,k) as kernel density estimates. Fungiform bar lengths in Plot <ref>(j) have a higher density for shorter bars of length between 0 and 10 as compared to Filiform and None (around 0.01), and then again in the mid range between 17 and 25, where all densities achieve their maximum. There are considerably fewer longer bars for Fungiform as compared to Filiform and None, which dominate the longer bar end of the spectrum. In plot (k), showing the densities for H_1, we note that the density of short bars (lengths between 0 and 10) is higher for Fungiform (0.07), followed by Filiform (0.065) and None (0.06). Thus there seems to be one predominant region of major difference, while H_0 shows greater variation across types. Geometric (curvature) features. Curvature is locally defined at each point and is a complete descriptor of a surface. Positive curvature occurs where the surface matches a region of a sphere, for example at the top of a fungiform papilla. Sharp peaks are characterised by high positive curvature, while gentle tops, such as at the top of the fungiform papillae, have lower positive curvature. Negative curvatures are observed in saddle shaped neighborhoods, for example, around the base of papillae. In digital discrete data, where manifolds are piecewise linear (triangulated) meshes, as in our case, curvature is computed at each vertex of the mesh as the angle deficit of the manifold (see Methods section for details). For our analysis, we compute curvatures on a sample of points in the segment. The geometric features of a segment include quantities such as the maximum and minimum of the Gaussian curvature, the percentage of points with positive and negative Gaussian curvature, and other aggregated quantities (see Table <ref>). The distribution of curvatures of the segments in Figure <ref>(a-c) is shown in Figure <ref>(l). For all types of papillae, most points are seen to be concentrated around small values of curvature close to zero. In particular, fungiform papillae have more points of near zero curvature, as can be expected from fungiforms having mostly flat or gently curving surfaces. In contrast, filiform and even generic surface areas are seen to have a greater fraction of sharply curved points. Feature Visualisation. The correlation matrix of features is shown in Figure <ref> in the Supplementary material. This set of features was selected after removing features with correlation higher than 0.65. The features remaining in this matrix show little correlation with each other, implying that they capture mutually distinct information, and thus they are informative in our analysis. Correlations by papillae type are shown in Figure <ref>. PCA-based embedding of the data (Supplementary Figure <ref>) shows overlap between classes, suggesting that linear methods are insufficient for distinguishing them. However, a non-linear method called Uniform Manifold Approximation and Projection for Dimension Reduction (UMAP) <cit.> does cluster the data in ways that show clear separation between classes (Figure <ref>), implying an implicit distinction between the classes. Next, we examine these features in order to quantify more closely their usefulness.
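To illustrate how such topological summaries can be computed in practice, the following minimal sketch turns a subsampled papilla segment (a 3D point cloud) into a few barcode statistics. It assumes the gudhi library purely as an example, and the parameter values (such as the maximum edge length and the 10 μm threshold for short bars) are placeholders; the exact libraries, feature set and parameters used in this work may differ.

import numpy as np
import gudhi

def barcode_features(points, max_edge_length=60.0):
    # Vietoris-Rips persistence of the point cloud (coordinates in micrometers)
    rips = gudhi.RipsComplex(points=points, max_edge_length=max_edge_length)
    st = rips.create_simplex_tree(max_dimension=2)
    st.compute_persistence()
    feats = {}
    for dim in (0, 1):
        bars = st.persistence_intervals_in_dimension(dim)
        lengths = np.array([d - b for b, d in bars if np.isfinite(d)])
        if lengths.size == 0:
            lengths = np.array([0.0])
        total = lengths.sum()
        p = lengths / total if total > 0 else np.full_like(lengths, 1.0 / lengths.size)
        feats[f"persistent_entropy_H{dim}"] = float(-(p * np.log(p + 1e-12)).sum())
        feats[f"short_bars_H{dim}"] = int((lengths < 10.0).sum())   # bars shorter than 10 micrometers
        feats[f"max_bar_H{dim}"] = float(lengths.max())
    return feats

# Example usage on a segment subsampled to 1000 points:
# segment = np.loadtxt('papilla_segment.xyz')   # hypothetical file of x, y, z coordinates
# idx = np.random.choice(len(segment), 1000, replace=False)
# print(barcode_features(segment[idx]))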
§.§ Feature analysis and feature importance Various features may have different levels of importance in the distinction between papillae. The importance of a feature is a fundamental question in the field of explainable machine learning, and is usually determined by its contribution to a classification model. It is a somewhat complex measure that is difficult to derive by looking at the feature in isolation. For our purposes, we use the technique called permutation feature importance <cit.>, and compute the contribution of these features to a class of standard classifiers called Kernel SVMs. The permutation feature importance method evaluates a feature f by nullifying f of the test data and observing the drop classification accuracy of the model. A large drop in accuracy implies f is an important attribute for the classifier model. The effect of nullifying f is achieved by permuting the values of f among the test data points. Figure <ref>, shows three most important features in determining each of the four labels of interest to us: the papillae type, the gender, the age and the participant id. The main observation here is that certain topological features are seen to be consistently important in these tasks (Figures <ref>(a-d)). Topological features overall are also found to contribute more to prediction accuracy than other features (Figure <ref>(e)). Type prediction features. The KDE plots of the most important features for the papillae type classification task are presented in Figure <ref>, and the box plots and the aggregated distributions are shown in Supplementary Figure <ref>. The three distinctive features are seen to have very different distributions for the different types of segments, which explains their effectiveness in classification. Gender prediction features. We have two topological and one curvature feature at the top three for gender prediction task, whose box plots and aggregated distributions can be found in Figure <ref>. Persistent entropy (0) (Figure <ref>(a)), Maximum Gaussian curvature (Figure <ref>(b)) and Short bars (1) (Figure <ref>(c)) are all important features for determining gender. Figure <ref>(a), shows that the female participants tend to have a higher median value of the max Gaussian curvature (which holds for both Fungiform and Filiform) as compared to male participants, which could be linked to female papillae being `sharper', or `pointier'. Age prediction features. Topological features also dominate the age-prediction task, as seen in Figure <ref>. The box plots and aggregated distributions of the top three features are presented in Figure <ref> – Persistent entropy (0)(Figure <ref>(a)), Amplitude(Image,0) (Figure <ref>(b)) and Maximum Gaussian (Figure <ref>(c)) are the most important features for the age classification task. The baseline features (Height, Radius) are not amongst the most essential for this task, suggesting that their characteristics do not differ much for the two age groups in this study. An interesting observation is that height is more important than radius. The distributions can be seen in Figure <ref>. Similar to the gender-prediction task, the Maximum Gaussian curvature feature (Figure <ref>(b)) is one of the most important. The median for the younger age group is 0.269 (n = 840) and for the older is 0.166 (n = 640), implying that there is significant difference between the two groups, with the younger group having `pointier' papillae. This holds both for Fungiform and Filiform. 
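The analysis described above can be reproduced in outline with standard tools. The following sketch assumes a feature matrix X (one row per segment), class labels y and scikit-learn; it trains an RBF-kernel SVM, reports the balanced accuracy on held-out data and ranks features by permutation importance. The preprocessing, hyperparameters and exact scoring used in this work are not reproduced here.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import balanced_accuracy_score
from sklearn.inspection import permutation_importance

def fit_and_explain(X, y, feature_names, seed=0):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=seed, stratify=y)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X_train, y_train)
    bal_acc = balanced_accuracy_score(y_test, model.predict(X_test))
    # Permutation importance: shuffle one feature at a time in the test set and
    # record the resulting drop in score, here over 30 repetitions
    imp = permutation_importance(model, X_test, y_test, n_repeats=30,
                                 random_state=seed, scoring="balanced_accuracy")
    ranked = sorted(zip(feature_names, imp.importances_mean, imp.importances_std),
                    key=lambda t: -t[1])
    return bal_acc, ranked[:3]   # balanced accuracy and the top three features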
§.§ Predicting gender, age and participant from papillae structure Having understood the differences in papillae structure based on gender and age, we ask if one can easily predict gender, age and the participant given a papilla. Specifically, we ask if the papillae and the features identified above contain sufficient information to allow simple statistical methods to carry out accurate prediction. §.§.§ Gender prediction In this task we predict the biological gender of the participants. The classification performance is presented in Table <ref>. The models trained on topological features result in accuracy of 0.65, outperforming the curvature features by 0.05 and baseline features by 0.14. Using all the features together marginally improves accuracy to 0.67. §.§.§ Age prediction The participants are split into two groups depending on their age. The cut-off is 29 to achieve a close to equal split. The classification statistics are shown in Table <ref>. The results follow similar pattern to the gender prediction task. The topological features on their own achieve classification accuracy of 0.73, closely followed by curvature with 0.67. The baseline features are behind by almost 0.10, with a score of 0.58. Combining the features once again improves accuracy to 0.75. §.§.§ Participant identity prediction In this task we predict the participant from their papillae. The balanced accuracy of the topological features (0.39) are almost double that of curvature features (0.22). This is illustrated by the most important features as well, as all three of them are topological. Unlike in the previous two tasks for gender and age, here combining all the features brings a significant improvement in the balanced accuracy score to 0.48, suggesting that both the local and global information can contribute to predicting the identity of the participant. Note that while accuracies around 0.40 to 0.50 as seen here are not good on binary classification tasks, in this case the task is distinction among 15 participants. A baseline rate of random prediction in this case will produce an accuracy of only 0.06. The features thus distinguish participants to a high degree of distinctiveness. §.§ Papillae detection and type classification. The final result is the accuracy of classification based on the features (Table <ref>). The classification statistics are shown in Table <ref>. The accuracy of the topological features is better than the baseline and the curvature features, and combining all features together provides the best accuracy. We achieve balanced accuracy of 0.72 for the topological, 0.67 for the curvature and 0.62 for the baseline. Combining all the features increases the performance to 0.85. §.§.§ Application of classification model The machine learning model developed can be used for accurate papillae detection and positioning on segments from a single person's tongue. Figure <ref> shows the method accurately positions the fungiform form (in blue) and filiform (in yellow) on a tongue segment from one participant. This automated approach can thus efficiently and accurately construct maps or tongue prints from given tongue masks. § DISCUSSION We have presented here the first study of the 3D shapes of human papillae based on high resolution scans. Our study is based on a novel framework combining geometry, topology and machine learning. Past research <cit.> has focused on fungiform papillae in 2D images. 
In contrast, our microscale 3D reconstruction based approach can detect filliform papillae and non-papillated areas of the tongue, which are hard to distinguish with the naked eye and 2D images. Recent research has shown that the human perception of food is governed not only on the chemical sensation of taste, but also heavily on the mechanosensation, i.e. texture perceived by filliform papillae, for example, in the perception of soft textured delicacies such as chocolates <cit.>. Of more importance, the framework proposed here can be extended beyond the tongue papillae to the general study of shape and arrangement of microscale surface elements such as finger-like projections that are omnipresent in biology. To capture the intricate biological shape information, we have developed a pool of geometric and topological features. While 3D geometric and topological transformations have previously been used to process biological scan information <cit.>, we employ a unique approach and treat them as statistical data that are fed to a machine learning system. In this approach, curvature statistics are used for aggregated local information, while persistent homology is used for global characteristics. Based on the subject of study, other features may be used. In our analysis topological features turn out to be more informative in prediction. Recent research<cit.> has suggested that persistent homology can capture local shape information as well as global properties. Our results on tongue papillae are consistent with this idea. The analytics are based on machine learning models. The models themselves are built to predict the relevant variables of type, age, gender and participant, but our objective was to gain a better understanding of variations across classes and features. We thus used permutation feature importance to evaluate how each feature contributes to each model. From a pure accuracy point of view, large neural network models <cit.> trained on big datasets are considered the most successful current paradigm <cit.>. However, our objective in this study was to develop an interpretable framework for investigation of biological surface features, operating on relatively few samples from few participants. We have thus used simpler models that can be trained with smaller quantities of data. The accuracy of the results with simple models gives us confidence in our conclusion of feature importances and in the feasibility of highly accurate machine learning models in future research. The tasks for prediction of age group and gender suffer from the small number of participants. Machine learning models for these tasks achieve balanced accuracies of approximately 74% and 67% respectively. Note that for such binary prediction, a random prediction model achieves 50% accuracy. The results suggest that geometric and topological features do vary to an extent across these variables, but more data will be needed to confirm the result and the nature of variation. The higher max Gaussian curvature appears as an important feature for female participants and the younger age group, suggesting more sharply curved or pointy shapes in these demographics. In past research, women and younger people have been noted to have higher density of fungiform papillae, which has been attributed to variations in taste perception, and women have been observed to be supertasters more frequently <cit.>. The curvature variation implies a difference in papillae shapes that could be contributing to the sensory differences as well. 
Fungiform papillae density has been noted <cit.> to drop above an age of 65. In our study the participants were within the relatively young range of 22-37, but even within this group, shape features show enough variations to reach a classification accuracy of 74% between age groups 22-28 and 29-37. The papillae type detection results are more accurate at 85% and based on a large number of papillae, which gives us confidence that the model is truly accurate. To confirm that the models generalise to unseen participants, we carry out the Leave One Group Out test, and find that the accuracy holds up even on samples from a completely unseen participant, which confirms that the models can be used to classify and localise papillae on new tongue impressions. The papillae type model can thus be used to automatically identify filiform and fungiform papillae on scans of new tongue impressions. The individual participant model shows a 48% balanced accuracy and 51% raw accuracy. This score is not impressive in a binary classification task, but our participant prediction task is a multi-class one, with 15 possible classes. A papilla could have belonged to any one of the 15 classes, and a random predictor would have an accuracy of only 6.66%. Considering the sample sizes from different participants, (Table <ref>) a predictor that always predicts the largest class can achieve an accuracy of 11%. In comparison, the model achieves between 4 to 8 times the accuracy of these baselines based on the distinctiveness in the data of a single papilla. This distinctiveness may have multiple contributing factors – these can be true inter-individual variations as well as variations in experimental conditions in collecting the masks. The exact cause of this difference will require further study. The framework and discriminative models presented here enable deeper study of the papillae structure and their variations and arrangements. The model for localising and classifying papillae (as seen in Figure <ref>) enables the study of the overall tongue surface, or tongue prints. Such arrangements of papillae are known to influence the surface properties of the tongue and its perception abilities <cit.>. Our data and past research have shown that the distribution of papillae vary across individuals. A detailed study of this variation across various demographic parameters could reveal insights into preferences, cultures and medical conditions. Arrangements identified by our models could be used to build generative models that can fuel such insights and can create more realistic surfaces for use in food engineering and development of oral diagnostics. Ultimately, this study offers a new dimension showing papillae as an unique identifier for the first time in the literature which needs further validation using this developed method for a larger dataset of participants. § METHODS §.§ Data collection §.§.§ Collection of Human Tongue Silicone Impressions. The data in this study has been obtained from 3D optical scans of masks of real human tongues from 15 healthy participants performed using an Alicona InfiniteFocus (IF), details of the data collection has been described in a previous publication <cit.>. Negative impressions of the upper surface of the tongue were collected from (n = 15 subjects, the mean age in years is 29.1, SD=3.7), 6 male and 9 female. More detailed information can be found in Table <ref>. §.§.§ Experimental protocol. The study adhered to all relevant guidelines and regulations. 
Signed informed consent was obtained from all participants before undertaking the experimental protocol. The ethics declaration is included at the end of this section. §.§ Dataset generation for papillae Each participant's point cloud was split into two smaller parts of approximate size 13 mm by 9 mm in order to reduce the size of the point cloud. On each part, the Screened Poisson surface reconstruction <cit.> in Meshlab <cit.> is applied. Then, a number of circular segments of radius r + δ, where r is set to match max(r_fungiform, r_filiform)µm and δ = 100 µm, were extracted according to our algorithm for extracting candidates for papillae locations, described below. These segments have been manually labelled into one of three classes: fungiform papillae, filiform papillae or None (neither a fungiform nor a filiform). The final dataset consists of 414 fungiform, 1489 filiform and 190 None, resulting in 2092 tongue segments in total. The number of segments per participant can be found in Table <ref>. §.§.§ Finding Candidates for papillae locations The pipeline for segment extraction works as follows. First, we pick a random point P on the surface. Then, we select a radius r + δ of points around P, where we set r to match max(r_fungiform, r_filiform)µm. We set δ = 100 µm to fully cover any papilla in the region. After that, we fit a plane based on the RANSAC algorithm <cit.> and identify the point M furthest away from the plane, which will be a local maximum. We identify it as the centre of the segment. Finally, we cut a region of radius r around M as a candidate segment. This process is applied repeatedly to identify multiple papilla segments. §.§ UMAP for visualisation UMAP <cit.> represents data by fitting it to non-linear manifolds and thus can capture complex information. We have used the supervised version of the method for visualisation. The supervised method explicitly tries to separate known classes by embedding their connectivity graphs. As a result, it does not readily produce papillae prediction models for unknown data. We use it to test for the presence of distinctions between classes. §.§ Baseline, curvature and topological features Three sets of features are extracted from each of the selected segments – baseline, curvature and topological. §.§.§ Baseline features We use geometric measurements for baseline feature identification, which comprises two quantitative shape characteristics of the papillae: height and radius. From the data presented in Table 1 in <cit.>, based on Tukey's test for statistical significance of the means and standard deviation, the diameter (and the radius, respectively) and height are different between fungiform and filiform. Therefore, they can serve as features for distinguishing between the three classes. We note that defining the height and radius automatically is a challenging task due to the irregular nature of these structures, and to our knowledge no methods exist in the literature to date to identify these structures accurately. Human participants do this manually by observing the continuity of the papillae from the base to the tip. We use a heuristic to identify heights. The point M, identified as the local maximum for the segment, serves as the centre of the structure. Then we define the radius r as the radius value of the sphere, centered at M, which contains 90% of the points in the segment.
We compute this iteratively, by first setting the radius i to a small value and counting the number of points in the neighbourhood of radius i (we use a KDTree with FLANN <cit.> for nearest neighbor search). We then increase i in steps of 10 μm until the number of points contained in the neighbourhood exceeds 90% of all points. The value of i at the stopping condition is our candidate for the radius value, r. The computation of the height, h, depends on the value of r. It works as follows: we first cut a region of radius r around the centre, then fit a plane using the RANSAC algorithm <cit.> and find the maximum distance from the plane to the local maximum point M. This value is our height value, h. All the computations have been performed using the Python libraries and . The algorithm mimics the manual procedure that a tongue expert would use to compute these values. §.§.§ Curvature features For each x ∈ H, where H is the surface generated by the Poisson surface reconstruction process, we compute the discrete curvature as defined by Meyer et al. <cit.>. The definition in the discrete case on the triangular mesh is via the vertex's angular deficit k_H(v_i) = 2π - ∑_j ∈ N(i)θ_ij, where N(i) are the triangles incident on vertex i and θ_ij is the angle at vertex i in triangle j. The Gaussian and mean curvatures are computed using averaging over Voronoi cells and the mixed Finite-Element/Finite-Volume method <cit.>. We use the existing implementation from the Python version of Meshlab, called PyMeshLab <cit.>. We use the maximum and minimum of the Gaussian and mean curvature as features, the ratio of positively curved points to the number of all points in the mesh (k_positiveratio), and we introduce a new feature called the curvature ratio (k_ratio). Let x be the number of points of positive curvature, and y be the number of points of negative curvature. We then define the curvature ratio k_ratio to be k_ratio = y/x if y ≤ x and k_ratio = x/y if x ≤ y. The signs of the mean and the Gaussian curvature provide a lot of information about the local behavior of the surface <cit.>. We computed the discrete Gaussian and mean curvature for all meshes and calculated the number of vertices of positive and negative curvature (after the Poisson surface reconstruction filter). The ratio of positively curved points to the number of all points in the mesh is defined as k_positiveratio = x/(x + y). The full list and intuitive interpretations are provided in the Supplementary material, Table <ref>. §.§.§ Topological features We subsample the 3D point clouds to 1000 points each and compute the Vietoris-Rips complex, using the Euclidean distance as a filtration. Persistent homology <cit.> of the 3D point cloud was computed using the library <cit.> and <cit.>. We then generate 12 features, each of which is a one-number summary of the diagram, providing different topological information. For more details on persistent homology, please refer to the Supplementary material. Persistent entropy <cit.> is a measure of the entropy of the points in a persistent diagram. Concretely, let D = {(b_i, d_i)}_i ∈ I be a persistent diagram with non-infinite death times, i.e., d_i < ∞. Then, the persistence entropy of D is defined as P_E(D) = -∑_i ∈ I p_i log(p_i), where p_i = (d_i - b_i)/L_D and L_D = ∑_i ∈ I(d_i - b_i). Amplitude can be defined as the distance from the persistent diagram to the empty diagram, which contains only the diagonal points.
Here we use 2 metrics (Wasserstein and Bottleneck) and 2 kernels (persistence landscapes <cit.> and persistence image <cit.>) and the amplitude of the kernel is computed using the L2 norm. For the computation, we use the default parameters in . The Wasserstein amplitude of order p is the Lp norm of the vector of point distances to the diagonal, which is A_w = √(2)/2(∑_i ∈ I(d_i - b_i)^p)^1/p. Here we use p = 2. Similarly, the Bottleneck amplitude, A_B, is defined by letting p to ∞ in the definition of the Wasserstein amplitude. In other words, it is a fraction of the longest bar A_B = √(2)/2sup_i ∈ I(d_i - b_i). Persistence landscapes amplitude: Given a persistent diagram D = {(b_i, d_i)}_i ∈ I, its persistence landscape is the set {λ_k }_k ∈ℕ of functions λ_k(t):ℝ→ [0, ∞], where λ_k(t) is the k-th largest value of the set {g_(b_i,d_i)(x)}^n_i=1, where g_(b,d) = 0 if x ∉ (b, d); g_(b,d) = x - b if x ∈ (b, b+d/2) and g_(b,d)=-x + d if x ∈ (b+d/2, d). The parameter k is called a layer. In this work we consider the case when k = 1. Persistence image amplitude: diagrams are converted to sums of Dirac deltas. The convolution with Gaussian kernel is performed, where the computation is done over a grid with rectangular shape. The locations of the points are evenly sampled from the values of the filtration, turning it into a raster image, which is then flattened into a vector. The Persistence image amplitude is the L2 norm of that vector. §.§ Machine learning and statistics Classification models. The experiments use classes of simple models – Support vector machines (SVMs) and Logistic regression models. The implementations from <cit.> were used without modification and with the default hyperparameters. The SVMs were used with a radial basis kernel (RBF). Details of these techniques can be found in any introductory book on machine learning. We use 20% of the data for testing and the other 80% for training using a random split. The procedure is repeated 50 times. Performance Metrics for machine learning. Accuracy represents the proportion of correct predictions made by the model out of the total number of predictions. To adjust for the varying number of samples across classes, we compute the balanced accuracy. It calculates the average of the correct classification proportions for both positive and negative observations. Feature Importance. The plots are based on classification by the best balanced accuracy split of the data, and 30 permutations of the features for that split. The black line represents the standard deviation of the feature importance over the 30 runs. §.§ Ethics declarations Signed informed consent was obtained from all participants before undertaking the experimental protocol. Ethical approval for this study was granted by the University of Leeds ethics committee DREC ref: 120318/AS/245, as well as the University of Edinburgh (Reference number 2019/71645). § DATA AVAILABILITY The datasets generated and analysed during the current study are not publicly available but are available on reasonable request to authors. § ACKNOWLEDGEMENTS RA is supported by the United Kingdom Research and Innovation (grant EP/S02431X/1), UKRI Centre for Doctoral Training in Biomedical AI at the University of Edinburgh, School of Informatics. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant Agreement No. 757993). Dr. 
Efren Andablo-Reyes (School of Food Science and Nutrition, University of Leeds) and Dr. Paul Hydes (School of Dentistry, Faculty of Environment, University of Leeds) are kindly acknowledged for the collection of tongue masks and primary data development. The authors would like to thank Camille Hammersley (School of Mechanical Engineering, University of Leeds) for her technical support in using the Alicona InfiniteFocus instrument for 3D optical scanning of the tongue masks. § AUTHOR CONTRIBUTIONS STATEMENT A.S. collected the data, R.A. and R.S. conceived the experiment(s), R.A. conducted the experiment(s), and R.A. analysed the results. R.A. wrote the first draft. All authors discussed the results, commented on, revised and reviewed the manuscript. § ADDITIONAL INFORMATION The authors declare that there are no competing interests. § SUPPLEMENTARY MATERIAL
http://arxiv.org/abs/2307.10217v1
20230714143841
A scalable scanning transfer cavity laser stabilization scheme based on the Red Pitaya STEMlab platform
[ "Einius Pultinevicius", "Marian Rockenhäuser", "Felix Kogel", "Phillip Groß", "Tatsam Garg", "Ole Einar Prochnow", "Tim Langen" ]
physics.atom-ph
[ "physics.atom-ph", "physics.ins-det" ]
5. Physikalisches Institut and Center for Integrated Quantum Science (IQST), Pfaffenwaldring 57, Universität Stuttgart, 70569 Stuttgart, Germany 5. Physikalisches Institut and Center for Integrated Quantum Science (IQST), Pfaffenwaldring 57, Universität Stuttgart, 70569 Stuttgart, Germany 5. Physikalisches Institut and Center for Integrated Quantum Science (IQST), Pfaffenwaldring 57, Universität Stuttgart, 70569 Stuttgart, Germany 5. Physikalisches Institut and Center for Integrated Quantum Science (IQST), Pfaffenwaldring 57, Universität Stuttgart, 70569 Stuttgart, Germany 5. Physikalisches Institut and Center for Integrated Quantum Science (IQST), Pfaffenwaldring 57, Universität Stuttgart, 70569 Stuttgart, Germany 5. Physikalisches Institut and Center for Integrated Quantum Science (IQST), Pfaffenwaldring 57, Universität Stuttgart, 70569 Stuttgart, Germany [email protected] 5. Physikalisches Institut and Center for Integrated Quantum Science (IQST), Pfaffenwaldring 57, Universität Stuttgart, 70569 Stuttgart, Germany Many experiments in atomic and molecular physics require simultaneous frequency stabilization of multiple lasers. We present a stabilization scheme based on a scanning transfer cavity lock that is simple, stable and easily scalable to many lasers at minimal cost. The scheme is based on the Red Pitaya STEMlab platform, with custom software developed and implemented to achieve up to 100 Hz bandwidth. As an example demonstration, we realize simultaneous stabilization of up to four lasers and a reduction of long-term drifts to well below 1 MHz per hour. This meets typical requirements, e.g. for experiments on laser cooling of molecules. A scalable scanning transfer cavity laser stabilization scheme based on the Red Pitaya STEMlab platform T. Langen August 12, 2023 ======================================================================================================= § INTRODUCTION Laser frequency stabilization is an important component of many scientific investigations and technological applications of atoms and molecules. A large variety of techniques exists to stabilize a laser to an atomic resonance <cit.> or an optical resonator <cit.>, ranging from simple spectroscopic schemes to the millihertz-level techniques used to realize the most precise optical clocks <cit.> and gravitational interferometers <cit.>. However, advanced methods for laser stabilization can be complex and expensive, while in many experiments only moderate precision and bandwidth are required. A recent example is provided by experiments aiming to laser cool molecules <cit.>. In comparison to atoms, molecules feature a much more complex level structure, where decay into undesired states needs to be compensated through — sometimes up to almost a dozen <cit.> — repumping lasers. At the same time, only moderate compensation against drifts on the order of the transition linewidths is required, typically in the range of a few MHz. For such experiments, stabilization schemes that are simple, robust and scalable to many lasers are thus highly desirable. Here we present a scanning transfer cavity lock which is ideally suited to meet these requirements. Our approach utilizes the readily available, high-resolution analog-to-digital converter and fast digital signal processing capabilities of the Red Pitaya (RP) STEMlab platform <cit.>. Combining this with a simple Fabry-Perot cavity enables long-term sub-MHz-level frequency stabilization using a simple setup, which we demonstrate for up to four lasers in parallel.
Our scheme offers a cost-effective and scalable solution for stabilizing lasers that is applicable to a large variety of experiments. § LOCKING SCHEME The basic setup for our laser frequency stabilization scheme is shown in Fig. <ref>. It is based on the well-established, generic concept of a scanning transfer cavity lock (STCL). Beams from several lasers are superimposed and their transmission through a scanning Fabry-Perot cavity is monitored <cit.>. One of the lasers is frequency stabilized and serves as a reference for the scheme. This initial frequency stabilization can be realized by locking the reference laser to an atomic vapor spectroscopy or by using an inherently frequency-stable laser, such as a helium-neon laser. The positions of the cavity transmission peaks of this reference are then stabilized by changing the cavity length. This transfers the frequency stability of the reference laser to the cavity and stabilizes it against drifts. The positions of all other transmission peaks on this — now precisely controlled — frequency scale are subsequently used to generate feedback, by means of which the corresponding lasers are stabilized. § IMPLEMENTATION Similar STCL schemes have been widely used in many experiments in atomic and molecular physics <cit.>. Here, we employ RP units for a simple, cost-effective and fully digital processing of the transmission data of the transfer cavity. The RP is a multi-instrument, FPGA based platform with open-source capabilities. Fast analog-in and output channels make it very suitable for the processing of analog signals. In the context of laser stabilization, a powerful toolkit is provided for this platform by the PyRPL python package <cit.>. In terms of cavity stabilization, this package provides, e.g., an interface for analog-to-digital conversion, signal processing and generation. In principle, this makes the package suitable for realizing a STCL, but it also requires permanent data transfer between the RP platform and a personal computer (PC) for signal processing that is not carried out on the FPGA itself. If applied to setups with many lasers, this typically creates a large data transfer overhead that limits the realizable bandwidth of the stabilization. Our solution aims to minimize this data overhead, by performing all processing and monitoring tasks directly on the RP's internal CPU. It also minimizes the complexity of the optical setup by using only a single cavity for a potentially large number of lasers, and does not require any additional radio-frequency components. To realize the locking, several RP units are operated in a stand-alone way, where only the basic control commands to run and control the STCL locking loop are externally provided by the user via a local area network (LAN) connection. This connection is realized using Python modules that are run on both the RP units and the PC. In our scheme, we use a script to host servers on the RP via Python's socket package. We then employ an interface available on the RP for the use in Jupyter notebooks, which provides access to the FPGA functions. With that, the analog input and output channels can be controlled directly through the RP platform's CPU. A schematic illustration of components operated on the RP units, on the controlling PC, as well as their mutual communication is shown in Fig. <ref>. The basic setup requires one RP (RP Cavity) for the scan and stabilization of the transfer cavity to the reference laser. 
Every pair of additional lasers on the same cavity then requires one additional device (RP Laser), since each RP features two fast analog outputs and can thus provide feedback for two lasers. The cavity is scanned by bursts of triangular signals, generated using the signal generation capabilities of RP Cavity's FPGA. The frequency of the waveform (238) is chosen such that the period is a multiple of the minimum acquisition duration of the FPGA's inputs (In1 and In2). This way the acquisition time can be matched to the duration of the upwards slope. During the signal processing, the cavity piezo can settle back to its original length throughout the downwards slope. The signal acquisition and processing is triggered on all devices using a square wave, synchronized to the cavity scan and generated on the second output of RP Cavity. The detected cavity transmission is evaluated on the CPU of RP Cavity. This involves the determination of different resonance positions that need to be distinguished from one another. For that purpose, regions of interest are specified for the respective lasers in the cavity (see Fig. <ref>b). The peaks are then detected by searching for the maximum positions in the respective ranges. Using the data points next to those positions, a Savitzky-Golay derivative filter <cit.> is applied. The resulting zero-crossing is determined using linear regression to find the resonance position <cit.>. This method efficiently smoothens the data around the resonance and increases the time resolution of the peak detection algorithm. The ranges of interest as well as the necessary parameters for the peak-detection algorithm are remotely provided from a PC and are saved on the device. From the evaluated peak positions for the reference laser and the lasers L_i the relative positions Δ t_L_i (see Fig. <ref>b) are calculated to obtain the respective error signals e_L_i = Δ t_L_i/Δ t_FSR - s_L_i. The set values s_Li are defined by the user. Variations of the scan ramp are cancelled out by the division over the FSR Δ t_FSR of the cavity <cit.>. This includes effects that can arise due to the non-linearity and hysteresis of the cavity piezo. For cavity stabilization, the error e_Ref is simply defined as the distance of the resonance to a pre-defined set value. In addition to the signal processing, the locking also requires a feedback loop, which is not directly available on the RP's FPGA. We thus implemented a dedicated digital proportional–integral–derivative controller (PID controller). As the aim of our locking scheme is to compensate against slow long-term drifts of laser frequencies, we use mainly the integral (I) and proportional (P) terms of this controller. A drawback of our remote-controlled implementation of the lock is that the actual data on the RP can not be accessed while the lock is enabled without compromising locking bandwidth. For this reason an additional RP unit (RP Monitor) that is independent of the lock is used for monitoring of the cavity signal on the computer. This unit runs the same algorithms as the units used in the transfer lock, which assures that the monitored signal is as close as possible to the actual STCL signals. To prevent the transfer of relatively large data sets between RP Monitor and the user's PC, the errors of the individual peak positions can be evaluated on the unit before they are sent to the computer. This allows for faster update rates when monitoring the locking performance. 
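To make the per-scan processing concrete, the following sketch illustrates the main steps described above: a peak search within a region of interest, a Savitzky-Golay derivative around the maximum with a zero-crossing fit, the normalized error e_L_i = Δ t_L_i/Δ t_FSR - s_L_i, and a simple proportional-integral update. This is a minimal sketch and not the code running on the RP units; scipy's savgol_coeffs is used as one possible realization of the derivative filter, and all function names and parameter values are placeholders.

import numpy as np
from scipy.signal import savgol_coeffs

def peak_position(trace, roi, window=21, polyorder=3):
    # Coarse maximum within the region of interest, then a refined position from
    # the zero crossing of a Savitzky-Golay smoothed derivative
    lo, hi = roi
    i_max = lo + int(np.argmax(trace[lo:hi]))
    seg = trace[i_max - window // 2: i_max + window // 2 + 1]
    deriv = np.convolve(seg, savgol_coeffs(window, polyorder, deriv=1), mode="same")
    x = np.arange(len(deriv))
    a, b = np.polyfit(x, deriv, 1)          # linear fit of the derivative
    return i_max - window // 2 + (-b / a)   # zero crossing refines the peak position

def stcl_error(trace, roi_ref1, roi_ref2, roi_laser, setpoint):
    # Two reference peaks define the free spectral range on the time axis
    t_ref1 = peak_position(trace, roi_ref1)
    t_ref2 = peak_position(trace, roi_ref2)
    t_laser = peak_position(trace, roi_laser)
    return (t_laser - t_ref1) / (t_ref2 - t_ref1) - setpoint

class PIController:
    def __init__(self, kp, ki, output_limit=1.0):
        self.kp, self.ki, self.limit = kp, ki, output_limit
        self.integral = 0.0

    def update(self, error):
        self.integral += self.ki * error
        out = self.kp * error + self.integral
        return float(np.clip(out, -self.limit, self.limit))   # RP outputs are limited to +/-1 V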
As an alternative to this additional unit, the cavity signal can also simply be monitored with any oscilloscope. Overall this leads to a full setup of 2 + ceil(N/2) devices for N lasers on a single cavity, including RP Monitor. On each of the RPs, a continuously listening server is started by remotely running a script. Communication with these servers can be established from a PC, such that commands can be sent to the devices to control the cavity scan as well as the individual locks. Based on the monitored cavity signal, the respective setpoints and peak detection ranges of the individual locks can be initialized. The locks of the RPs can be individually started, which initiates a feedback loop on each device. Those loops block the communication on the original listening server. In order to keep the locks remotely controlled, a second host server is started on the RPs to listen for further commands. This is realized using multi-threading and multi-processing, such that it is possible to adjust any lock settings while the STCL is running. This approach also allows for slow frequency scans within the pre-defined peak detection ranges. An important benchmark for our scheme is the achievable bandwidth. Taking into account the computing power of the RP and the efficiency of our implementation, we generate feedback output up to every 10 ms, i.e. with a bandwidth of up to 100 Hz. This timescale is comparable to the scan rates used for the cavity. For comparison, it is also an order of magnitude faster than an implementation we realized using the readily available components of the PyRPL package. We observe that the main contribution to the time budget of each iteration originates from the acquisition and processing of the cavity signal. In particular, a time delay of around 4 ms, independent of the read-out buffer size of the RP, can be attributed to the conversion of the memory buffer to a Python object. A possible improvement of the bandwidth could thus be realized by implementing the whole feedback loop in C or directly on the FPGA. We estimate that this could increase the bandwidth by another order of magnitude, up to the limit of typical piezo actuators, which is in the kHz range <cit.>. However, since our main goal in this work is to minimize slow, long-term drifts of the lasers, we favor the use of Python, which results in more accessible code. § OPERATION OF THE LOCK §.§ Optical setup We tested the STCL using several realistic setups for the laser cooling of atoms and molecules. This involved different home-built and commercially available external cavity diode lasers at different wavelengths, as well as a tunable continuous-wave Ti:Sapphire laser. To address these different lasers, feedback of different forms was generated using the RP units. For the diode lasers, the feedback was applied through their diode currents or through piezo actuators controlling the length of an external Littrow cavity. In cases where only a small feedback range is required, which may thus be limited by the 14-bit resolution of the RP's output signal, the signal can be improved by combining higher output amplitudes with an attenuator. For the Ti:Sapphire laser the feedback was applied to a piezo actuator, which controls the length of an internal cavity of the laser. For the transfer cavity, we typically used a simple home-built design in our tests <cit.>. It consists of a plane and a confocal mirror, which are separated by 165.4 mm using a stainless steel spacer, resulting in a free spectral range of 906 MHz.
Given a particular wavelength range of interest, we used custom mirror coatings, resulting in a typical finesse on the order of 240. The mirrors of the cavity are mounted on piezo ring actuators, allowing for fine control of the resonator length. As indicated in Fig. <ref>a, the first piezo actuator was used for scanning and stabilization of the cavity length, while the second was used to realize a fixed length offset independently of the operation of the STCL. This latter feature is useful to easily minimize the overlap of transmission peaks from different lasers and is thus crucial for the scalability of our scheme. As the transmission resonance condition for each laser depends on its wavelength, any incidental overlap between two peaks from different lasers can be removed simply by shifting the cavity length by several free-spectral ranges (see Appendix <ref>). The same can also be achieved with a single piezo in combination with a bias voltage on the scan ramp. It should be noted that the analog outputs of the RP are limited to a range of ±1, which is not sufficient to drive a typical piezo actuator over its full range. For this reason, whenever necessary, we employed amplifiers to scale the voltages to sufficient amplitudes <cit.>. In the case of the transfer cavity, this solution not only allows for a variable gain and offset, which is useful for the initialization of the scanning range, but also reduces noise on the scanning ramp. Additional noise reduction is possible using a low-pass filter that is matched to the capacity of the piezo and the speed of the scan ramp. Our scheme is not specific to a particular cavity design and can easily be realized also with any other cavity. In particular, cavities mounted in vacuum or constructed using low thermal expansion materials could further improve the performance of the lock presented in this work. §.§ Basic locking An example result for the stabilization of a home-built diode laser is illustrated in Fig. <ref>a. This laser was set close to a transition required for the laser cooling of barium monofluoride molecules that is located at 898 <cit.>. For the reference, a commercial diode laser was stabilized to the cesium D1 line at 895 via a vapor cell FM spectroscopy <cit.>, which was implemented using PyRPL <cit.>. The error e of the laser frequency was evaluated from the cavity signals as in Eqn. <ref>, with the frequency scale obtained by multiplication with the free spectral range. In addition to the internal monitoring of the STCL, the frequency scale and the operation of the lock were also independently observed and validated using a wavelength meter throughout the tests. In order to illustrate the difference between the locked and unlocked states, the lock was intentionally operated for a time of around 20 minutes. Frequency jumps on the order of 15 indicate the times when the lock was manually engaged and disengaged. Residual high frequency noise visible on the signal was due to the inherent noise of the home-built laser. We find that the maximum time for which the lock can operate is mainly limited by the thermal drifts of the cavity, which can at some point exceed the tuning range of the piezo actuators. In a temperature stabilized labspace, we have observed this time to exceed several hours. §.§ Stability against perturbations We further investigated the effects of sudden perturbations on the performance of the STCL using a similar setup as in the previous section. 
For this, the locked laser was strongly perturbed by manually knocking the optical table or by adding current spikes to the diode laser current while the lock was engaged. The resulting monitoring signal is shown in Fig. <ref>b. Occasional jumps in the signal reach 70 and correspond to the times when the laser was perturbed. Such jumps were well within the pre-defined region of interest for the peak detection of this laser. Therefore, the STCL could always detect the resonance, the resulting frequency spikes were quickly reduced again and the laser remained locked throughout the test, without any significant drift. We note that any perturbations larger than the region of interest, such as laser mode-hops, can still interrupt the lock. This is particularly important to keep in mind when more lasers are added to the same cavity, which limits the size of the region of interest available for each laser. §.§ Scalability In order to test the scalability of our STCL scheme, we locked up to four lasers simultaneously on a single transfer cavity. Including the monitoring RP, a total of four RP units were used for the frequency stabilization of the lasers L1 to L4. For demonstration purposes, the resonances of the lasers were spaced equally throughout the scanned range. The wavelengths corresponded again to the cooling scheme for barium monofluoride <cit.>, which features transitions between 860 and 898. The monitored error signals are summarized in Fig. <ref>c, with only the first 5 minutes shown for clarity. All four lasers were successfully stabilized against any relevant frequency drifts on timescales exceeding hours. § PERFORMANCE To characterize the long-term performance of the STCL, we used a beat note of two overlapped lasers. An example is given in Fig. <ref>, where the two lasers were realized using commercial diode lasers. One of these lasers, frequency stabilized via DAVLL <cit.> to the D2-line of ^85Rb, served as a precise reference for residual drifts of the second laser. This second laser was locked using the STCL to a frequency roughly 60 away, allowing for a measurement of the beat-note signal using an oscilloscope with 200 bandwidth. For the reference laser used in the STCL scheme, a separate third laser at the same wavelength was used, stabilized with a DAVLL scheme on the ^87Rb D2 line. The use of two lasers with wavelengths close to each other reduced drifts caused by environmental factors for the STCL <cit.>. Moreover, the choice of a separate reference laser for the beat-note measurement assured that the measurements were not influenced by drifts of the reference laser. The beat-note signal was acquired with a frequency of 1 over the course of an hour. Variations in the frequency of the laser were evaluated by the peak positions of the Fourier-transformed signals. The resulting frequency drift Δ f is shown in Fig. <ref>a. Without engaging the transfer lock, the frequency of the laser drifts by more than 60 over the course of one hour. These drifts were fully compensated while the STCL was engaged. Notably, the frequency drifts over an hour were characterized by a standard deviation of 0.5, well below the short-term fluctuations of the individual lasers. The largest residual drifts observed in a large set of such test measurements performed over several weeks of operation were on the order of 1.5. The Allan deviation calculated from the measured frequency drifts σ(τ) is shown in Fig. <ref>b. 
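For reference, the Allan deviation of such a record can be estimated with a short helper of the following form (illustrative Python, assuming an evenly sampled frequency trace at the roughly 1 Hz acquisition rate of the beat note; this is not part of the published analysis code):

    import numpy as np

    def allan_deviation(freq, rate=1.0, taus=(1, 2, 5, 10, 20, 50, 100, 200, 500)):
        """Non-overlapping Allan deviation of a frequency record sampled at `rate`."""
        freq = np.asarray(freq, dtype=float)
        result = []
        for tau in taus:
            m = int(round(tau * rate))                    # samples per averaging window
            n = len(freq) // m
            if n < 2:
                break
            means = freq[:n * m].reshape(n, m).mean(axis=1)
            result.append((tau, np.sqrt(0.5 * np.mean(np.diff(means) ** 2))))
        return result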
For increasing averaging times τ up to 200, σ(τ) decreases due to averaging of short-term noise. For longer τ, an upwards trend indicates the effect of the small residual frequency drifts. Substituting the STCL's reference with a frequency stabilized helium-neon laser resulted in a similar performance with drifts of 0.8 over three hours, which can be explained by both a temperature drift around 0.5 during the measurement and the specified frequency stability of the helium-neon laser of 1 per hour. § CONCLUSION We have used the capabilities of the STEMlab platform provided by Red Pitaya to develop a STCL that is both cost-efficient and scalable. To prevent an overhead in data-transfer, Python modules were developed to implement the signal processing and feedback loop digitally directly on the RP's CPU with a bandwidth of up to 100. The STCL is remotely accessible from a PC, allowing the control and monitoring of the individual laser locks. Its scalability has been demonstrated by simultaneous stabilization of four lasers on a single cavity. We determined the long-term stability of the system to stay well below 1 over the course of an hour. These parameters make our STCL particularly useful for the laser cooling of many molecular species. We are indebted to Tilman Pfau for generous support and thank Max Mäusezahl for fruitful discussions and advice on the amplifiers used to drive the piezo actuators <cit.>. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreements No. 949431), Vector Stiftung, the RiSC programme of the Ministry of Science, Research and Arts Baden-Württemberg and Carl Zeiss Foundation. § DATA AVAILABILITY STATEMENT The Python codes used in this study are openly available in the GitHub repository of the Langen Group <cit.>. § REMOVING OVERLAPPING CAVITY TRANSMISSION PEAKS FROM DIFFERENT LASERS As shown in the main text, many lasers can be coupled into the same cavity to stabilize them using the STCL. In principle, this only requires sufficient power to reliably detect the transmission peak of each laser. The robustness of the STCL, however, depends on the size of the region of interest available around each transmission peak to uniquely detect and compensate the frequency fluctuations of the corresponding laser. Therefore the spacing between the individual peaks in the cavity transmission signal effectively limits the maximum number of lockable lasers. In particular, overlapping resonances from different lasers can easily interfere with the successful operation of the frequency stabilization algorithm. For this reason, the cavities used in our work are equipped with an additional piezo actuator to control the overall length of the cavity independent of the scanning piezo actuator that participates in the lock. When scanning the length L of the resonator using this latter piezo actuator, the free spectral range of different lasers depends on their wavelength according to the standing wave condition L = m λ /2, m∈ℕ. This wavelength dependence allows us to tune the relative resonance positions of different lasers by choosing an appropriate scan range with the second piezo. This is demonstrated in Fig. <ref>. Effectively, the mode number m is shifted by Δ m for all lasers. The change in relative position to the reference peak for a laser L can then be modeled as (λ_L/λ_Ref - 1)Δ t_FSR×Δ m. 
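As a numerical illustration of this relation (with wavelengths roughly matching the lasers used above; the exact values are not critical):

    # Relative shift of a laser peak, in units of the FSR, per mode-number step dm.
    lam_ref = 895e-9        # reference laser wavelength, m
    lam_L = 860e-9          # locked laser wavelength, m
    shift = lam_L / lam_ref - 1          # about -0.039
    print(f"peak moves by {abs(shift):.1%} of the FSR per step of dm")

A few steps of Δ m are therefore typically sufficient to separate two accidentally overlapping resonances.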
As an alternative to this approach, additional transfer cavities can be set up or the frequencies of the lasers can be shifted (e.g. using acousto-optical modulation) before coupling them into the cavity used for the STCL.
http://arxiv.org/abs/2307.04286v2
20230710002617
Numerical Investigation of Diffusion Flame in Transonic Flow with Large Pressure Gradient
[ "Yalu Zhu", "Feng Liu", "William A. Sirignano" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
Numerical Investigation of Diffusion Flame in Transonic Flow with Large Pressure Gradient
Yalu Zhu, Feng Liu, William A. Sirignano
August 12, 2023
==========================================================================================
A finite-volume method for the steady, compressible, reacting, turbulent Navier-Stokes equations is developed by using a steady-state preserving splitting scheme for the stiff source terms in chemical reaction. Laminar and turbulent reacting flows in a mixing layer with large streamwise pressure gradient are studied and compared to boundary-layer solutions. The influence of chemical reaction on the turbulent transport in the mixing layer is analyzed. The influence of vitiated air on the combustion process and aerodynamic performance is also investigated for the cases of a turbulent mixing layer and a turbine cascade.
§ NOMENCLATURE
C = molar concentration
C_p = specific heat capacity at constant pressure
E = total energy
e = internal energy
h = enthalpy
h^0 = enthalpy of formation
𝐈 = Kronecker tensor
𝐣 = diffusive flux of species
k = turbulent kinetic energy
N = total number of species
n = index of time step
Pr = Prandtl number
p = pressure
Q̇ = source term in energy equation due to reaction
𝐪 = heat flux in energy equation
R = gas constant
R_0 = universal gas constant, R_0 = 8.3145 J/mol/K
Sc = Schmidt number
T = temperature
t = time
u = velocity in x direction
𝐕 = velocity vector, 𝐕 = (u, v)
v = velocity in y direction
W = molecular weight
x, y = Cartesian coordinates
Y = mass fraction
Δt = time step
μ = viscosity coefficient
ρ = density
τ = viscous stress tensor
ω = specific dissipation rate
ω̇ = production rate of species by reaction
(·)^T = transpose of quantity
(·) with overbar = Reynolds-averaged quantity
(·) with tilde = Favre-averaged quantity
(·)_i = quantity of species i
(·)_T = quantity of turbulence
(·)_∞ = quantity at the top far away from the mixing layer
§ INTRODUCTION
To reduce weight and widen the range of operation, designers continue to pursue compact designs of the combustor and the turbomachinery of a gas turbine engine. In a compact combustor, the fuel residence time becomes shorter than the time required for complete combustion. As a result, the combustion process would be extended into the downstream turbine passage. At first sight, this only increases the challenge of heat transfer in the turbine. However, the thermodynamic analysis by Sirignano and Liu <cit.> showed that intentional, augmented burning in the turbine passage, called the turbine-burner, allows for significant benefits: 1) reduction in after-burner length and weight, 2) reduction in specific fuel consumption, and 3) increase in specific thrust. To take advantage of the turbine-burner design concept, it is necessary to address some fundamental issues of aerodynamics and combustion associated with it. In a turbine passage, the compressible turbulent flow is subjected to strong streamwise and transverse pressure gradients produced by the turbine blade profiles. The flow accelerates from subsonic to supersonic in a very short distance, creating a challenge for flameholding. Large gradients of temperature, velocity, and species concentration occur on the fuel-oxidizer interface due to mixing and combustion. These gradients can result in hydrodynamic instabilities that might significantly affect the energy conversion, heat transfer, aerodynamic loading on the turbine blades, and the character of the turbulent flow <cit.>. The highly accelerating transonic flow with mixing and chemical reaction is therefore an important area of applied scientific research.
Sirignano and Kim <cit.> obtained the similarity solution for a diffusion flame in the two-dimensional, laminar, steady, compressible mixing layer with constant pressure gradient along the streamwise direction. Fang et al. <cit.> extended the study to include cases with arbitrary pressure gradients by using a finite-difference method for the boundary-layer equations. The influence of pressure gradient, initial temperature, initial velocity, and transport properties on the ignition process and flame structure was studied. Mehring et al. <cit.> further extended the laminar boundary-layer computation to include the effects of turbulence. Cai et al. <cit.> investigated the turbulent, transonic, reacting flows in a mixing layer and curved duct by solving the two-dimensional Reynolds-Averaged Navier-Stokes (RANS) equations. The flame structures in transonic flows with large axial and transverse pressure gradients were examined. Cheng et al. <cit.> simulated the development of reacting mixing layers in straight and curved ducts from laminar flow to transition regime by solving the two-dimensional Navier-Stokes equations. These numerical computations of the reacting flows with pressure gradients are based on the boundary-layer equations or the two-dimensional Navier-Stokes equations in which the pressure gradient is provided by the variation of flow passage width in the third direction. In order to simulate the reacting turbulent flow in a real turbine, numerical methods to solve the full three-dimensional Navier-Stokes equations with chemical reaction must be developed. Since the turbine vane is downstream of the primary combustion chamber in a gas turbine engine, the gases at the turbine entrance are a mixture of unburned air and reaction products. However, a pure gas model, treating the medium as a single species with altered specific heat capacities, is usually used for flow simulation in a turbine when there is no chemical reaction and heating <cit.>. This simplification has little influence on the aerodynamic performance and heat transfer in a turbine. However, it significantly affects the heat release and flow characteristics if a combustion process is incorporated into a turbine due to the different chemically thermodynamic properties of each species. To accurately model the reacting flow and thus predict its influence on the aerodynamic and thermodynamic performance of a turbine, the vitiated air composed of unburned air and reaction products should be used at the turbine entrance instead <cit.>. The differences between pure air and vitiated air are also necessary to be evaluated. In the present paper, a code to solve the three-dimensional compressible RANS equations with chemical reaction and turbulence models by using finite-volume method is developed and implemented. The code is then applied to study the laminar and turbulent reacting flows in an accelerating mixing layer and to compare the vitiated air and pure air in the same mixing layer and a real turbine cascade. The governing equations and numerical methods are presented in Sec. <ref> and Sec. <ref>, respectively. The nonreacting laminar flow in a mixing layer is presented in Sec. <ref>. The reacting laminar case is discussed in Sec. <ref>. The reacting turbulent case is given in Sec. <ref>. The differences between vitiated air and pure air for the turbulent mixing layer are discussed in Sec. <ref>. The reacting turbulent flow in a turbine cascade is analyzed in Sec. <ref>. The concluding remarks are given in Sec. <ref>. 
§ GOVERNING EQUATIONS §.§ Reynolds-Averaged Navier-Stokes Equations The Reynolds-Averaged Navier-Stokes (RANS) equations for compressible flows are expressed by the following transport equations for mass, momentum and energy ∂ρ̅/∂ t + ∇· (ρ̅𝐕) = 0 ∂(ρ̅𝐕)/∂ t + ∇· (ρ̅𝐕𝐕) = -∇p̅ + ∇·τ ∂(ρ̅E)/∂ t + ∇· (ρ̅E𝐕) = -∇· (p̅𝐕) + ∇· (𝐕·τ) - ∇·𝐪 + Q̇ The chemical reaction in the flow is taken into consideration by the mass fraction transport equation for each species in a mixture with N species ∂ρ̅Y_i/∂ t + ∇· (ρ̅Y_i 𝐕) = -∇·𝐣_i + ω̇_i, i = 1, 2, ..., N The energy equation is expressed in terms of the total energy, E, which consists of the internal energy and the kinetic energy, i.e. E = ẽ + 1/2𝐕·𝐕 where the internal energy ẽ is related to the enthalpy h̃ by ẽ = h̃ - p̅/ρ̅ The enthalpy is the summation of the sensible enthalpy weighted by the mass fraction h̃ = ∑_i=1^NY_i h̃_i with h̃_i = ∫_T_0^TC_p,idT where the specific heat capacity at constant pressure C_p,i is a function of temperature given by the empirical polynomial formula of NASA <cit.> for each species. An additional heat source term Q̇ appears on the right-hand side of the energy equation Q̇ = -∑_i=1^Nω̇_i h_i^0 where h_i^0 is the enthalpy of formation of species i at the reference temperature T_0. The viscous stress tensor τ is the sum of the molecular stress tensor τ_L and turbulent stress tensor τ_T with τ_L = μ[∇𝐕 + (∇𝐕)^T ] -2/3μ(∇·𝐕) 𝐈 τ_T = μ_T [∇𝐕 + (∇𝐕)^T ] -2/3μ_T (∇·𝐕) 𝐈 where μ is the molecular viscosity computed by the mass-weighted summation of molecular viscosity of each species given by the Sutherland's law <cit.>, and μ_T is the turbulent viscosity determined by the turbulence model in the next subsection. The diffusive flux of species i is given by 𝐣_i = -( μ/Sc_i + μ_T/Sc_T) ∇Y_i where Sc_i and Sc_T are the Schmidt number of species i and the turbulent Schmidt number, respectively. In the present study, we set Sc_i = 1.0 and Sc_T = 1.0. The heat flux in the energy equation is computed by 𝐪 = -( μ/Pr + μ_T/Pr_T) ∇h̃ + ∑_i=1^Nh̃_i𝐣_i where the last term stands for the energy transport due to mass diffusion of each species with different enthalpy, and Pr and Pr_T are the Prandtl number and the turbulent Prandtl number, respectively. In the present study, we set Pr = 1.0 and Pr_T = 1.0. A perfect gas is assumed in this study, in which the pressure, density and temperature are related by the equation of state p̅ = ρ̅RT where R is the gas constant of the mixture, computed by the mass-weighted summation of the gas constant of each species R_i with R_i = R_0/W_i. §.§ Turbulence Model The improved kω Shear-Stress Transport (SST) model presented by Menter et al. <cit.> in 2003 is used to evaluate the turbulent viscosity. This model combines the advantages of the kε model and the kω model. In the inner zone of a boundary layer, the SST model degenerates into the kω model, which avoids the stiff source term in the kε model due to the damping function in the viscous sublayer and the defect in capturing the proper behaviors of turbulent flows with adverse pressure gradients up to separation. In the outer zone of a boundary layer or in a free-shear flow, the SST model switches to the kε model, avoiding the strong sensitivity of the kω model to freestream turbulence. In addition, the SST model takes into account the transport of principal turbulent shear stress to enhance its ability to predict turbulent flows with adverse pressure gradient and separation. 
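Before turning to the turbulence-model equations, the mixture closure used above, i.e. the mass-weighted gas constant and the temperature-dependent species enthalpies, can be illustrated with a short Python sketch; the heat-capacity coefficients below are placeholders and not the actual NASA polynomial data:

    R0 = 8.3145   # universal gas constant, J/(mol K)

    # Placeholder species data: molecular weight (kg/mol) and a crude linear
    # fit C_p(T) = a + b T in J/(kg K); illustrative values only.
    species = {
        "CH4": {"W": 16.04e-3, "a": 2200.0, "b": 1.2},
        "O2":  {"W": 32.00e-3, "a":  900.0, "b": 0.2},
        "N2":  {"W": 28.01e-3, "a": 1040.0, "b": 0.2},
        "CO2": {"W": 44.01e-3, "a":  850.0, "b": 0.3},
        "H2O": {"W": 18.02e-3, "a": 1850.0, "b": 0.4},
    }

    def mixture_R(Y):
        """Gas constant of the mixture, R = sum_i Y_i R_0 / W_i."""
        return sum(Yi * R0 / species[s]["W"] for s, Yi in Y.items())

    def mixture_h(Y, T, T0=298.15):
        """Sensible enthalpy, h = sum_i Y_i * integral_{T0}^{T} C_p,i dT."""
        return sum(Yi * (species[s]["a"] * (T - T0)
                         + 0.5 * species[s]["b"] * (T * T - T0 * T0))
                   for s, Yi in Y.items())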
The SST model is established by the transport equations for turbulent kinetic energy k and specific dissipation rate ω
∂(ρ̅ k)/∂ t + ∇·(ρ̅ k 𝐕) = P - β^*ρ̅ kω + ∇·[(μ + σ_kμ_T) ∇ k]
∂(ρ̅ω)/∂ t + ∇·(ρ̅ω𝐕) = (γρ̅/μ_T) P - βρ̅ω^2 + ∇·[(μ + σ_ωμ_T)∇ω] + 2(1-F_1)ρ̅σ_ω2/ω ∇ k·∇ω
where the production term in the k equation is
P = min(μ_T S^2, 10β^*ρ̅ kω)
The turbulent viscosity is then computed by
μ_T = a_1ρ̅ k/max(a_1ω, F_2 S)
where S is the second invariant of the strain rate tensor
S = √(2S_ij S_ij), S_ij = 1/2(ũ_j,i + ũ_i,j)
The blending functions F_1 and F_2 are defined by, respectively,
F_1 = tanh(Γ^4), Γ = min[max(500μ/(ρ̅ω d^2), √(k)/(β^*ω d)), 4σ_ω2ρ̅ k/(CD_kω d^2)], CD_kω = max(2ρ̅σ_ω2/ω ∇ k·∇ω, 10^-10)
F_2 = tanh(Π^2), Π = max(500μ/(ρ̅ω d^2), 2√(k)/(β^*ω d))
where d is the distance to the nearest wall. F_1 is set to zero away from the wall (kε model), and switched to one inside the boundary layer (kω model). F_2 is one for boundary-layer flows and zero for free-shear layers. Both of them are artificially set to zero in the turbulent mixing-layer case below. Each of the constants in the SST model is computed by a blend of the corresponding constants of the kω and kε models via ϕ = F_1 ϕ_1 + (1-F_1) ϕ_2, where ϕ = σ_k, σ_ω, β, γ. The other constants are: a_1 = 0.31, β^* = 0.09, σ_k1 = 0.85, σ_k2 = 1.0, σ_ω_1 = 0.5, σ_ω_2 = 0.856, β_1 = 3/40, β_2 = 0.0828, γ_1 = 5/9, γ_2 = 0.44. §.§ Chemistry Model The combustion of methane (CH_4) in air is considered in the current computations. The production rate ω̇_i of each species due to chemical reaction is calculated by Westbrook and Dryer's one-step reaction mechanism <cit.>
CH_4 + 2 O_2 + 7.52 N_2 → CO_2 + 2 H_2O + 7.52 N_2
where only four species, i.e. methane (CH_4), oxygen (O_2), carbon dioxide (CO_2), and water vapor (H_2O), are tracked besides nitrogen (N_2) in air. Thus, the number of species is N = 5 in the present work. Note that although this work only focuses on a one-step reaction of one type of fuel, it is straightforward to extend the present method to different fuels and oxidizers with more complex chemical reaction mechanisms if the increased computational costs are acceptable. The reaction rate of the laminar kinetics is given by the modified Arrhenius expression
ε = A T^β e^-E_a/(R_0 T) C_fuel^a C_ox^b
where "fuel" and "ox" stand for CH_4 and O_2 in this study, respectively, and C_i is the molar concentration of species i defined by C_i = ρ̅Y_i/W_i. According to Westbrook and Dryer <cit.>, A = 1.3 × 10^9 s^-1, β = 0, E_a = 202.506 kJ/mol, a = -0.3, and b = 1.3 for methane. To be consistent with the setups of Fang et al. <cit.> and Mehring et al. <cit.>, the values of A are adjusted to 2.8 × 10^9 s^-1 and 1.3 × 10^10 s^-1 in the laminar and turbulent cases below, respectively. Computations show that this change has little influence on flame development except for the ignition length. It is found that ignition may not happen if the original value of A is used in the turbulent mixing-layer case. Note that the influence of turbulence on the reaction rate has been neglected in this analysis. Since the turbulent length scale is orders of magnitude smaller than the scale across which the largest temperature gradient occurs in the ignition region of the mixing-layer flow, as analyzed by Mehring et al. <cit.>, the averaged reaction rate by Eq. (<ref>) is considered to be reasonable. The net rate of production of species i by the chemical reaction is thus calculated by
ω̇_i = W_i(v_i^'' - v_i^') ε
where v_i^' is the stoichiometric coefficient for reactant i in Eq.
(<ref>), and v_i^'' is the stoichiometric coefficient for product i. Obviously, the net production rate of nitrogen is zero. § NUMERICAL METHODS §.§ Numerical Solver An in-house three-dimensional code of simulating the steady and unsteady transonic flows for single species within turbomachinery blade rows has been developed, validated, and applied by Refs. <cit.>. The code solves the Navier-Stokes equations together with various turbulence models by using the second-order cell-centred finite-volume method based on multi-block structured grid. The central schemes with artificial viscosity, flux difference splitting schemes, and advection upstream splitting methods with various options to reconstruct the left and right states have been developed and implemented in the code. In this study, the present code is extended to the case of multiple species with varying specific heat capacities and to include appropriate chemistry models. The species transport equations (<ref>) have the same form as the basic conservation equations (<ref>) and (<ref>). Consequently, the numerical methods for the Navier-Stokes equations are still applicable, except that the chemical source terms in both species equations and energy equation need to be treated separately to avoid stiffness. The convective and viscous fluxes are discretized by the JST scheme <cit.> and the second-order central scheme, respectively. The local time-stepping method is introduced to accelerate the convergence to steady state. Thus, the time t in the governing equations (<ref>), (<ref>), and (<ref>) is interpreted as a pseudo time, and a large enough pseudo-time step determined by the local flow field can be used in each grid cell since time accuracy is not required for steady-state solutions. The Lower-Upper Symmetric-Gauss-Seidel (LU-SGS) method <cit.> is applied to the pseudo-time stepping to obtain the converged steady-state solutions. The parallel technique based on Message Passing Interface (MPI) is adopted to further accelerate the computation by distributing grid blocks among CPU processors. Note that the continuity equation (<ref>) and the N species transport equations (<ref>) are not independent of each other. The summation of the N species equations should reduce to the continuity equation, which gives the following restrictions on the terms in the species transport equations ∑_i=1^NY_i = 1, 0 ≤Y_i ≤ 1 ∑_i=1^N𝐣_i = 0 ∑_i=1^Nω̇_i = 0 These restrictions should be satisfied during the computation to maintain consistency of the final converged solution. Equation (<ref>) is automatically satisfied due to the balanced stoichiometric coefficients in Eq. (<ref>). Additional treatments should be adopted to guarantee the other two conditions. To ensure Eq. (<ref>), after each iteration for Eq. (<ref>), the mass fraction of each species is corrected as Y_i^ corr = Y_i/∑_k=1^NY_k To ensure Eq. (<ref>), the diffusive flux of each species is corrected as 𝐣_i^ corr = 𝐣_i - Y_i∑_k=1^N𝐣_k §.§ Splitting Scheme In the species and energy equations, the source terms exhibit fundamentally different physical properties from the terms of convection and diffusion due to significantly smaller time scales for chemical reactions than for the flow, resulting in strong stiffness in solving the governing equations. The operator-splitting method is a natural choice to achieve efficient integration in time for unsteady problem. Equations (<ref>) and (<ref>) can be rewritten as dW/dt = T(W)+ S(W) where W = [ρ̅Y_i, ρ̅E]^T, with size of N+1, is the state variables. 
T and S represent the transport term (convection and diffusion) and the reacting source term, respectively. The operator-splitting method integrates the two terms sequentially. Consider the integration from time step n to n+1 over the time interval of Δ t. A natural splitting scheme first integrates the Ordinary Differential Equation (ODE) of S over the time interval Δ t with the solution at the time step n as initial value, and then solves the Partial Differential Equation (PDE) of T over Δ t with the solution at the end of the first step as initial value. It is easy to prove that this simplest splitting is of first-order accuracy. The accuracy can be improved to second order by using the Strang splitting scheme <cit.>, in which the integration proceeds in a symmetric way: first a half interval Δ t/2 is taken with the S operator, then a full interval Δ t with the T operator, and finally another half interval Δ t/2 with the S operator. Both splitting schemes are strongly stable and applicable to unsteady problems. However, they are not steady-state preserving <cit.>. A numerical integration method is called steady-state preserving, if given an initial solution W_0 satisfying T(W_0)+ S(W_0) = 0, the solution of the next step remains W_0, regardless of the step size. We propose here a steady-state preserving splitting scheme for solving stiff reacting flow. The integration of Eq. (<ref>) from pseudo-time step n to n+1 over the time interval of Δ t is split into two separate sub-integrations dW^*/dt = T(W^n)+ S(W^*), (W^*)^n = W^n dW/dt = R, R = (W^*)^n+1 - W^n/Δ t In the first chemical sub-integration, Eq. (<ref>) is integrated by a stiff ODE solver over the time interval Δ t with the initial value W^n, giving an intermediate value (W^*)^n+1 at the end of the sub-integration. Since the time scale of chemical reaction is several orders of magnitude smaller than that of the flow, the transport term T is evaluated at time step n and is kept unchanged within the chemical sub-integration. Although advanced full-implicit method or Quasi-Steady-State (QSS) method <cit.> is usually chosen as the stiff ODE solver, the simplest Euler explicit integration method is applied to solve Eq. (<ref>) in the present study due to the extremely stiff source terms introduced by the empirical global reaction mechanism in the absence of reverse reaction. The chemical time step in the explicit integration is determined by the spectral radius of the Jacobian matrix ∂S/∂ W. Once the intermediate solution (W^*)^n+1 is obtained by the chemical sub-integration, the LU-SGS method is then applied to the flow sub-integration (<ref>) over the time interval Δ t to obtain the solution W^n+1 at time step n+1. In the original LU-SGS method for unsplit problems, the residual on the right-hand side is computed by the solution at the time step n, i.e., R = T(W^n) + S(W^n). However, in Eq. (<ref>), the residual is replaced by the difference of solutions at the intermediate time step and the time step n. In consideration of Eq. (<ref>), this residual can be regarded as the weighted average of T(W) + S(W) over the time interval Δ t R = (W^*)^n+1 - W^n/Δ t = 1/Δ t∫_t_n^t_n+1dW^*/dtdt = 1/Δ t∫_t_n^t_n+1[T(W^n)+ S(W^*)] dt = T(W^n) + 1/Δ t∫_t_n^t_n+1S(W^*) dt It is considered to be more reasonable to maintain the stability of the integration than that computed at time step n. 
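One pseudo-time step of this splitting can be summarized schematically as follows; the transport residual, chemical source, chemical step size and LU-SGS update are represented by placeholder callables and do not correspond to the actual solver routines:

    import numpy as np

    def split_step(W, dt, transport, source, chem_dt, lu_sgs_update):
        """Steady-state preserving split step W^n -> W^{n+1} (schematic sketch)."""
        T_n = transport(W)                     # transport term frozen at step n
        # 1) Chemical sub-integration over dt with explicit Euler sub-steps,
        #    the sub-step being limited by the chemical spectral radius.
        W_star, t = np.copy(W), 0.0
        while t < dt:
            h = min(chem_dt(W_star), dt - t)
            W_star = W_star + h * (T_n + source(W_star))
            t += h
        # 2) Flow sub-integration: LU-SGS iteration driven by the averaged residual,
        #    R = T(W^n) + (1/dt) * integral of S(W^*) over the step.
        R = (W_star - W) / dt
        return lu_sgs_update(W, R, dt)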
The contributions of transport term and reacting source term are incorporated into both the chemical sub-integration and the flow sub-integration, which guarantees that the right-hand sides are always consistent with the original differential governing equations for steady problem. In other words, the present splitting scheme is steady-state preserving. However, since the contribution of chemical source term is not included into the Jacobian matrix of LU-SGS iteration, this splitting scheme may degrade the convergence as the chemical time scale becomes much smaller than the flow time scale. Even so, this splitting scheme is attractive since it is easy to be implemented in the existing LU-SGS method. We need only reset residuals of species and energy equations before performing LU-SGS iteration. Thus, in the second flow sub-integration, the LU-SGS iteration for species and energy equations can be performed together with the other non-stiff equations, which avoids the separate solving of governing equations. It is especially important for the energy equation since it is closely coupled with the continuity and momentum equations. In addition, in contrast to the Strang splitting scheme, the computational cost of the presented scheme is smaller since only one chemical sub-integration and one flow sub-integration are necessary within one time step determined by the flow time scale. §.§ Computational Configuration and Mesh To model the combustion flow in the turbine burner, the diffusion flame in a two-dimensional steady transonic mixing layer with strong favorable pressure gradient is considered in this study. The flow condition is from the cases of Fang et al. <cit.>, Mehring et al. <cit.>, and Cai et al. <cit.>. To produce the prescribed streamwise pressure gradient in the mixing layer, a configuration of converging-diverging nozzle is created in this paper, which was not necessary in Fang's and Mehring's work since the boundary-layer approximation was made. It was also avoided in Cai's work by introducing a streamtube thickness function into the two-dimensional equations. At the inlet of the nozzle, the hot air mixed with burned gases flows into the upper side and comes into contact with the fuel vapor from the lower side. To achieve the prescribed pressure levels in the nozzle passage, given the flow conditions and nozzle height at the inlet, the downstream profiles of the upper and lower surfaces can be determined by the isentropic relations of quasi-one-dimensional flow for air and fuel, respectively. Since this is based on the assumption of quasi-one-dimensional flow without mixing and reaction, there exists a difference between the computed pressure in the diffusion flame and the prescribed one. This is eliminated by reshaping the nozzle profiles according to the pressure difference using the isentropic relations again. Figure <ref> shows the converging-diverging nozzle configuration in the turbulent mixing-layer case. To reduce the disturbance of inlet boundary condition on the downstream flow, a uniform inlet section is added ahead of the mixing layer. The upper side and lower side of the nozzle are almost symmetric, both of which rapidly converge at the inlet, gradually slow down in the middle, and keep diverging after the throat at x = 70 mm. To reduce the slope of the side surfaces near the inlet while keeping them away from the mixing layer at the throat, the nozzle height at the inlet should be carefully chosen. 
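The construction of the wall profiles from a prescribed pressure distribution can be sketched with the quasi-one-dimensional isentropic relations; the helper below is illustrative (applied separately to the air and fuel streams with their own gas properties and mass flow rates), and the subsequent reshaping step described above corrects the result for mixing and reaction:

    import numpy as np

    def stream_height(p, p0, T0, mdot_per_span, gamma=1.4, R=287.0):
        """Height of one stream (centre line to wall) for a prescribed static pressure p."""
        M = np.sqrt(2.0 / (gamma - 1.0) * ((p0 / p) ** ((gamma - 1.0) / gamma) - 1.0))
        T = T0 / (1.0 + 0.5 * (gamma - 1.0) * M ** 2)
        rho = p / (R * T)                       # ideal gas
        u = M * np.sqrt(gamma * R * T)          # velocity from Mach number
        return mdot_per_span / (rho * u)        # mdot = rho * u * height, per unit span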
The half height at the inlet is 30 mm for the turbulent case, whereas it is reduced to 3.5 mm for the laminar case due to the thinner mixing layer. As a result, the half heights at the throats for the turbulent and laminar cases are about 3 mm and 0.4 mm, respectively. Figure <ref> presents the pressure variations along the center line of the nozzle for three cases to be analyzed in Secs. <ref>, <ref> and <ref>, along with the linearly varying pressure imposed in the boundary-layer approximation. The pressure variations in the two laminar cases are sufficiently close to the boundary-layer case. However, the pressure levels for the reacting turbulent case deviate from the linear values, especially in the middle of the nozzle. This is because the thick turbulent shear layer approaches the side surfaces and is evidently perturbed by them in the downstream nozzle. This pressure difference in the reacting turbulent case is believed to have some influence on the development of the mixing layer and combustion process in it, which will be discussed later. Refs. <cit.> and <cit.> studied the diffusion flame in the mixing layer under 13 different flow conditions by solving the boundary-layer equations. One of those cases, identified as the base case, is chosen to be simulated by using the full Navier-Stokes equations in this paper. The base case, corresponding to a constant streamwise pressure gradient of -200 atm/m, has a static pressure of 30 atm at the inlet. The temperature and velocity of the air at the inlet are T_air = 1650 K and u_air = 50 m/s, respectively. Those of the fuel are T_fuel = 400 K and u_fuel = 25 m/s. The inviscid slip and adiabatic wall boundary conditions with zero normal pressure gradient are specified on the two side surfaces of the nozzle. At the inlet, the total pressure and total temperature of air and methane are fixed on the upper stream and lower stream, respectively. For turbulent cases, the turbulent intensity and ratio of turbulent to molecular viscosity for air are set as 5% and 10.0 at the inlet, respectively. Those for the fuel are 10% and 100.0. All flow quantities are extrapolated (zero streamwise gradient) at the exit flow boundary since the flow is supersonic there. This ensures that no backward waves propagate into the computational domain at the exit. The segment of the center line between the inlet and the starting point of convergence is set as a symmetric plate to avoid pre-mixing of air and fuel. Multi-block grids with matched interfaces between neighboring blocks are generated in the nozzle. Figure <ref> shows the grid for the turbulent case, which is rescaled along the vertical direction to obtain the grid for the laminar case. The vertical grid lines cluster near the inlet, whereas the transverse grid lines cluster near the center line and scatter towards the two side surfaces. The height of the first cell off the center line is less than 0.006 mm for the laminar case, and less than 0.05 mm for the turbulent case. The total number of grid cells is 38272. To check the grid independence, the nonreacting case and reacting case in a turbulent mixing layer are performed on the current grid and another fine grid with 83776 cells, respectively. Figure <ref> compares the profiles of temperature at four streamwise positions for both cases. The solutions on the two grid levels are almost indistinguishable. 
The profiles of other flow variables (not shown here for brevity) show similar behavior, indicating achievement of grid independence on the coarser 38272 grid already for the turbulent case. For the laminar case, grid independence is also reached on the coarse grid since the grid size (length of cell size) in the vertical direction is smaller than the grid for the turbulent case because of the smaller throat height. Therefore, the results in the following section are based on the 38272 grid. § COMPUTATIONAL RESULTS AND DISCUSSIONS §.§ Nonreacting Laminar Mixing Layer To demonstrate the abilities of the present code to deal with multi-component flows, we first examine the nonreacting laminar flow in the nozzle. At the trailing edge of the splitter plate (x = 0), the hot air on the upper stream starts to mix with the fuel vapor on the lower stream, producing velocity and thermal mixing layers in the middle of the nozzle. Figure <ref> shows the profiles of density and streamwise velocity at four different streamwise positions. The velocity is normalized by the corresponding freestream value in the air side at each position. The mixing layer grows in thickness along the streamwise direction, and the freestream density decreases due to flow acceleration caused by the favorable pressure gradient. The thickness of the thermal mixing layer is almost the same as that of the velocity mixing layer since unity Prandtl number is used in the computation. In addition, the mixing layer on the upper side is thicker than that on the lower side. This is because the smaller density and larger molecular viscosity resulting from the higher temperature produce a smaller Reynolds number on the upper side although the velocity is higher there. §.§ Reacting Laminar Mixing Layer The contours of temperature for the laminar mixing layer are shown in Fig. <ref>. At the trailing edge of the splitter plate, the hot air on the upper stream starts to mix with the fuel vapor on the lower stream. Chemical reaction happens shortly downstream the trailing edge. A diffusion flame is established within the mixing layer as indicated by the high-temperature region slightly biased towards the air side. The reason for that is that the momentum of fuel on the lower stream is higher than that of air on the upper stream, resulting in an upward tilting of the mixing layer. The flame continues to move upward with the mixing layer along the streamwise direction. The freestream temperatures on the two sides decrease downstream because of the flow acceleration under the favorable pressure gradient. The peak temperature within the diffusion flame also decreases along the streamwise direction due to not only the decreasing freestream temperatures but also reduced reaction rate resulting from the reduced freestream temperatures and reactant concentrations. Figure <ref> shows the profiles of density and streamwise velocity at four different streamwise positions. The results computed by Fang et al. <cit.> using the boundary-layer approximation are also shown for comparison. The present density and velocity profiles agree very well with those by Fang et al. Within the diffusion flame, however, the present density is slightly smaller and the velocity is slightly larger. This discrepancy is mainly because of the different governing equations used. In the flame, the density reaches a trough where the temperature peaks since the pressure is the same along the transverse direction. 
Along the streamwise direction, the density at the freestreams and in the flame decreases, which is consistent with the contours of temperature in Fig. <ref>. Although the normalized freestream velocity is unchanged along the streamwise direction, the peak velocity in the reaction region increases, which is different from the case in the nonreacting mixing layer in Fig. <ref>. This happens because the lighter gas in the reaction region gets accelerated more under the same pressure gradient. Figure <ref> compares the profiles of mass fraction of products (CO_2 and H_2O) at four different streamwise positions. At the positions near the inlet (x = 3 mm and x = 5 mm), both the thickness of reaction region and the peak value of mass fraction by the full Navier-Stokes equations differ from those by the boundary-layer equations. This is primarily attributed to the two-dimensional effects dominated at the initial stage of mixing-layer flow, which is neglected in the boundary-layer approximation. This is also due to the difficulty to exactly maintain a constant streamwise pressure gradient at the initial stage of the two-dimensional nozzle in the present computation. For both computations, the profiles of mass fractions vary sharply near their peaks, and the peak values obviously increase along the streamwise direction. This demonstrates that the chemical reaction dominates the flow over the molecular diffusion at the initial stage of the mixing layer. At the two downstream positions (x = 30 mm and x = 40 mm), the two computational results are quite close to each other with the reaction region of the present simulation slightly biased to the upper side, consistent with the profiles of density and normalized velocity in Fig. <ref>. For both computations, the peak mass fraction of products, corresponding to the stoichiometric reaction, keeps unchanged at the two positions, while the thickness of reaction region continuously increases. This indicates that the diffusion begins to dominate the flow as the mixing layer further develops. §.§ Reacting Turbulent Mixing Layer After validating the numerical method by the laminar case, the present method is extended to the reacting turbulent case. The flow condition is the same as that used for the laminar case, except that the nozzle height is enlarged about eight times to prevent the thicker turbulent mixing layer from touching the side surfaces downstream. To be consistent with the case in Mehring et al.'s work <cit.>, the standard kω model proposed by Wilcox <cit.> in 1988 rather than the SST model is used to evaluate the turbulent viscosity in this section. The kω model exhibits a strong sensitivity to the freestream value of ω <cit.>. We set k = 2.5 × 10^-4 m^2/s^2 and ω = 500 s^-1 at the inlet, same as the setup of Mehring et al. <cit.>. The contours of temperature for the turbulent mixing layer are shown in Fig. <ref>, along with the boundary-layer results by Mehring et al. <cit.>. Similar to the laminar case, a diffusion flame appears on the upper air side downstream of the splitter plate. The growths of both the mixing layer itself and flame are significantly larger than those in the laminar case shown in Fig. <ref> due to stronger diffusion of turbulence. In Mehring et al.'s results, the ignition occurs after a certain distance (about 10 mm) downstream from the trailing edge of the splitter plate, whereas it ignites much earlier (about 5 mm after the splitter plate) in the present computation, as indicated by the high-temperature regions. 
This discrepancy is attributed to two reasons. On the one hand, the pressure gradient is exactly prescribed as -200 atm/m in the boundary-layer equations. However, it is produced by the converging-diverging side walls in the present full Navier-Stokes equations. The pressure levels near the nozzle inlet cannot stay exactly the same as the prescribed values due to the strong two-dimensional flow caused by the large slopes of the side walls. On the other hand, the boundary-layer approximation is not sufficiently accurate at the initial stage of mixing layer where the flow is fully two-dimensional in nature. The flame by the boundary-layer approximation spreads like a straight line along the streamwise direction. In the present computation, the flame keeps straight on the whole but with slight deflection near the inlet and after the throat. The deflection near the inlet is again due to the two-dimensional effects. The flame distorts after the throat since it is quite close to the top side surface due to the strong turbulent diffusion. We mainly focus on the streamwise ranges between x = 10 mm and x = 70 mm in the following. The present chemistry model does not explicitly include the influence of turbulent kinetics. However, the chemical reaction affects the turbulent transport of flame in the mixing layer. Figure <ref> shows the contours of the reaction rate computed by Eq. (<ref>). The intense chemical reaction is concentrated around a narrow band on the upper side, where the fuel and oxidizer have been fully mixed. Note that the reaction rate on the air side is remarkably higher than that on the fuel side due to the negative exponent of fuel concentration in Eq. (<ref>). As the mixing layer develops, the reaction region becomes thick due to diffusion, while its strength reduces due to the decreased temperature and concentrations of reactants. Similar to the laminar case, the mixing-layer flow is dominated by both reaction and diffusion in the beginning, while it becomes diffusion-dominated further downstream. High velocity gradient is generated in the reaction region under the combined actions of convection and diffusion. This induces significantly strong turbulent production in the reaction region, as indicated by the contours of production rate of turbulent kinetic energy in Fig. <ref>. The production rate is computed by P = μ_TS^2 in the kω model. Another stronger-production region near the center line originates from the shear in the turbulent mixing layer. The intense production of turbulence leads to high turbulent kinetic energy and thus large turbulent viscosity in the flame as shown in Fig. <ref>. As a result, not only the diffusion in the mixing layer itself is strengthened by the turbulence, but also the turbulent diffusion in the flame region is further enhanced by the chemical reaction. Figure <ref> compares the profiles of temperature and density at four different streamwise positions. Compared to the laminar solutions in Fig. <ref>, the thickness of the turbulent mixing layer is one order of magnitude larger. However, the basic behavior remains the same. The present temperature and density agree with those by Mehring et al. in general. The boundary-layer solution is more diffusive as indicated by the thicker temperature and density shear layers. This is due to the higher production rate of turbulent kinetic energy, and thus the higher turbulent viscosity in the mixing layer and flame region for the boundary-layer approximation, as shown by the profiles at x = 37.5 mm in Fig. 
<ref>. The stronger turbulent diffusion in the boundary-layer solution induces a lower peak temperature in the middle of the flame. Figure <ref> compares the profiles of mass fraction of each species at four different streamwise positions by the full Navier-Stokes and boundary-layer equations. At each streamwise position, the mass fraction of products reaches its peak at the transverse location where both methane and oxygen are simultaneously depleted. Nitrogen is inert in the present reaction model and is thus smoothly diffused from the air side to the fuel side. The peak product mass fraction stays almost constant along the streamwise direction. This is because the balance among convection, diffusion, and production in the species transport equations is independent of the pressure gradient after ignition. Similar to the profiles of density and temperature in Fig. <ref>, the two solutions for the mass fractions generally agree well with each other. The profiles of species mass fractions by the boundary-layer equations are slightly thicker than the full Navier-Stokes equations. The peak mass fractions of products by the full Navier-Stokes equations have slightly larger values. At the first two streamwise positions (x = 12.5 mm and x = 25.0 mm), there is a local peak in the mass fraction of oxygen slightly below the center line. In addition, the peak value in the boundary-layer solution is larger than those in the full Navier-Stokes solution. This can be explained by the contours of mass fraction of oxygen shown in Fig. <ref>. The flame is ignited after a certain distance downstream from the splitter plate. Before the ignition, the flow is dominated by convection and diffusion. Due to the higher velocity on the upper air side, a significant amount of oxygen is convected to and then diffused in the lower fuel side in front of the flame. Compared to the boundary-layer solution, a smaller quantity of oxygen is transported into the lower side in the present solution since it ignites much earlier. As a result, the peak value of oxygen mass fraction is smaller. §.§ Comparisons of Pure Air and Vitiated Air To approximate the inlet flow conditions in a turbine burner, the case with vitiated air at the upper-stream inlet is considered. The vitiated air is used to simulate the exhaust gas from the upstream primary combustion chamber of a turbine engine. It is estimated that, to reach a turbine inlet temperature of 1650 K, a fuel-air mass ratio of 0.03 is needed for the stoichiometric combustion in the primary combustion chamber. As a result, the vitiated air at the turbine inlet consists of 73.77% N_2, 11.01% O_2, 8.04% CO_2, and 7.18% H_2O by mass fraction. These compositions of vitiated air are specified as the inlet boundary conditions of species mass fractions at the upper stream of the nozzle. The other flow conditions are kept the same as the pure air case. To avoid the influence of nozzle geometry, the profiles without reshaping the side walls are applied in this section. Figure <ref> compares the profiles of temperature and density at different streamwise positions by using the inlet conditions of pure air and vitiated air. Compared to the pure air case, the peak flame temperature in the vitiated air case is reduced since the lower oxygen concentration in the upper stream significantly weakens the chemical reaction. As a result, higher density levels are found in the flame for the vitiated air case under the same pressure levels as the pure air case. 
The transverse location of peak temperature and thus minimum density, corresponding to stoichiometric combustion, is sightly shifted upward for the vitiated air case. The oxygen is redistributed along the transverse direction under the actions of molecular and turbulent diffusion. However, the concentration of oxygen is globally lower in the vitiated air case. Thus, the amount of oxygen required for the stoichiometric combustion moves farther towards the upper freestream. Compared to the pure air case, the thickness of the shear layer is decreased for the vitiated air case, as indicated by the profiles of temperature and density. This is because the reduced velocity gradients in the shear layer resulting from the weak chemical reaction produce a low production of turbulent kinetic energy. Consequently, the turbulent diffusion along the transverse direction is reduced. It is interesting to note that turbulence modeling has significant influence on the developments of flow and combustion in the mixing layer. Comparison between the pure-air results by the SST model in Fig. <ref> and those by the kω model in Fig. <ref> shows that the mixing layer by the SST model develops slower. For example, the thickness of the shear layer on the air side is about 70% of the kω model at x = 25.0 mm, and it decreases to less than half at x = 50.0 mm. This weaker turbulent diffusion of the SST model reduces the mixing between air and fuel, and thus the chemical reaction, as seen from the lower peak temperature at x = 25.0 mm and x = 50.0 mm. This discrepancy between the two RANS turbulence models indicates that more elaborate turbulence modeling, such as large-eddy simulation or detached-eddy simulation, is necessary to accurately resolve the interaction between chemical reaction and turbulent flow. The profiles of mass fraction of products at different streamwise positions are shown in Fig. <ref>. Similar to the temperature and density, the thickness of profile of mass fraction is obviously reduced in the vitiated air case. Although there already exists initial CO_2 and H_2O in the freestream vitiated air, the peak mass fraction of products is still lower than that in the pure air case due to the significantly reduced chemical reaction. In fact, the flame almost becomes extinct after x =75.0 mm, as indicated by the extremely low levels of product mass fraction and temperature at x =75.0 mm and x = 100.0 mm. The upward shift of the peaks in the vitiated air case is also clearly observed. §.§ Reacting Turbulent Flow in Turbine Cascade The reacting flow in a highly-loaded transonic turbine guide vane, named VKI LS89 <cit.>, is simulated and compared for the cases with pure air and vitiated air inlets. The chord of the vane is 76.674 mm, and the pitch-to-chord ratio is 0.85. The stagger angle of the blade is 55^∘. The multi-block structured grid, as shown in Fig. <ref>, is generated for a single cascade passage with translational periodicity on the pitchwise boundaries (green curves in Fig. <ref>). There are 317 grid points around the blade (blue curve in Fig. <ref>), with the points concentrated near the leading edge and trailing edge. The dimensionless distance y^+ of the first grid point away from the blade is less than one. The total grid has 26416 cells, which are divided into 7 blocks for the parallel computation. At the inlet of the turbine cascade, methane with total temperature of 400 K is injected over part of the middle section. 
Air with total temperature of 1650 K is specified at the remaining part of the inlet section. The total pressure is uniform 166834 Pa, and the inlet flow angle is 0. The back pressure at the outlet is set as 101325 Pa. This produces an averaged streamwise pressure gradient of -20 atm/m within the cascade passage, which is an order of magnitude lower compared to the mixing-layer cases. Figure <ref> shows the contours of temperature for the pure air and vitiated air cases. Two diffusion flames, generated on the interfaces between the fuel and air, are transported downstream in the cascade passage. One of them is near the suction surface, and the other one goes through the middle of the passage. After the trailing edge, the middle branch merges with the suction-surface branch from the adjacent blade, and then they move downstream together. The variations of temperature for the vitiated air case are similar to the pure air case, but the flame temperature levels are obviously reduced. The maximum temperature within the flames is about 3200 K for the pure air case, while it reduces to about 2400 K for the vitiated air case. Same as in the mixing-layer cases, the streamwise and transverse pressure gradients produced by the curved suction and pressure surfaces have significant effects on the flow and combustion process in the turbine cascade, which can be seen from the contours of chemical reaction rate for both pure air and vitiated air cases in Fig. <ref>. The region with high reaction rate in the figure indicates the flame. Ignition immediately happens at the turbine inlet, even if the reaction rate is relatively low due to the insufficient mixing between fuel and air. Disturbed by the blade, the pressure starts to decrease along the streamwise direction near the leading edge. At first glance, the chemical reaction would be weakened by this favourable pressure gradient as in the mixing-layer cases above. However, the local velocity gradients resulting from the blade surface curvature significantly reinforce the molecular and turbulent diffusion and thus make the mixing between fuel and oxidizer sufficient enough, which consequently enhances the chemical reaction. This is why both the strength and thickness of the flames significantly increase near the leading edge. It turns out that the mixing dominates the chemical reaction over the pressure gradient at the initial stage of the turbine cascade. However, after the suction peak (x = 10 mm) on the suction surface, the two flames gradually become weak until reaching the trailing edge due to the strong favorable streamwise pressure gradient produced by the converged turbine passage. After the trailing edge, in the absence of constraints from blade surfaces, the gases within the two flames mix with the low-speed wakes from the suction and pressure surfaces. Hence, the two flames are enhanced again in both strength and thickness, and finally they merge together. The variations of temperature in the turbine passage in Fig. <ref> are clearly consistent with those for reaction rate. Note that the overall pressure gradient in the turbine passage is strongly lower than that in the mixing-layer case. Hence, extinction does not happen on any of the two flames within the accelerating turbine passage, although the reaction rate in the vitiated air case is evidently reduced. This is especially important for the flameholding in high-speed flow. Figure <ref> shows the contours of mass fraction of methane for the pure air and vitiated air cases. 
The methane from the inlet is transported downstream and continuously consumed on the interfaces between fuel and oxidizer in the turbine passage. The mass fraction of methane is significantly reduced due to the increased reaction rate after the two flames merge. However, the methane is not depleted until it is transported out of the computational domain by the main stream. This is mainly due to the excessive fuel provided at the inlet. Fuel will be depleted if we further decrease the area portion it occupies at the inlet. Chemical reaction in the turbine passage affects the aerodynamic performance of the blade. The distributions of pressure over the turbine blade for the nonreacting case, pure air case, and vitiated air case are compared in Fig. <ref>, in which the pressure is normalized by the total pressure at the inlet. The pressure distributions on the pressure surface are not affected by the combustion since the two flames in the turbine passage are far away from it. However, the pressure distributions on the suction surface are different for the three cases, especially after the suction peak. Compared to the nonreacting case, the pressure in the pure air case is higher on the suction surface since the increased temperature by the intense reaction in the passage and wake reduces the pressure diffusion in the turbine. This results in a lower pressure difference between the pressure and suction surfaces and thus reduces the aerodynamic loading of the blade. Resulting from the weaker chemical reaction, the pressure levels in the vitiated air case are between the levels in the nonreacting case and the pure air case. § CONCLUSION A finite-volume method for the compressible reacting Reynolds-averaged Navier-Stokes equations is developed by using a steady-state preserving splitting scheme to treat the stiff source terms. Laminar and turbulent reacting flows in an accelerating mixing layer are studied and compared to the boundary-layer solutions. The influence of vitiated air on the combustion process and aerodynamic performance is investigated for the cases of mixing layer and turbine cascade. For the reacting flow in the accelerating mixing layer, a diffusion flame is established slightly biased towards the air side after the splitter plate and then transported downstream. The chemical reaction strongly enhances turbulent transport due to intensive production of turbulence by the increased velocity gradients and thus produces large turbulent viscosity in the reaction region. Compared to the laminar case, the turbulent shear-layer thickness is one order of magnitude larger. However, the basic behaviors of the two cases remain the same. Vitiated air has significant influence on the combustion process and aerodynamics. In the mixing layer, the peak temperature in the flame reduces while the minimum density increases compared to the pure air case. The location of peak reaction is slightly shifted upward to the air side. The thickness of shear layer is decreased due to the reduced turbulent diffusion by the weak chemical reaction. In the turbine cascade, the variations of temperature for the vitiated air case are similar to the pure air case, but the flame temperature levels are lower. Although the reaction rate for the vitiated air case is evidently reduced, the extinction does not happen within the accelerating passage. The turbine cascade analysis indicates viability for the turbine burner concept. 
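To make the splitting treatment of the stiff chemical source terms summarized in the conclusion more concrete, the following is a minimal illustrative sketch, not the solver used in this work. It assumes a toy one-dimensional scalar with linear advection and a linear relaxation source; the function names, the first-order upwind transport step, and the relaxation constants are assumptions for illustration. The sketch simply wraps an explicit transport step between two exact source half steps (Strang-type splitting), which remains stable for arbitrarily large source stiffness, although it does not reproduce the steady-state preserving property of the actual scheme.

import numpy as np

def transport_step(u, c, dx, dt):
    # explicit first-order upwind step for du/dt + c du/dx = 0 (assumes c > 0)
    un = u.copy()
    un[1:] -= c * dt / dx * (u[1:] - u[:-1])
    return un

def source_half_step(u, k, u_eq, dt):
    # exact integration of the stiff relaxation source du/dt = -k (u - u_eq),
    # stable for arbitrarily large k * dt
    return u_eq + (u - u_eq) * np.exp(-k * 0.5 * dt)

def strang_step(u, c, k, u_eq, dx, dt):
    u = source_half_step(u, k, u_eq, dt)
    u = transport_step(u, c, dx, dt)
    return source_half_step(u, k, u_eq, dt)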
Future three-dimensional large-eddy simulation with improved chemistry modeling will be pursued. § ACKNOWLEDGMENTS The research was supported by the Office of Naval Research through Grant N00014-21-1-2467 with Dr. Steven Martens as program manager.
http://arxiv.org/abs/2307.05722v1
20230710112941
Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations
[ "Likang Wu", "Zhaopeng Qiu", "Zhi Zheng", "Hengshu Zhu", "Enhong Chen" ]
cs.AI
[ "cs.AI", "cs.CL", "cs.IR" ]
Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations Likang Wu, Zhaopeng Qiu, Zhi Zheng, Hengshu Zhu, Enhong Chen ========================================================================================================================================================================== Large Language Models (LLMs) have revolutionized natural language processing tasks, demonstrating their exceptional capabilities in various domains. However, their potential for behavior graph understanding in job recommendations remains largely unexplored. This paper focuses on unveiling the capability of large language models in understanding behavior graphs and leveraging this understanding to enhance recommendations in online recruitment, including the promotion of out-of-distribution (OOD) applications. We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs and uncover underlying patterns and relationships. Specifically, we propose a meta-path prompt constructor that enables an LLM recommender to understand behavior graphs for the first time and design a corresponding path augmentation module to alleviate the prompt bias introduced by path-based sequence input. By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users. We evaluate the effectiveness of our approach on a comprehensive dataset and demonstrate its ability to improve the relevance and quality of recommended jobs. This research not only sheds light on the untapped potential of large language models but also provides valuable insights for developing advanced recommendation systems in the recruitment market. The findings contribute to the growing field of natural language processing and offer practical implications for enhancing job search experiences. § INTRODUCTION Recommendation in online recruitment aims at suggesting relevant job opportunities to job seekers based on their preferences and qualifications, improving the chances of matching the right employment. With the exponential growth of online recruitment platforms and the need for efficient and personalized job search experiences, the development of effective job recommendation systems has become crucial. In online recruitment systems, job postings and resumes are written in natural language. Traditional approaches have treated job-resume matching as a supervised text-matching problem using paired data for training <cit.>. However, online recruitment platforms often suffer from sparse interaction data, with job postings attracting only a few candidates on average <cit.>. To address this, recent studies <cit.> have explored the use of behavior graphs to capture high-order interactions and alleviate the sparse interaction issue. These behavior graphs leverage message passing to enhance the understanding of user preferences. Different from many general recommendation tasks, textual understanding forms the backbone of job recommendation, while behavior modeling contributes the personalized component. In our work, we aim to break through the accuracy bottleneck of job recommenders by promoting the semantic richness of textual representation. 
Inspired by several recent successful recommendations based on text pre-training <cit.>, we first introduce the large language model (LLM) as the job recommendation framework that directly generates targets to achieve this goal. There are many benefits and is also very natural to do this. For instance, out-of-distribution items usually appear in recruitment markets since new job demands are constantly emerging, such as prompt engineers for generative models. The powerful semantic mining ability and massive external knowledge of LLM enhance the generation and associative power of recommender, which is able to generate reasonable recommendation results for the hard OOD items. However, the existing learning schema of LLM recommender cannot understand the non-textual behavior graph which weakens the personalized recommendation ability for different job seekers. To address this challenge, we propose a meta-path prompt constructor to encode the interaction information of graph into the natural language prompt. Specifically, in such a heterogeneous behavior graph, each meta-path composed of various types of nodes and edges can be transferred into a description naturally since each type indicates a specific and meaningful interaction, e.g., interview, conversation, etc. Along this line, for each job seeker, LLM captures the high-order interaction feature to augment her personality with the meta-path prompt. Based on the above analysis, we explore the inclusion of graph data understanding in large language model-based recommendations for the first time. An efficient large language model named GLRec (Graph-understanding LLM Recommender) is proposed to optimize the recommended quality of job recommendation, which is fine-tuned with LoRa <cit.> in our constructed instruction dataset for aligning the gap between pre-trained knowledge and actual recruitment domain. Especially, our exploration presents two valuable and important findings that largely influence the graph understanding strategy of LLM: (i). Different paths would present different weights for the model decision. (ii). The position bias of the order of path prompts brings unstable answers. For this issue, we carefully design path shuffling, adaptive path selector, and their hybrid path augmentation mechanism to alleviate the negative impact brings by different path prompts. Through extensive experiments on real-world recruitment datasets, we observe a significant performance gain through the development of LLM and its graph learning strategy. The main contributions could be summarized as follows: * To our best knowledge, we are the first to implement the fine-tuned large language model as job recommender, which promotes matching accuracy via the semantic richness and massive knowledge of LLM. * We propose the meta-path prompt constructor that leverages LLM recommender to understand behavior graphs for the first time and design a corresponding path augmentation module to alleviate the prompt bias. * We conduct sufficient experiments on real-world recruitment datasets, and the experimental results and visualization cases show the superiority of our model. § RELATED WORK By combing through the research idea, our work is mainly related to two research areas: job recommendation and methods of LLM for recommendation. We will introduce the mainstream work of these two research directions in detail, and point out the shortcomings of existing methods to draw the motivation for our proposed framework. 
§.§ Job Recommendation Job Recommendation, especially job-resume matching is a necessary task in recruitment data mining, and it has been extensively studied in the literature <cit.>. Early methods approached this problem as a recommendation task <cit.>, relying on collaborative filtering assumptions. However, recent research has focused more on text-matching technology, aiming to improve the representation of job and resume documents [26]. Various techniques have been proposed to encode job and resume information. For example,  <cit.> utilized CNN for encoding, while <cit.> leveraged RNN and BiLSTM to capture sequential information.  <cit.> introduced a profiling memory module to learn latent preference representation by interacting with both job and resume sides. Additionally,  <cit.> explored the effectiveness of adversarial training for job-resume matching. In addition to the aforementioned research, there are also works that consider multi-granularity interactions. The ranking-based loss function can be used to capture multi-level interactions as supervision signals <cit.>. <cit.> propose a bilateral multi-behavior sequence model to describe users' dynamic comprehensive preferences. These approaches highlight the importance of considering various interaction patterns and incorporating additional user information to improve the quality of job recommendations. However, online recruitment platforms frequently encounter challenges due to sparse interaction data, resulting in job postings attracting only a limited number of candidates on average <cit.>. Recent studies <cit.> have investigated the utilization of behavior graphs to capture high-order interactions and mitigate the problem of sparse interactions. These behavior graphs employ message-passing techniques to enrich the understanding of personalized user preferences. §.§ Large Language Models for Recommendation LLMs offer the potential to extract high-quality representations of textual features and leverage extensive external knowledge to enhance recommendation systems.  <cit.> conducted a systematic review and analysis of existing LLM-based recommendation systems. Existing work can be divided into two categories: discriminative models and generative models. Most discriminative models align the representations of pre-trained models like BERT with domain-specific data through fine-tuning. For example, <cit.> proposed pre-training and fine-tuning-based approach to learn users' representation, which leveraged content-rich domains to complement those users' features with insufficient behavior data. Additionally, some research explores training strategies like prompt tuning.  <cit.> leveraged BERT's Masked Language Modeling (MLM) head to uncover its understanding of item genres using cloze-style prompts. Prompt4NR <cit.> pioneered the application of the prompt learning paradigm for news recommendation. Generative models usually translate recommendation tasks as natural language tasks, and then apply techniques such as in-context learning <cit.>, prompt tuning <cit.>, and instruction tuning <cit.> to adapt LLMs to directly generate the recommendation results. Compared to discriminative models, generative models have better natural language generation capabilities. In the job-resume matching area, there is a generative model which develops LLM to generate potential JDs for more explainable and suitable recommendations <cit.>. 
Although LLM recommenders achieve successful applications through their ability of knowledge association, the lack of graph data understanding ability reduces personalized adaption. In our work, we aim to address this crucial challenge in the online recruitment scenario. § METHODOLOGY In this section, we first illustrate our research problem formally and present related notations. Then the technical detail of GLRec would be introduced progressively. The overall framework is shown in Figure <ref>. §.§ Preliminary §.§.§ Problem Formulation Consider a set of candidates C = {c_1, c_2, …, c_n_1} and a set of jobs 𝒥 = {j_1, j_2, …, j_n_2}, where n_1 and n_2 represent the total number of candidates and jobs, respectively. Each candidate and job are associated with textual documents that describe their resumes and job requirements. They are also linked to a collection of directed interaction records (such as interviewing and discussing) within the recruitment platform. These interactions are formally represented as 𝒜c_i = {c_i → j' | c_i ∈ C, j' ∈𝒥} and 𝒜j_k = {j_k → c' | j_k ∈𝒥, c' ∈ C}, indicating the directed interactions or links initiated by candidate c_i or employer j_k (referred to as a job). We use i and k as indices for candidates and jobs, respectively. Our objective is to predict the compatibility between a job posting and a candidate. §.§.§ Generative Large Language Models Generative LLMs are powerful language models that can generate coherent and contextually relevant text. These models, such as GPT-3/4, are trained on vast amounts of text data and can generate human-like text based on a given prompt or input. Fine-tuning is a common adaption strategy to align the target of pre-trained model and domain-specific applications, such as two popular paradigms of prompt tuning, and instruction tuning. For all these tuning methods, they have an equal final objective loss of autoregressive training as follows: ℒ_f = max _Θ∑_(x, y) ∈𝒯∑_t=1^|y|log(𝒫_Θ(y_t| x, y_<t)), Taking instruction tuning as an example, which designs and constructs instruction data to restrict the output scope and format. x and y represent the “Instruction Input” and “Instruction Output” in the self-instruct data, respectively, e.g., Instruction Input: “Do you like this item?”, Instruction Output: “Yes.”. And y_t is the t-th token of the y, y_<t represents the tokens before y_t, Θ is the original parameters of LLM, and 𝒯 is the training set. §.§.§ Task-specific Instruction In our work, we design two job recommendation tasks to test the LLM recommender following existing related work <cit.>, i.e., point-wise and pair-wise job matching. Here we introduce our designed template for the sample in our dataset, where information related to privacy and business has been filtered. Assume there is a job seeker called candidate whose Candidate Profile Prompt and recommended JD Prompt are defined as: Candidate Profile Prompt: Age: 25, Education: Bachelor's degree, Graduation School: XXX University, Major: Computer Applied Science, Work Experience: 2 years. JD Prompt: Position Title: Full Stack Engineer, Educational Requirement: Bachelor's degree, Work Experience: 1-3 years, Skill Requirements: HTML/JAVA/Spring Boot/SQL. For the point-wise task, we let the LLM recommender learn to predict the satisfaction of a candidate with a recommended job. The instruction is designed as: Point-wise Instruction: You are a recommender, determining whether a candidate would be satisfied with the recommended job position. Please answer with “Yes." 
or “No.". For the pair-wise task, we let the LLM recommender learn to justify the preference of a candidate for a recommended job pair. Given two jobs' JD Prompt “A" and “B", the instruction is designed as: Pair-wise Instruction: You are a recommender, determining which position will match the candidate. Please answer with “[A]." or “[B].". With the above designed prompts and instruction text, the LLM is able to adapt to a domain recommendation situation. Note that, to ensure the stability of training, we add the JD prompt to the back of ground truth to increase the predicted length. To further fuse interaction knowledges, in the next section, we will illustrate the understanding part of graph data for the large language model: behavior meta-path prompt generation. §.§ Behavior Meta-path Prompt Generation To inject large language models with the ability to comprehend interactive relationships in graph data, we propose a meta-path-based prompt constructor to obtain prompt inputs that represent local subgraphs. Before delving into the details of our approach, it is necessary to provide a formal introduction to heterogeneous graph and meta-path. Heterogeneous Graph. A heterogeneous graph, denoted as 𝒢=(V, E), consists of an object set V and a link set E. A heterogeneous graph is also associated with a node type mapping function ϕ: V →𝒱 and a link type mapping function ψ: E →ℰ. 𝒱 and ℰ denote the sets of predefined object types and link types, where |𝒱|+|ℰ|>2. Meta-path. A meta-path P is defined as a path in the form of 𝒱_1 𝒱_2 ⋯𝒱_l+1 (abbreviated as 𝒱_1 𝒱_2 ⋯𝒱_l+1), which describes a composite relation ℰ_1 ∘ℰ_2 ∘⋯∘ℰ_l between objects 𝒱_1 and 𝒱_l+1, where ∘ denotes the composition operator on relations. Heterogeneous graphs are more diverse and complex in terms of their semantics compared to homogeneous graphs. Meta-paths are commonly used techniques to mine and represent the interaction semantics within them. In the context of online recruitment, the interactions between job seekers and job positions, which involve different types of behaviors, form a behavior graph. This behavior graph is a typical heterogeneous graph, where different node types include Candidate, JD, and different edge types include messaging, interviewing, matching, and more. Due to the unique and defined semantics of each type of edge in the behavior graph, it is natural to consider transferring the graph data format meta-path to a natural language description which is acceptable for the large language model. We only need to predefine the prompt template according to the appeared edges in a path and then fill in the template with the resume or job description information. For instance, given a typical meta-path c_1 j_1 c_2. The prompt template is constructed as: Meta-path Prompt: c_1 interviewed for position j_1. This position discussed with a job seeker c_2. The node information, i.e., the description of candidates or JD, then will be filled in the meta-path prompt template to generate the final prompt data in our dataset. The real case can be referred to in Figure <ref>. In addition, to avoid too similar meta-paths leading to redundancy, we define a simple similarity metric as follows, 𝒮_i,j = |P_i ∩ P_j |/|P_i ∪ P_j|,     P_i, P_j ∈Φ_P, where Φ_P denotes the set of sampled meta-paths for a candidate. P_i, P_j indicates two meta-paths in this set. |P_i ∩ P_j| is the number of tokens that exist simultaneously in two paths, and P_i ∪ P_j is the union of them. 
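As an illustration, a minimal sketch of the meta-path prompt constructor and the token-overlap similarity defined above might look as follows. The edge-type templates, function names, and whitespace tokenization are assumptions for illustration and are not the exact templates used in the paper.

# Hypothetical edge-type templates; the paper's actual phrasings may differ.
EDGE_TEMPLATES = {
    "interview": "{src} interviewed for position {dst}.",
    "discuss": "This position discussed with a job seeker {dst}.",
}

def metapath_to_prompt(node_texts, edge_types):
    # node_texts: textual descriptions of the nodes along the path
    # edge_types: one edge type per consecutive node pair
    sentences = [
        EDGE_TEMPLATES[e].format(src=node_texts[i], dst=node_texts[i + 1])
        for i, e in enumerate(edge_types)
    ]
    return " ".join(sentences)

def path_similarity(prompt_i, prompt_j):
    # S_ij = |P_i ∩ P_j| / |P_i ∪ P_j| on the token sets of the two path prompts
    a, b = set(prompt_i.split()), set(prompt_j.split())
    return len(a & b) / len(a | b) if a | b else 0.0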
We ensure that 𝒮_i, j≤γ between the final selected M meta-paths and 0 ≤γ≤ 1 is a hyperparameter. §.§.§ Path Debiasing and Soft Selection Different from the traditional network embedding, sequence-based meta-path prompts would lead to two challenges for LLM to understand the candidates' behavior sub-graph. Influence of Path Weight. Different meta-paths would present different weights for the model decision. Position Bias of Path Prompt. The position bias of the order of path prompts brings unstable answers. These two challenges appeared when recognizing the pre-trained large language model as a recommender, which hinders the effective modeling of semantic relationships in the graph by LLM recommendation models. To provide a more intuitive explanation, we extracted a real-world case from the log of a popular recruitment platform and visualized them in Figure <ref>. Specifically, for a job seeker in the IT industry, given his Candidate Profile Prompt, Meta-path Prompt 1, and Meta-path Prompt 2, we further feed the LLM with a Task-specific Instruction belonging to point-wise recommendation. The LLM recommender is expected to output the decision of “Yes” or “No” to present the preference of the candidate. Challenge 1 corresponds to Case 1 and Case 2 in this figure. We can find that the same profile and task description with different behavior meta-paths forces LLM to make different predictions. Obviously, the diversity of technology stacks in Path 1 reveals the candidate's preference for full-stack development, and compared to Path 2, the background of path-related job seeker is more close to our candidate. Therefore, for this candidate, Path 1 is evidently more important for the final decision. For Challenge 2, if we construct the input sequence as Case 3, i.e., the order is meta-path prompt 1 → meta-path prompt 2, the LLM outputs the wrong answer “No”. But with a reverse path prompt order, the LLM is able to provide an accurate prediction. Similar to the widely known position bias of candidate items <cit.>, the position of context prompt clearly misleads the model to generate unstable outputs. To address the negative impact of these two challenges on the recommendation results, we carefully design an augmentation module specifically for the meta-path prompt, which consists of three concise but effective strategies. The first strategy is Shuffle Mechanism. When preparing domain data for the model's supervised fine-tuning (SFT), for each sample that contains multiple paths, we randomly shuffle the meta-path prompts in the sample m times. This data augmentation technique allows the model to learn semantic invariance patterns from different combinations of paths, leading to more stable results. It enhances the robustness of the model without introducing redundant information. The second strategy is Path Soft Selector. In this work, we regard the path sampling process in Behavior Meta-path Prompt Generation as a hard selection to heuristic selects semantically rich paths. The Path Soft Selector is used to further adaptively assign a learned weight distribution to the constructed meta-path prompts. Firstly, for a given meta-path prompt ℳ_i , i ∈{1, 2, ..., M} (M denotes the number of path), we obtain the LLM word embedding e_t of each token t ∈ℳ_i. So, the meta-path embedding H_i of ℳ_i can be obtained via a mean pooling as follows, H_i = 1/|ℳ_i|∑_t ∈ℳ_i e_t,     i ∈{1, 2, ..., M}. 
Then we propose a soft selector to calculate the weight for each meta-path embedding as: α_i = softmax (W_a H_i) = exp(W_a H_i)/∑_j=1^Mexp(W_a H_j), where W_a ∈ℛ^1 × d_e is a trainable parameter, and d_e denotes the dimension of E_i. To avoid the training collapse caused by changed value scale, we utilize a controller parameter λ∈ (0, 0.5] to update word embeddings in Eq. (<ref>). ê_t = e_t + λ·α_i e_t,    t ∈ℳ_i, Compared with most existing tuned or non-tuned LLM models, our prompt augmentation mechanism considers phrase-based attention to distinguish different paths. Actually, this simple solution can be transferred to other similar situations, such as weighed sentence embeddings. What's more, the third strategy is the Hybrid Mechanism which implements Shuffle Mechanism and Path Soft Selector simultaneously. This hybrid module is expected to address the both two challenges. We will evaluate these three strategies in the experiment section. §.§ LLM Instruction Tuning and Recommendation In this subsection, we will introduce the instruction tuning and recommendation process, which aims to align the used LLM with the recommendation task effectively and efficiently. For instruction tuning, we follow the general supervised fine-tuning method to minimize the autoregressive loss calculated by ground truth and corresponding LLM output. In our work, we mask the loss position of the prompt part. Specific prompt format, task-specific instruction, and ground truth have been introduced in the Methodology section. However, direct fine-tuning of the entire model can be computationally intensive and time-consuming. To address this, we propose a lightweight fine-tuning strategy using LoRA, which involves freezing the pre-trained model parameters and introducing trainable rank decomposition matrices into each layer of the Transformer architecture. This approach facilitates lightweight fine-tuning while reducing GPU memory consumption. And the final learning objective can be computed as follows: ℒ_f = max _Θ_L∑_(x, y) ∈𝒯∑_t = 1^|y|log(P_Θ+Θ_L(y_t| e_x, y_<t)) where Θ_L is the LoRA parameters and we only update LoRA parameters during the training process. Note that, different from existing fine-tuning frameworks for recommendation systems, we replace their token input x by the embedding e_x in Eq. (<ref>), since we update the prompt token embedding in the soft selector. As for the recommendation process, since the trained model has learned the output format of our defined ground truth after several SFT alignment steps. So our designed answer parsing is a simple way. We catch the softmax probability of label generation (the token used to denote label, such as “Yes./No.” or “[A]/[B]” in our work ) in the position of model's output corresponding to that in the ground truth. Along this line, the final prediction probability is calculated. § EXPERIMENTS To evaluate the motivation of our model, we conduct experiments to answer the following research questions: * RQ1: How much improvement can be achieved in the field of job recommendation by using recommendation systems based on generative large language models? * RQ2: How does the inclusion of behavior graph understanding affect the effectiveness of GLRec? * RQ3: How well does the meta-path augmentation module optimize the influence of path selection on decision-making and the bias introduced by prompts? §.§ Experimental Settings §.§.§ Datasets. 
We conduct experiments on the dataset Recr which is collected from a real-world and large online recruitment platform in China to assess recommendation methods. The dataset was constructed from the online logs and contained two kinds of behavior: Match and Interaction, corresponding to the matching set and interaction set mentioned in Problem Formulation. Besides, each candidate (and job) is associated with a descriptive text (i.e., resume or job description). The overall statistics are shown in Table <ref>. From the statistical data, it can be seen that job recommendation is a sparsely interactive scenario. The segmentation ratio of the training set and testing set is 5:1. Note that all sensitive or private information has been filtered out from the data. §.§.§ Baseline. To provide a comprehensive evaluation of our GLRec, we compare it against both LLM-based and traditional recommendation methods: * RobertaRec <cit.>: Candidate resume and JD text are encoded into fixed-length vectors using RoBERTa encoder and then used to calculate similarity scores, enabling personalized recommendations. * HGT <cit.>: Heterogeneous Graph Transformer is a powerful graph learning model which propagates the embeddings (initialized by RoBERTa) of nodes on behavior graph to capture high-order interactions. * TALLrec <cit.>: An advanced fine-tuned LLM recommender that uses instruction tuning on self-instruct data with users' historical interactions. The original backbone of its pre-trained model is LLaMA, and we change it by BELLE as the same as ours for the Chinese corpus. §.§.§ Evaluation Metric. We evaluate the two tasks using the conventional evaluation metric for explicit recommendation: Area Under the Receiver Operating Characteristic (AUC), as our two tasks can be transferred to binary classification problems and the metric captures the similarity between our setting and predicting user interest in a target item. We calculate the AUC score using the Scikit-learn package. §.§.§ Implementation Details. In this paper, we utilize BELLE-LLaMA-7B <cit.> as the pre-trained LLM backbone due to its expanded Chinese vocabulary. The instruction-tuning and model inference, using LoRa, are conducted on 4 Tesla A100 80G GPUs. Our approach incorporates the meta-path prompt and user-specific task instructions as model inputs for personalized recommendations. In our experiments, we investigate the impact of different numbers of paths, specifically [0, 1, 2, 3], for GLRec. Further details regarding the path prompt and instructions can be found in the Methodology section. Additionally, both RobertaRec and HGT have a token embedding dimension of 768, and HGT utilizes mean pooling to obtain the initial node embedding. For all methods, we optimize model parameters using the Adam <cit.> optimizer with a default learning rate of 1e-4, minimizing the MSE loss as the optimization objective. §.§ Performance Comparison In this section, we conduct performance comparison experiments on Recr to answer RQ1. As mentioned in the task definition in Section Methodology, the point-wise and pair-wise settings are implemented for evaluation. We also explore the influence of the OOD situation on different models. The experimental split settings of Random, OOD_position, and OOD_JD are introduced below: * Random: We randomly split the training and testing dataset based on the interaction record of each user. * OOD_position: The intersection on JD's “job position” feature between the training set and the testing set is empty. 
* OOD_JD: The intersection on JD items between the training set and the testing set is empty. Our experimental results are reported in Table <ref>. Overall, our proposed GLRec model achieves the best performance among all baselines. There are distinctive score gaps between GLRec and all baselines according to the improvement in Table <ref>. It demonstrates the superiority and adaptability of the large-scale model framework that incorporates relationship understanding and extensive semantic knowledge in the job recommendation scenario. What's even more exciting is that GLRec demonstrates impressive performance on OOD tasks. While its performance may decline slightly compared to the random setting, our model achieves a significant breakthrough compared to other models, which essentially result in near-random guessing. This phenomenon illustrates the necessity of utilizing knowledge association for model generalization. Go deeper into the part of baselines, the graph-based HGT outperforms the conventional dual-tower matching model (RobertaRec) in the context of job recommendation, which further proves the significance of learning relationships. What's more, we find that most models perform better on the pair-wise task than that of point-wise task. That is to say, directly determining whether an item is suitable is more challenging than comparing its priority with another item. §.§ The Impact of Meta-path Number In this experiment, we investigate the impact of meta-path number on the effectiveness of GLRec. Here we evaluate the point-wise performance on Random setting using the AUC metric for different numbers of meta-paths, ranging from 0 to 3. We also input the meta-path prompt (removing extra instruction text for feature conciseness) into RobertaRec for comparison. From the line graph of Figure <ref>, we can observe the following trends: * For GLRec, the results consistently increase as the number of meta-paths increases. This indicates that the inclusion of behavior graph understanding significantly improves the recommendation effectiveness of GLRec. * One notable observation is the significant improvement in GLRec's performance when transitioning from 0 meta-paths to 1 meta-path, and achieve the peak with only 2 meta-paths. The core increases from 0.71 to 0.88, indicating a substantial boost in recommendation effectiveness. This improvement suggests that the chain-of-thought ability of the LLM, inspired by in-context learning, plays a crucial role in GLRec's performance. * For RobertaRec, which does not incorporate behavior graph understanding, the values remain relatively stable across different meta-path numbers. The reason is that discriminative bert-based model lacks the ability to effectively understand prompts like generative LLMs. The results indicate that the inclusion of behavior graph understanding through meta-path prompt input has a significant positive impact on the effectiveness of GLRec. By leveraging the rich information in behavior graphs, GLRec gains a deeper understanding of user-item interactions, leading to improved recommendation performance, which provides the sufficient evidence for RQ2. §.§ The Impact of Bias of Meta-path Prompt Due to the sequential nature of language model input, the construction of multi-path prompt sequences results in a human-induced position bias, or order bias, which disrupts the final decision-making of LLM model. Additionally, this input pattern does not allow the model to learn the importance of semantic information in different paths. 
Therefore, we design a path shuffle mechanism, a path soft selector, and a hybrid mechanism combining both to enhance the model's understanding of path information and mitigate bias. The experimental results are reported in Figure <ref>, where the metric is AUC and the task is the point-wise setting. According to Figure <ref>, all three strategies surpass the original input without path prompt augmentation in both sub-experiments, which demonstrates the necessity of path debiasing. Although the shuffle mechanism and the soft selector have their own advantages and disadvantages in the two path-scale experiments, both improve the quality of the results. The hybrid module of both brings more stable results, indicating that the model indeed needs to consider the position of input meta-paths and the influence of different path prompts on decision-making in order to cope with actual recommendation scenarios. In principle, in other similar scenarios where the LLM input consists of multiple sentence prompts without a prior order, the proposed shuffle mechanism and soft selector can both help enhance the robustness of model training. We will continue to explore this property in future work. § CONCLUSION This paper proposed GLRec, a job recommendation model that is the first to combine large language models (LLMs) with behavior graph understanding. By leveraging the semantic richness and massive knowledge of LLMs, GLRec improved the quality of job recommendations compared to traditional approaches. The meta-path prompt constructor encoded the behavior graph's interaction information into natural language prompts, enhancing personalized recommendations. Experimental results validated the effectiveness of GLRec, showcasing its superiority on real-world recruitment datasets. This research contributes to the advancement of LLM-based job recommendation and opens up new possibilities for graph data understanding with LLMs in personalized recommendations. However, there are still some areas that need to be further optimized in our work, such as larger-scale experimental validation and finer-grained module testing.
http://arxiv.org/abs/2307.04316v3
20230710030524
Accelerating Secure and Verifiable Data Deletion in Cloud Storage via SGX and Blockchain
[ "Xiangman Li", "Jianbing Ni" ]
cs.CR
[ "cs.CR" ]
Accelerating Secure and Verifiable Data Deletion in Cloud Storage via SGX and Blockchain Xiangman Li and Jianbing Ni X. Li and J. Ni are with the Department of Electrical and Computer Engineering and Ingenuity Labs Research Institute, Queen's University, Kingston, Ontario, Canada K7L 3N6. Email: [email protected]. ============================================================================================================================================================================================================================================== Secure data deletion enables data owners to fully control the erasure of their data stored on local or cloud data centers and is essential for preventing data leakage, especially for cloud storage. However, traditional data deletion approaches based on unlinking, overwriting, and cryptographic key management are either ineffective in cloud storage or rely on impractical assumptions. In this paper, we present SevDel, a secure and verifiable data deletion scheme, which leverages zero-knowledge proofs to verify the encryption of the outsourced data without retrieving the ciphertexts, while the deletion of the encryption keys is guaranteed by Intel SGX. SevDel implements secure interfaces to perform data encryption and decryption for secure cloud storage. It also utilizes a smart contract to enforce that the operations of the cloud service provider follow the service level agreements with data owners and to impose penalties on a service provider that discloses the cloud data on its servers. Evaluation on real-world workloads demonstrates that SevDel achieves efficient data deletion verification and maintains high bandwidth savings. Cloud storage, secure data deletion, Intel SGX, data outsourcing, verifiability. § INTRODUCTION Outsourcing data to cloud storage is a common practice for data owners to avoid the burden of self-managing massive data <cit.>. Data owners can rent storage space on demand from cloud service providers. Outsourced data management enables data owners to access their data at any time and from anywhere. Due to these appealing features, cloud storage services, such as Amazon S3 <cit.>, Google Drive <cit.>, Dropbox <cit.>, Apple iCloud <cit.>, and Microsoft OneDrive <cit.>, have attracted a large number of stable and loyal users. Data security is one of the primary concerns of data owners. After data owners outsource their data to the cloud data centers, they lose physical control over their data. Thus, data owners have no choice but to rely on the cloud service providers to protect their data. Unfortunately, due to frequent data leakage and breach incidents, security is always ranked among the top threats in cloud storage <cit.>, although cloud service providers have made great efforts to guarantee the confidentiality, integrity, and availability of the outsourced data on their cloud servers. Data deletion <cit.>, as one of the important security technologies, has not received sufficient attention from either data owners or cloud service providers in cloud storage. It provides methods to securely erase data from local storage media or remote cloud servers, which can significantly reduce the probability of data leakage. Moreover, privacy regulations, such as GDPR <cit.>, CPPA <cit.>, and PIPL <cit.>, have clearly defined the principles of data deletion, called the right to be forgotten. 
The data centers should delete the personal data if they are no longer necessary to the purpose for which it was collected, and the data owners have the right to request the data centers to delete their personal data stored on the data centers. Therefore, data deletion becomes increasingly critical for the service providers, which should provide effective ways to guarantee secure data deletion. §.§ Related Work It is not a trivial problem to securely delete data. It is well recognized that there is no existing software-based solution that can provide complete data removal from storage medium. Existing deletion methods can be summarized in the following categories: Deletion by unlinking. This method is widely deployed on the file management system in operating systems, such as Windows, IOS, and Linux. When the user would like to delete a file (i.e., press the "delete" button), the operating system delete the link of the file from the file systems, and returns "success" to the user. The file is no longer accessible because the link of the file is removed. Nevertheless, this is not the real file deletion, as the file content still remains on the disk. An adversary can simply use a file recovery tool to access the deleted file by scanning the disk <cit.>. Deletion by block erasure. This method is utilized by storage mediym, such as solid-state drive (SSD) to securely clean the data. It applies a voltage spike to all available flash memory blocks in unison. Each block is altered with a vendor-specific value and SSD become "clean" <cit.>. However, this method erases all the data on the drive and does cause a small amount of wear. Deletion by overwriting. Overwriting is an important tool to delete the data by overwriting the data with new, insensitive data, e.g., all zeros. There are multiple tools that can offer 35-pass overwriting times. However, one inherent limitation with the overwriting methods is that they cannot guarantee the complete removal of data. It is effectively impossible to sanitize storage locations by simply overwriting them, no matter how many overwrite passes are made or what data patterns are written <cit.>. The conclusion holds for not only magnetic drives, but also tapes, optical disks, and flash-based solid state drives. In all these cases, an attacker, equipped with advanced microsoping tools, may recover overwritten data based on the physical remanence of the deleted data left on the storage medium. Therefore, although overwriting data makes the recovery harder, it does not change the basic one-bit-return protocol. Deletion by encryption. Boneh and Lipton <cit.> proposed the first cryptography-based method for secure date deletion by encrypting data before saving it to the disk, and deleting the data by discarding the decryption key after encryption. This method is desirable when duplicate copies of data are backed up in distributed. However, this method essentially change the problem of deleting a large amount of data to the problem of deleting a short key. However, forgetting a decryption key is non-trivial. The key can be stored on a hard disk is not easy to be permanently deleted, i.e., never be recoverable for an adversary even it obtain the storage medium <cit.>. However, the problem of key deletion becomes dramatically difficult as the cloud server performs the encryption of the outsourced data in traditional secure cloud storage. 
Although some cloud storage services enable user-side encryption, i.e., the data owners also can encrypt their data before outsourcing, the server-side encryption is more general. The cloud server encrypts the data after receiving them from the data owners with an encryption key and decrypts the data that the owners would like to access before returning them to the data owners. In this model, the encryption is fully controlled by the cloud servers, which brings the worries of the data owners about the encryption of their outsourced data and secure deletion of the decryption keys. §.§ Contributions In this paper, we propose a novel secure and verifiable data deletion scheme, named SevDel, for cloud storage. To reduce the concern that whether the cloud server honestly encrypts the outsourced, we utilize the randomly sampling method and the zero-knowledge proof <cit.> to verify the encryption without retrieving the ciphertexts of the outsourced data. The encryption is also performed based on the Intel SGX <cit.> to prevent the possible data leakage. The enclave is created for each file for the encryption and the management of the keys. Thus, the operation of the deletion of the decryption becomes the destroy of the enclave. In addition, to enforce the cloud servers to protect the outsourced data, the smart contract is designed based on the service-level agreements between the data owners and the cloud service providers. We demonstrate the properties of confidentiality, verifiability, erasability, and auditability of SevDel through security analysis and show that the proposed SevDel has outstanding performance for deployment. § SYSTEM AND SECURITY MODELS In this section, we introduce the system model and security model of our SevDel. §.§ System Model We present the system model of SevDel, that comprises three kinds of entities: 1) a data owner that outsources the data to the cloud and requests to delete them after the data is processed or used; 2) a cloud service provider that offers secure cloud storage services (i.e., the outsourced data of data owners are encrypted by the service provider with its chosen secret keys or by the data owners before outsourcing) to data owners with its storage servers in the cloud data center, and each server has high-performance hard disks for data storage and has the Intel Core that supports for SGX <cit.>; and 3) a blockchain node <cit.> that participates the blockchain network to maintain transactions happened between two parties. The blockchain can be the public blockchain, e.g., Bitcoin blockchain, Ethereum blockchain, or Hyperledger. It maintains an automatically executable smart contract that enforces the penalty on the cloud service provider if it leaks the outsourced data of users. Intel SGX <cit.>, a suite of security-related instructions built into modern Intel CPUs, can create a hardware-protected environment, enclave, for shielding the execution of code and data. An enclave resides in a hardware-guarded memory region called the enclave page cache (EPC) for hosting any protected code and data. In enclave, SGX performs the encryption of the outsourced data with a secret key stored on the EPC. The deletion of the encrypted data for the data owner is the deletion of the secret key in enclave. More specifically, the secret in enclave is erased after the enclave is destroyed. §.§ Threat Model The security threats are mainly from the outsider attackers or the data thief. 
An outsider attacker or a data thief may compromise the cloud server to steal the data on the hard disks. The frequently happened data leakage incidents on cloud have witnessed the risks of cloud storage services. This risk is high because of potential code vulnerability, and the damage is severe as the data leakage incidents significantly affect reputation. Moreover, the employees in cloud service may steal the data on cloud servers. We have witnessed many data corruption or leakage incidents that occur due to the operation errors or misbehavior of the employees. The main security objective is to protect the cloud data for users against data leakage incidents. A cloud service provider is the legitimate processor of the Intel SGX and holds the service level agreements with the data owners for maintaining outsourced data. It is expected that the cloud service provider stores the encrypted outsourced data of data owners on the hard disks of cloud servers and deletes the data under the requests of data owners or based on the principles of privacy regulations, like GDPR and CPPA, and PIPEDA. It is assumed that the cloud service provider may not deviate from the expectation due to the agreement with data owners, that is, the cloud service provider is rational. It follows the service level agreements to honestly offer data storage services. Undoubtedly, regulating the implementation of the agreement between the users and the cloud service provider become necessary. A data owner is an honest party to rent storage spaces from the cloud storage services and outsources the data to the cloud servers in the data center. The data owner chooses the reliable service providers for data outsourcing. According to the modes for protecting cloud data in cloud storage, e.g., Amazon S3 of Amazon Web Service, the owners can determine whether to encrypt their data before outsourcing. The data owners can use secret keys to encrypt their data before outsourcing. If the owners do not encrypt the data, the cloud server chooses the secret keys for data encryption. In this paper, we study secure data erasure for the latter case because it is trivial to achieve data deletion if the data owners encrypt their data by themselves, as they can delete their keys and then no one can read the cloud data. § PROPOSED SEVDEL In this section, we propose the overview and the detailed construction of our SevDel. § OVERVIEW Our SevDel accelerates security and verifiability of cloud data erasure in cloud storage. It can serve as the central element of secure cloud storage and erasure in cloud storage services, such as Amazon S3 Find and Forget, the solution to selectively erase records from data lakes stored on Amazon S3. To prevent data leakage, the received file from the data owner is encrypted by the cloud server with a randomly selected private key with additive homomorphic encryption, such as lifted ElGamal encryption <cit.>. The encryption operation is performed in the enclave of Intel SGX. The encryption of the file is audited by the data owner to ensure that the file is correctly encrypted as claimed by the cloud service provider. The random sampling is utilized to enable probabilistic auditing of the encrypted data and the ciphertexts are aggregated to compress auditing messages. The cloud server proves to the data owner that the entire file is encrypted with lifted ElGamal encryption by a randomly chosen key with a large probability, without retrieving the encrypted file. 
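To make the per-file encryption step concrete, the following is a minimal sketch of lifted (exponential) ElGamal over a toy subgroup. The tiny parameters P, Q, G and all function names are illustrative assumptions only; a real deployment would run this inside the SGX enclave over a standard elliptic-curve or large safe-prime group, and block values would be small chunks of the file so that the final discrete logarithm stays tractable. The additive homomorphism shown at the end is the property that allows sampled ciphertext blocks to be aggregated during auditing.

import random

P, Q, G = 23, 11, 4   # toy subgroup of order Q=11 in Z_23^*; illustration only

def keygen():
    # per-file private key; in SevDel it lives only inside the enclave
    v = random.randrange(1, Q)
    return v, pow(G, v, P)            # (v, V = g^v)

def encrypt_block(m, V):
    # lifted ElGamal: (g^m * V^r, g^r); additively homomorphic in m
    r = random.randrange(1, Q)
    return (pow(G, m, P) * pow(V, r, P) % P, pow(G, r, P))

def decrypt_block(ct, v):
    c1, c2 = ct
    gm = c1 * pow(c2, Q - v, P) % P   # g^m = c1 / c2^v
    for m in range(Q):                # brute-force discrete log; fine only for tiny blocks
        if pow(G, m, P) == gm:
            return m
    raise ValueError("block outside message space")

def add_blocks(ct1, ct2):
    # Enc(m1) * Enc(m2) = Enc(m1 + m2): the property used for aggregated auditing
    return (ct1[0] * ct2[0] % P, ct1[1] * ct2[1] % P)

# Discarding v (i.e., destroying the enclave that stores it) leaves every
# ciphertext block permanently undecryptable, which is the deletion guarantee.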
The challenge here is to ensure that the proved ciphertext is really the encryption of the correct outsourced file. To blind the file and its ciphertext during auditing, the cloud server should prove that the plaintext of the ciphertext is the outsourced file in the homomorphic authentication tags, which are produced by the data owner and outsourced along with the file. Meanwhile, they can also used to verify the integrity of the outsourced file based on provable data possession <cit.> or proof of retrievability <cit.>. The deletion of the oursourced file on the cloud server is enabled by the deletion of the secret key of the file. If the secret key is permanently deleted, no one is able to decrypt the ciphertext. The secret key deletion is realized by the Intel SGX. An enclave is created when the cloud server receives the file and the encryption is performed in the enclave. Also, the encryption key is stored in the enclave. In order to permanently forget the key, the simple way is to destroy the corresponding enclave. To ensure the cloud service provider to honestly maintain and encrypt outsourced files of data owners, a smart contract is created based on the service level agreement between the service provider and data owners. the deposits of the service provider are made when the cloud storage service is bootstrapped. The deposits are paid to the data owner if the file of the data owner is found on the Internet, which means that the file is leaked during storage. The condition to trigger the payment is the key point of the smart contract. We convert this data leakage problem to be the provable data possession. If a data owner is succeed to giving a proof that she possesses the encrypted version of her outsourced files, the penalty is performed over the service provider and a certain amount of the deposits is transferred to the data owner. The conversion is valid because only the cloud server has the encrypted version of the outsourced file of the data owner. The cloud server performs encryption after receiving the outsourced file and decryption before returning it to the data owner. The ciphertext of the outsourced file should be only known by the cloud server. Although the data owner knows the cleartext of the file, the data owner obtains the same ciphertext, as the data encryption on the side of the cloud server is probabilistic. Our SecDel consists of the following algorithms. Setup: This algorithm is run by the cloud service provider to bootstrap the cloud storage systems. With the input of the security parameter, the algorithm outputs the system parameters and the public-private key pairs of the cloud servers. Contract: This algorithm is run by the cloud service provider to initialize a smart contract that implements the service level agreement with the data owners. The smart contract is maintained by the blockchain nodes. KeyGen: This algorithm is run by the data owner. With the input of the system parameters, the algorithm takes the input of the system parameter and generate the public-private key pair of the data owner for data outsourcing. Outsource: This algorithm is run by the data owner to outsource the file to the cloud server. With the input of the security parameters, the private key of the data owner, and the outsourcing file, the algorithm produces the homomorphic authentication tags of the data blocks of the file and outsource the file, along with the generated tags. 
Encrypt: This algorithm is run by the cloud server that encrypts the received file with a randomly chosen private key. With the input of the file, the private key of the cloud server, and the chosen private key, the algorithm outputs the encrypted file, the corresponding public key, and the homomorphic authentication tags of the data blocks of the encrypted file. Verify: This is an interactive protocol between the cloud server and the data owner to audit the encryption of the outsourced file. The data owner randomly samples the data blocks, and the cloud server generates a proof that proves the encryption of the sampled data blocks. The data owner finally verifies the proof to learn whether the file has been encrypted by the cloud server. Delete: This algorithm is run by the cloud server who deletes the file under the request of the data owner or the data is no longer needed for data analysis. Audit: This is an interactive protocol between the data owner and the blockchain nodes. The blockchain nodes randomly samples the data blocks owned by the data owners and the data owner responds the proof that proves the ownership of the encrypted data block. Then, the blockchain nodes verify the proof to learn whether the file has been disclosed. If the proof is valid, the smart contract is executed to give penalty to the cloud service provider. The correctness of SevDel has the following aspects: 1) The encryption of the outsourced file should be correctly recovered by the cloud server with the corresponding secret key; 2) the data owner can identify that the cloud server does not encrypt the oursourced file on hard disks as agreed with the service level agreement; 3) the deleted outsourced file can be no longer recovered; and 4) the blockchain node can execute the penalty if the data owners find the leaked outsourced data. §.§ Detailed SevDel Setup: Let q be a large prime and 𝔾_1, 𝔾_2 and 𝔾_T be three multiplicative cyclic groups of the same prime order p. g_1 and g_2 are the generators of 𝔾_1 and 𝔾_2, respectively. e:𝔾_1 ×𝔾_2 →𝔾_T denotes an admissible bilinear pairing. The file M to be outsourced is divided into n blocks and each block is further split into s sectors. Thus, the fiel is denoted as M={m_ij}_i ∈ [1,n],j ∈ [1,s] and the abstract information of M is denoted as 𝕀_M. H:{0,1 }^* →𝔾_1 is a cryptographic hash function that maps the 𝕀_M to a point in 𝔾_1. The cloud service provider chooses a random number a ∈ℤ_p and calculates A=g_2^a∈𝔾_2. The private key of the data owner is a, and the corresponding public key is A. Contract: The service provider creates the smart contract CS-SevDel to provide cloud storage services to data owners. To provide the service, the service provider initiates CS-SevDel.Init to setup the smart contract and deposits an amount of money on the blockchain as insurance in CS-SevDel.Service. The a part of the deposit would be sent to the data owner if the outsourced data is leaked and the remainder would be re-fund to the service provider. KeyGen: An data owner chooses a random number w ∈ℤ_p and calculates W=g_2^w∈𝔾_2. The private key of the data owner is w, and the corresponding public key is W. Outsource: The data owner chooses s random values x_1,⋯, x_s ∈ℤ_p and computes u_j=g_1^x_j∈𝔾_1 for j ∈ [1,s]. Then, for each block m_i (i ∈ [1,n]), it computes a tag t_i as ϕ_i=(H(𝕀_M||i)·∏_j=1^su_j^m_ij)^w. The data owner outputs the set of homomorphic authentication tags T={ϕ_i}_i ∈ [1,n]. The tag set Φ, the file index 𝕀_M, and the file M are sent to the cloud server. 
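Before moving to the server-side Encrypt step, the owner-side tag generation ϕ_i=(H(𝕀_M||i)·∏_j u_j^m_ij)^w can be sketched as follows. This is an illustrative toy only: it reuses the small multiplicative subgroup from the encryption sketch above, the hash-to-group mapping is not a secure construction, and the real scheme works in a pairing-friendly group 𝔾_1 so that tags can later be verified with a bilinear map, which the sketch omits. The function names are assumptions for illustration.

import hashlib, random

P, Q, G = 23, 11, 4   # same toy subgroup as in the encryption sketch

def hash_to_group(label):
    # toy hash-to-subgroup: hash the label, reduce mod Q, exponentiate the generator
    h = int.from_bytes(hashlib.sha256(label.encode()).digest(), "big")
    return pow(G, h % Q, P)

def owner_keygen():
    w = random.randrange(1, Q)        # data owner's private key
    return w, pow(G, w, P)            # (w, W = g^w)

def gen_block_tag(file_id, i, sectors, u, w):
    # sectors: the s values m_ij of block i; u: the public values u_j = g^{x_j}
    acc = hash_to_group(f"{file_id}||{i}")
    for u_j, m_ij in zip(u, sectors):
        acc = acc * pow(u_j, m_ij, P) % P
    return pow(acc, w, P)             # phi_i = (H(I_M || i) * prod_j u_j^{m_ij})^w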
Encrypt: After receiving (𝕀_M,M,Φ) from a data owner, the cloud server first randomly selects a private key v ∈ℤ_p and computes V=g_1^v ∈𝔾_1. The cloud server uses the random private key v to encrypt each data block m_ij of the received file as E_ij=(E'_ij,E”_ij)=(g_1^m_ijV^r_ij, g_1^r_ij), where r_ij is a random number chosen from ℤ_p. The set of the encrypted blocks is denoted as E={E_i}_i ∈ [1,n]. Then, for each encrypted block E_i (i ∈ [1,n]), the cloud server computes a homomorphic authentication tag σ_i for the encrypted block as σ_i=(H(𝕀_M||i)·∏_j=1^su_j^E'_ijv_j^E”_ij)^a. The set of the tags of the encrypted blocks is denoted as Σ={σ_i}_i ∈ [1,n]. Finally, the cloud server stores (𝕀_M,E, Σ) on the hard disks and uploads (𝕀_M, Σ) to the blockchain. Verify: To verify the encryption of the outsourced file M, the data owner takes the abstract information 𝕀_M as input. It selects some data blocks to construct a challenge set Q and picks a random l_i ∈ℤ_p^* for each m_i (i ∈ Q). The challenge (i, l_i)_i ∈ Q is sent to the cloud server. To respond to the challenge, the cloud server generates P_1 as P_1=∏_i ∈ Q g_1^l_im_ijV^l_ir_ij. The cloud server computes Q_j=∑_i ∈ Ql_i·m_ij for each j ∈ [1,s]. Then, it computes P_2 as P_2=∏_i ∈ Qϕ_i^l_i and produces π← NIZK {(Q_j, r_ij): P_1=∏_i ∈ Q g_1^l_im_ijV^l_ir_ij, P_2=∏_i ∈ Qϕ_i^l_i}. The data owner verifies the validity of the zero-knowledge proof π to determine whether the outsourced file has been encrypted. Delete: The cloud server deletes the random private key v that is used to encrypt the file F by destroying the enclave that stores v. The cloud server creates an enclave for each received file and uses the enclave to maintain the private key. Audit: If the data owner obtains the leaked encrypted file F, the data owner can prove to the blockchain nodes that the cloud server has leaked the data. The blockchain node selects some data blocks to construct a challenge set R and picks a random γ_i ∈ℤ_p^* for each E_i (i ∈ R). The challenge (i, γ_i)_i ∈ R is sent to the data owner. To respond to the challenge, the data owner generates Q_1 as Q_1=∏_i ∈ Rγ_i E_ij. Then, it computes Q_2 as Q_2=∏_i ∈ Rσ_i^γ_i. The data owner returns (Q_1, Q_2) to the blockchain node. The blockchain node verifies (Q_1, Q_2) to determine whether the cloud server has disclosed the file F. If yes, the blockchain node performs CS-SevDel.Penalty to impose a penalty on the cloud service provider. The correctness of SevDel can be checked in the following aspects: 1) the encryption of the outsourced file can be correctly recovered; 2) the verification equation passes for honestly generated proofs; 3) the key deletion relies on the security of Intel SGX; and 4) the blockchain node can execute the penalty. § SECURITY OF SEVDEL The security of SevDel should capture the properties of confidentiality, verifiability, erasability, and auditability. The confidentiality of the outsourced file relies on the semantic security of the data encryption scheme used by the cloud server. SevDel utilizes the lifted ElGamal encryption scheme to encrypt each block of the outsourced file M. Here, each block is independently encrypted with the key V. As the lifted ElGamal encryption scheme can be proved semantically secure under the Decisional Diffie-Hellman (DDH) assumption, the confidentiality of the outsourced file is achieved as long as the DDH assumption holds.
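As a concrete illustration of the per-block encryption used by the Encrypt algorithm and referenced in the confidentiality argument above, here is a minimal sketch of lifted ElGamal over the same kind of toy modular group. The real scheme works in an elliptic-curve group with the key held inside an SGX enclave; the modulus, base, and function names below are assumptions for illustration only.

```python
import random

P = 2**255 - 19   # assumed toy prime modulus
G = 2             # assumed toy base element standing in for g_1
ORDER = P - 1

def keygen():
    v = random.randrange(1, ORDER)          # server-side secret key (kept in the enclave)
    return v, pow(G, v, P)                  # (v, V = g^v)

def encrypt_block(m_ij: int, V: int):
    """Lifted ElGamal: E_ij = (g^m * V^r, g^r) with a fresh random r."""
    r = random.randrange(1, ORDER)
    return (pow(G, m_ij, P) * pow(V, r, P)) % P, pow(G, r, P)

def decrypt_block(E, v: int, max_m: int = 2**16):
    """Recover g^m = E' * (E'')^{-v}, then take a brute-force discrete log.
    Lifted ElGamal only supports small message spaces for decryption."""
    E1, E2 = E
    gm = (E1 * pow(E2, ORDER - v, P)) % P   # (E'')^{-v} via the exponent ORDER - v
    acc = 1
    for m in range(max_m):
        if acc == gm:
            return m
        acc = (acc * G) % P
    raise ValueError("message out of brute-force range")

v, V = keygen()
ct = encrypt_block(1234, V)
assert decrypt_block(ct, v) == 1234         # sanity check of the toy algebra
```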
Smart Contract CS-SevDel:
Init: Set state := INIT, File := {}, Owner := {}, RU := {}, Tags := {}, Param := SevDel.Setup(1^λ).
Service: Upon receiving ("Create", N, file, A, Deposit, T_1, T_2, T_3, T_4) from a service provider 𝒮: Assert state = INIT. Assert current time T ≤ T_1. Assert ledger|𝒮| ≥ $Deposit. ledger|𝒮| := ledger|𝒮| - $Deposit. Set state := CREATED. Set Accept := 0. File := File ∪ {𝒮, N, A, Deposit, Accept, T_1, …, T_4}.
Agree: Upon receiving ("Accept", 𝒰_i, N, R_i) from a data owner 𝒰_i: Assert state = CREATED. Assert T_1 ≤ T ≤ T_2. Assert $R_i > 0. Assert ledger|𝒰_i| ≥ $R_i. ledger|𝒰_i| := ledger|𝒰_i| - $R_i. Set Accept := Accept + 1. Set state_i := ACCEPTED. Owner_N := Owner_N ∪ {𝒰_i}.
Claim: At current time T = T_2: Assert state_i = ACCEPTED. Assert the data outsourcing N. Set state := CLAIMED.
Audit: Upon receiving ("Audit", 𝒰_i, N, c_i, d_i, σ_i, e_i, rk_i, 𝒫𝒦_i) from 𝒰_i: Assert state = CLAIMED. Assert T_2 ≤ T ≤ T_3. Assert 𝒰_i ∈ AU_N. Assert 𝒫𝒦_i = 1. Set state_i := UPLOADED. Set ledger|𝒰_i| := ledger|𝒰_i| + $R_i. Owner_N := Owner_N ∪ {𝒰_i}. File_N := File_N ∪ {(𝒰_i, N, σ_i, e_i, rk_i)}.
Refund: If T_3 ≤ T ≤ T_4 and Owner_N = File_N: Set state := FULFILLED. Set ledger|𝒰_i| := ledger|𝒰_i| + $Deposit_i. Assert $Deposit = ∑_i=1^n $Deposit_i. Set state := FINISHED.
Penalty: If T_3 ≤ T ≤ T_4 and AU_N ⊃ RU_N: Set state := UNFULFILLED. ledger|𝒰_i| := ledger|𝒰_i| + $R^*_i for 𝒰_i ∈ RU_N. Assert ∑_{i ∈ AU_N - RU_N} $R_i = ∑_{i ∈ RU_N} $R^*_i. Set state := ABORTED.
Timer: If state = ABORTED and T > T_4: Set ledger|𝒮| := ledger|𝒮| + $Deposit. Set state := ABORTED.
Alg. 1. Smart Contract CS-SevDel
The verifiability of the data encryption is achieved based on provable data possession and zero-knowledge proofs. The data owners are able to audit the encrypted data by randomly sampling the encrypted blocks. The homomorphic authentication tags guarantee the authentication of the data blocks in an aggregated way. First, the homomorphic authentication tags are created in the manner of digital signatures; they are unforgeable under the computational Diffie-Hellman (CDH) assumption. Second, it is impossible to generate a valid proof if the cloud server does not encrypt the sampled data blocks, because the proof is a linear aggregation of the tags. Therefore, the verifiability of the data encryption is realized.
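The aggregation property that the verifiability argument relies on can be checked end to end in a few lines. The self-contained sketch below reproduces, in the same toy group as the earlier sketches, the identity ∏_i ϕ_i^l_i = (∏_i H(𝕀_M||i)^l_i · ∏_j u_j^Q_j)^w behind the challenge-response in Verify and Audit. Note that a real verifier checks this relation through a bilinear pairing against the public key W = g_2^w and never touches w itself, so the final assertion here is only a sanity check of the algebra, not the actual verification equation.

```python
import hashlib
import random

P, G = 2**255 - 19, 2          # assumed toy modulus and base (see earlier sketches)
ORDER = P - 1

def H(i):
    """Toy hash-to-group for block index i of a fixed (hypothetical) file id."""
    e = int.from_bytes(hashlib.sha256(f"file-001||{i}".encode()).digest(), "big") % ORDER
    return pow(G, e, P)

# toy file: n = 5 blocks of s = 3 sectors, owner key w, public u_j = g^{x_j}
blocks = [[random.randrange(1000) for _ in range(3)] for _ in range(5)]
w = random.randrange(1, ORDER)
u = [pow(G, random.randrange(1, ORDER), P) for _ in range(3)]
tags = []
for i, blk in enumerate(blocks, 1):
    acc = H(i)
    for uj, m in zip(u, blk):
        acc = acc * pow(uj, m, P) % P
    tags.append(pow(acc, w, P))            # phi_i = (H(i) * prod_j u_j^{m_ij})^w

# challenge: a random subset of block indices with coefficients l_i
challenge = {1: random.randrange(1, ORDER), 3: random.randrange(1, ORDER)}

# prover aggregates sampled sectors and tags into a constant-size response
Q = [sum(l * blocks[i - 1][j] for i, l in challenge.items()) for j in range(3)]
sigma = 1
for i, l in challenge.items():
    sigma = sigma * pow(tags[i - 1], l, P) % P

# homomorphic identity the proof relies on (checked here with w for illustration)
rhs = 1
for i, l in challenge.items():
    rhs = rhs * pow(H(i), l, P) % P
for uj, Qj in zip(u, Q):
    rhs = rhs * pow(uj, Qj, P) % P
assert sigma == pow(rhs, w, P)
```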
The erasability of the data is achieved based on Intel SGX. An enclave is created for the file when the cloud server receives it, and the enclave is used to maintain the decryption key. The deletion of the data is achieved when the enclave is destroyed: destroying the enclave permanently erases the information it holds, so the decryption key is lost once the enclave is destroyed. Thus, the encrypted file can never be decrypted, and the file is permanently deleted. The auditability of data leakage is achieved based on the smart contract. The smart contract ensures the automatic execution of the service-level agreement between the data owners and the cloud service provider. The condition that triggers the penalty is a data leakage incident, so the data owner needs to prove to the blockchain nodes that they hold the leaked data. This proof is generated in the same way as the proof of data encryption, so both rest on the same assumption. § CONCLUSION In this paper, we present a secure and verifiable data deletion scheme that leverages zero-knowledge proofs to verify the encryption of the outsourced data without retrieving the ciphertexts. The deletion of the encryption keys is guaranteed by Intel SGX. The proposed scheme implements secure interfaces to perform data encryption and decryption for secure cloud storage, and it utilizes a smart contract to enforce that the operations of the cloud service provider follow the service level agreements with data owners and to penalize a service provider that discloses the cloud data on its servers. As the proposed scheme enables the cloud server to handle server-side encryption, it is particularly suitable for popular secure cloud storage services.
http://arxiv.org/abs/2307.04636v1
20230710152529
On the importance of illustration for mathematical research
[ "Rémi Coulon", "Gabriel Dorfsman-Hopkins", "Edmund Harriss", "Martin Skrodzki", "Katherine E. Stange", "Glen Whitney" ]
math.HO
[ "math.HO", "01-02, 01A65, 01A67, 00-02" ]
On the importance of illustration for mathematical research Rémi Coulon, Gabriel Dorfsman-Hopkins, Edmund Harriss, Martin Skrodzki, Katherine E. Stange, Glen Whitney August 12, 2023 ============================================================================================================ In the last decade, it has become increasingly possible for researchers to augment their experience of abstract mathematics with immersive sensory experience: 3D-printed or CNC-milled models, the ability to walk through impossible physical spaces with virtual reality, and the potential to explore high-dimensional mathematical spaces through computer visualisation, to name a few. Now much more than simply an aid to understanding, these tools have reached a level of sophistication that makes them indispensable to many frontiers of mathematical research. To preview one particular case recounted below, the tantalizing structure visible in Figure <ref> (and many others like it) led to conjectures and proofs that would likely otherwise have been inaccessible. The list of examples of research driven by illustration is rapidly expanding in recent years. We use the term illustration to encompass any way one might bring a mathematical idea into physical form or experience, including hand-made diagrams or models, computer visualization, 3D printing, or virtual reality, among many others. We will discuss instances of this interplay in fields ranging from representation theory to geometry and many others. Many readers will also be aware of the recent and celebrated solution of the einstein problem with the hat monotile and its chiral version, the spectre <cit.>. Illustration is beginning to find a home at programs like the special semester in Illustrating Mathematics in Fall 2019 at the Institute for Computational and Experimental Research in Mathematics (ICERM) and the Institute for Advanced Study (IAS)/Park City Math Institute (PCMI) virtual program in Summer 2021,[See <http://illustratingmath.org/> for links to these two programs, along with many other resources.] and a community is forming around many modern tools. Of course, the importance of illustration to research is not new: abstraction was linked to plane diagrams in the work of the ancients, including Euclid's Elements or the Chinese treatise The Nine Chapters on the Mathematical Art. Precise three-dimensional models were produced by skilled artisans in the 19th century, notable examples of which remain in the collections at the Institut Henri Poincaré[<https://patrimoine.ihp.fr/>] or Göttingen University[<https://sammlungen.uni-goettingen.de/sammlung/slg_1017/>], among many other institutions. When computer visualization first became widely available in the 1980's, the Geometry Center was founded at the University of Minnesota, with a mission to exploit these new tools on behalf of mathematics. But we are now at another cusp: modern technological tools have suddenly made 3D models and virtual reality widely available, and computation and computer visualization are more accessible and more powerful than ever. We can now collect huge mathematical datasets and examples, and it has become urgent to develop ways to interact immersively with this data. Making full use of modern tools is not without its challenges: beyond the obvious technical challenges and software learning curves, there are important questions about how an illustration, much like a statistic or an experiment, can subtly mislead the researcher, or miss the essential mathematical pattern sought.
Researchers often individually reinvent the necessary skill sets as they seek to advance their own projects, and these projects are pushing the boundaries of the possible. But by building a discipline around this enterprise, we can develop its full potential to advance mathematical research. § SOME HIGHLIGHTS FROM THE HISTORY OF MATHEMATICAL ILLUSTRATION Illustration of mathematics goes back as far as mathematical ideas themselves. In fact, some of the earliest evidence we have for abstract thinking comes from human-made designs, for example the cross-hatched carvings in Blombos Cave in South Africa, potentially from 73,000 years ago <cit.>. A little more recently, the middle-eastern tradition of geometry presented in Euclid's Elements provides a structural link between statements deduced from axioms and figures made with straight edge and compass. These two tools provide a physical realization of the two key objects (straight lines and circles) described by the axioms. Euclid's diagrams give a map to help follow (or discover!) the chain of deduction in a proof. Conversely, the proof validates the image (which could otherwise mislead by error or the selection of a non-generic example). This approach leads at the conclusion of Book 1 to a proof of the Pythagorean theorem; see Figure <ref>. In Chinese mathematics, this theorem is the 勾股 (Gougu) theorem. In the classic Nine Chapters on the Mathematical Art (九章算術), it plays a key role in applying the arithmetical mathematics of the text to geometric problems, for example in measuring altitude. The Chinese tradition also gives an elegant visual proof of the result by rearranging triangles, as in Figure <ref>. Although the Chinese proof is not considered rigorous by modern standards, Euclid was also criticized by Bertrand Russell when he wrote “A valid proof retains its demonstrative force when no figure is drawn, but very many of Euclid’s earlier proofs fail before this test.” <cit.>. This criticism reveals one of the challenges of mathematical illustration. A powerful example comes from a well-known “proof” that all triangles are equilateral, wherein a slightly misleading diagram (shown in Figure <ref>) can be used together with an otherwise correct proof. Disallowing these particular subtle errors requires axioms capturing the meaning of “between,” which took considerable work by David Hilbert to formulate <cit.>. A related pitfall – when a good illustration, overused, can become a pair of blinders – is illustrated by the following example. In the Elements, the concept of number is based on the concept of length. So the squares in the Pythagorean theorem are actual squares (the areas of which are equal), not squared numbers. In the 11th century algebra treatise of Omar Khayyam, although he gives solutions to equations with higher powers than three, he also states: “Square-square, which, to the algebraists, is the product of the square by itself, has no meaning in continuous objects. This is because how can one multiply a square, which is a surface, by itself? Since the square is a two-dimensional object … and two-dimensional by two-dimensional is a four dimensional object. But solids cannot have more than three dimensions.” <cit.>. The relation between number and length was also an important factor in the European reluctance to consider negative numbers. A line, after all, cannot have negative length.
In contrast, negative quantities are used freely in the Nine Chapters, where arithmetic is the foundational idea, with geometry built from it. In Europe the development of the number line, starting with John Wallis, gave an alternative illustration of number (see Figure <ref>) with the capacity to include negative quantities as numbers in their own right . Powers and negative numbers are but two examples of a productive pattern of mathematics developing from the tension between illustration and symbolic idea. The study of complex numbers advanced significantly with the concept of the complex plane, and then allowed a new algebraic approach to the geometry of the plane. Both quaternions and matrices were developed to try to extend that understanding to higher dimensions. In the case of real numbers, although the symbolic ideas would refine the illustrations needed, it was not until the late nineteenth century when fully symbolic definitions were developed, such as Dedekind cuts and Cauchy sequences. At that point the need for illustrations as foundational objects was removed, although the potential for developing intuition and challenging what might be done with the concepts remained . Projective geometry, first developed (as perspective) by artists as a tool to create realistic images, provided one such challenge. These ideas were explored mathematically by Johannes Kepler, Gérard Desargues and Blaise Pascal. In the early nineteenth century, perspective was developed by Gaspard Monge into “descriptive geometry” for the training of engineers in constructing forts and later developed and axiomatized in the foundational work by Jean-Victor Poncelet <cit.>. In turn this work would be key in establishing models for non-euclidean geometry, explored axiomatically by Nikolai I. Lobachevsky and János Bolyai <cit.>. In this case it was such models, themselves illustrations, that convinced mathematicians of the existence and interest of the non-euclidean geometries. Projective geometry also spurred the study of algebraic geometry. In the late nineteenth century an industry emerged to reveal the surfaces constructed in this field and their properties, such as cone singularities and embedded straight lines. One pioneer was Alexander Brill, a student of Alfred Clebsch with a degree in architecture. Following the work of Peter Henrici (another student of Clebsch), Brill made sliceform paper models of surfaces. He later worked with Felix Klein in Munich to set up a laboratory for the design and production of mathematical objects. This lab grew into a company that, when it was taken over by Martin Schilling in 1911, had a catalogue of over 400 models. His work combined deep mathematical understanding with a knowledge of printing and construction from his family business <cit.>. The need to combine mathematical knowledge with fabrication techniques is also highlighted by a story of missed opportunity: how to make physical patches of hyperbolic planes. In addition to his disk model (often called the Poincaré disk model), Eugenio Beltrami also attached together strips of paper to approximate the surface. Other examples used paper polygons connected to make a sort of hyperbolic “soccer ball.” These paper models are often fragile, and the rigidity of the paper means that it cannot change its local geometry; thus such models are crude approximations. 
Roughly a century later, Daina Taimiņa realised that crocheting could produce far more resilient surfaces, with local stretching that meant the negative curvature was more smoothly distributed <cit.>. An example of this medium of representation is shown in Figure <ref>. In fact, similar techniques had been used to create ruffles in scarves and skirts for decades. If the methods of fiber arts had earlier been considered seriously and not dismissed as “work for women,” researchers could have had the opportunity to handle robust hyperbolic planes far sooner. § THE INCREDIBLE POTENTIAL FOR MATHEMATICAL ILLUSTRATION Turning to recent developments, the work of Lionel Levine, Wesley Pegden, and Charles K. Smart provides an excellent example of the value of illustration as a research tool. Their Annals of Mathematics paper The Apollonian structure of integer superharmonic matrices <cit.> was motivated by the study of Abelian sandpiles on ℤ^2: place a large number N of sand grains at the origin, and allow any position with at least four grains to distribute those grains equally to its four neighbours. The stable configuration that results from this simple system displays impressive large-scale structure that can be discovered through computer visualization (see Figure <ref>). Especially striking is the vivid visual impression that the structure continues to refine at larger N toward a continuum scaling limit, which was proven earlier by Pegden and Smart . To describe the PDE governing this process, the individual periodic tilings in the regions of the limit must be understood. They are each governed by an integer superharmonic matrix. Levine, Pegden, and Smart generated a picture of the set of integer superharmonic matrices, and were astonished to see the familiar fractal structure of an Apollonian circle packing (Figure <ref>). Each circle of the packing was associated to a periodic pattern appearing in the scaling limit. Through extensive computer investigation, the authors were able to determine the intricate recursive relationships between the patterns for circles generated from one another (`ancestors' overlap and merge to form `descendent' patterns according to complicated rules). These recursions led to a difficult inductive proof that the set did indeed have the Apollonian structure evident in experiments. The development of these results provide a perfect example of the role illustration can play in the cycle of conjecture, theorem, and proof. Without the data available through large-scale computer experimentation and the ability to explore it visually, the question of the scaling limit may not have been raised at all, and the recursive proof of their main result would likely not have been discovered. Another area where research is intertwined with illustration is in the study of William Thurston's geometrization conjecture , proved by Grigori Perelman. This key tool in our understanding of 3-manifolds implies, for instance, the famous Poincaré conjecture. Geometrization states that any compact topological 3-manifold can be cut into finitely many pieces, each of which carries a geometric structure. There are eight possible such structures, known as Thurston geometries. Some of them are rather familiar to mathematicians, such as the 3-dimensional euclidean and hyperbolic spaces or the 3-sphere. Despite the fact that Thurston's geometries have been intensively studied, the more exotic geometries such as Nil and Sol still defy our “Euclidean-grown” spatial intuition. 
Keeping in mind the well-established power of our physical and visual intuition to aid geometrical research, Rémi Coulon, Elisabetta Matsumoto, Henry Segerman, and Steve Trettel developed virtual reality software to immerse the user in any of the eight Thurston geometries <cit.> (see Figure <ref>). Besides building the much-needed intuition for these spaces, the development of the software itself raised mathematical questions. The meshes used in most animations must be replaced with raymarching techniques, which require computation of distances between objects. But, for example, there is no closed formula for the distance in Nil or Sol! Thus, the development of the algorithms themselves becomes a mathematical result in its own right. Work on Thurston's geometries has very often been closely tied with illustration. For example, the study of Spheres in Sol by Matei P. Coiculescu and Rich Schwartz in Geometry and Topology (positively) answers an old open question, whether metric spheres in Sol are homeomorphic to S^2 <cit.>. Each step of the proof was found after numerous graphical experiments, and 3D printing brings yet another perspective (see Figure <ref>). For an example at the intersection of algebraic geometry and number theory, a few key illustrations have helped drive developments in the field of p-adic analytic geometry. At the same time, illustrating the p-adic analogs of complex analytic manifolds presents unique challenges, not the least of which is the fact that the p-adic numbers themselves are topologically a Cantor set. Nevertheless, clever and meaningful illustrations of p-adic analogs to the complex upper half-plane and complex unit disk have proved incredibly fruitful. An illustration of Vladimir Drinfeld's p-adic upper half plane as tubular neighborhoods of Bruhat-Tits trees (Figure <ref>) clarified the behavior of the action of GL_2(ℚ_p) by Möbius transformations. Understanding this action was instrumental in the construction of p-adic analytic uniformization of elliptic curves (reflecting the famous complex analytic uniformization of elliptic curves as quotients of the complex upper half plane). Similarly, Peter Scholze's illustrations of the adic unit ball (Figure <ref>) provide access to the foundational geometric construction in his theory of perfectoid spaces <cit.>. The act of illustrating the central geometric objects of p-adic analysis has proven both beneficial and uniquely challenging, demanding a systematic and critical approach. An example arising somewhat further afield of geometry is the work of Allen Knutson, Terence Tao, and Christopher Woodward in representation theory <cit.>. Knutson and Tao introduced the notion of honeycombs (subsets of the plane as in Figure <ref>) to solve a longstanding open problem: Alfred Horn's conjectured shape of the polyhedral cone (sometimes called the Littlewood-Richardson cone) of triples of eigenvalue spectra (λ, μ, ν) for Hermitian matrices A, B, C which satisfy A + B + C = 0. This sum-of-Hermitian-matrices problem has applications to perturbation theory, quantum measurement theory, and the spectral theory of self-adjoint operators. Knutson and Tao were able to show that there exist such Hermitian matrices with the specified spectra if and only if there exist honeycombs with a specified boundary. They used this correspondence to prove Horn's conjecture. The honeycomb formalism also led naturally to a polynomial time algorithm to decide whether a triple of spectra can be realized by Hermitian matrices.
In a follow-up, Knutson, Tao, and Woodward extended the study of honeycombs to define puzzles (Figure <ref>), which they described as replacing the Schubert calculus in past approaches to the Hermitian matrices problem, and used geometric arguments to give a complete characterization of the facets of the cone <cit.>. Puzzles and honeycombs provide an example of the power of rephrasing an algebraic problem as one about visual objects, where we can draw on other types of intuition. In what circumstances can we expect these sort of insightful geometric versions to exist for algebraic problems? When a geometric analog exists, it naturally exhibits additional features – can we then find new corresponding objects in the original problem? For example, what do the vertices of a honeycomb actually represent? There are, of course, many more examples. Among these, the most famous may be the computer exploration of the Mandelbrot set and fractal geometry in the 1980's (Figure <ref>). In the 1990's, Jeffrey Weeks created SnapPea (which now exists as SnapPy under the guidance of Marc Culler and Nathan Dunfield[<http://snappy.computop.org>]) as part of his doctoral thesis <cit.>, to explore the cusp structures of hyperbolic 3-manifolds. Its use inspired David Gabai, Robert Meyerhoff, and Peter Milley to invent mom structures to answer questions of the volumes of hyperbolic 3-manifolds <cit.>. In the same decade, the Geometry Center founded by Al Marden was focused on the use of computer visualization in mathematics.[<http://www.geom.uiuc.edu/>] It hosted mathematicians such as Eugenio Calabi, John Horton Conway, Donald E. Knuth, Mumford, and Thurston, among others, and produced the GeomView software used to create some famous early computer visualizations, including the sphere eversion[Outside In, (1994), <http://www.geom.uiuc.edu/docs/outreach/oi/>] and illustrations for knot theory.[Not Knot, (1991), <http://www.geom.uiuc.edu/video/NotKnot/>] Illustration has shown its importance in virtually all areas of mathematics, from random tilings in combinatorics, to diagrammatic approaches to algebra, to Apollonian circle packings and Schmidt arrangements in number theory, and their higher dimensional analogs, to mention just a few. The examples above focus on pure mathematics, which is poised to join a great many other areas of scientific endeavour embracing illustration. In applied mathematics, illustration has already made great strides. Consider for instance the process of Alan H. Schoen, when describing the gyroid decades before it was mathematically proven to be a minimal surface. He worked with both a sculpture of the surface and various models in Computer-Aided Design / Modelling (CAD/CAM), which ultimately led to the structure being found in various lipid and liquid crystalline systems <cit.>. Other fields, like mathematical geometry processing rely equally on quantitative measures and qualitative visualizations for judging the quality of their results <cit.>. Still, a back-and-forth between the development of mathematical procedures and their application to real-world data yields results that are well-grounded in mathematical quality guarantees, yet efficient and relevant for their applications. In the field of exploratory data analysis, visualizations even form the main tool for finding research results. Here, large, possibly high-dimensional, datasets are investigated for patterns by embedding them, e.g., as 2D scatter plots that can then be inspected by domain experts. 
With this technique, in 2020, a novel type of anti-tumor cell was discovered <cit.>. None of these research results would have been possible without the utilization of illustrations. Furthermore, this last example utilized non-linear dimensionality reduction techniques for the visualization of high-dimensional data. These techniques were themselves the result of research driven by the desire for better illustrations. The very closely allied field of computation in mathematics is a little ahead of illustration in its maturity as a tool for mathematical research. To give just one significant example in number theory, much recent activity has centered around the multi-million-dollar Simons Collaboration on Arithmetic Geometry, Number Theory, and Computation,[<https://simonscollab.icerm.brown.edu/>] whose mission states: “Our common perspective is that advances in computational techniques accelerate research in arithmetic geometry and number theory, both as a source of data and examples, and as an impetus for effective results. The dynamic interplay between experiment, theory, and computation has historically played a pivotal role in the development of number theory.” The work supported by the collaboration is rapidly expanding the L-Functions and Modular Forms Database,[<http://www.lmfdb.org>] an online database of mathematical objects (including visualizations) that is at the center of much modern progress in number theory.[See the extensive list of publications arising from the collaboration: <https://simonscollab.icerm.brown.edu/publications/>.] The discipline of mathematical computation is supported by a number of journals[Consider for instance “Advances in Computational Mathematics”, <https://www.springer.com/journal/10444>, the “Journal for Computational and Applied Mathematics”, <https://www.sciencedirect.com/journal/journal-of-computational-and-applied-mathematics>, or the “Journal of Computational Mathematics”, <https://www.jstor.org/journal/jcompmath>.] and has engendered areas of research in their own right, such as computational geometry. Illustration appears to be following a similar trajectory. As it becomes more accessible and pervasive it demands rigorous and careful study, leading to the development of mathematical illustration as a discipline in its own right. § ILLUSTRATION AS A DISCIPLINE Thurston once said, “mathematicians usually have fewer and poorer figures in their papers and books than in their heads” <cit.>. Although the power of good illustrations to advance mathematical knowledge is clear, they are not simple to produce. The challenges to creating powerful and trustworthy illustrations come on many levels. On the one hand, some challenges are technical and concern rather practical questions regarding the production of mathematical illustrations. Especially with newer technologies like virtual reality or 3D modeling, the learning curves are steep and while there are general tutorials available, just a handful are targeting issues specific to the illustration of mathematics.[A noteworthy example for introductory material, aimed at illustration of mathematics, is the Processing tutorial of Roger Antonsen, to be found online: <https://rant.codes/pcmi/>.] Consider for instance <cit.> for a nice discussion of some of the challenges of 3D printing for mathematical illustration. On the other hand, there are challenges within the mathematics itself. The objects to be illustrated do not necessarily come with a description that lends itself to a suitable illustration. 
Thus, a necessary initial step is the translation of the underlying mathematical object into a form that allows illustration in the first place. However, this transformation is usually not enough by itself. Subsequent steps aim at making the illustration effective, which can entail bridging the gap between the theoretical and the computational, crafting a responsive and immersive experience, or ensuring the illustration actually imparts the desired aspects of the mathematical object. In particular the last part implies important theoretical considerations: What exactly do we want to illustrate? And how do we do so faithfully, i.e., without creating wrong impressions of the mathematical object illustrated? Mathematics is not the first field of research to tackle these difficulties. There are parallels to be found in the development of the scientific method and statistical methods for the natural sciences: Which experimental designs and statistics can be relied upon for developing conjectures and conclusions? Cornerstones of the scientific methods were laid down, such as the important notion of falsifiability of a scientific theory. Similarly, statistical methods amplified their usefulness and trustworthiness when expanded from pure descriptive statistics to inference statistics and statistical tests to assert the validity of results. So in fact, all scientific fields have progressed by examining head-on some of the questions raised by their methodologies. The question of illustrating well has been asked in statistics and data visualization, as explored in Darrell Huff's best-selling book How to Lie with Statistics, which became a standard college text. The pioneering and richly illustrated books of Edward Tufte and Tamara Munzner on data visualization established that field in its own right. Every year, new research in data visualization is discussed at various venues, such as the Institute for Electrical and Electronics Engineers (IEEE) VIS meeting or the EuroVis conference, and published in outlets like the IEEE Transactions on Visualization and Computer Graphics. As it matures, the data visualization community addresses meta-questions on its research, such as where “the value of visualization” lies <cit.> or “Are we making progress in visualization research?” <cit.>. Thus, the example of data visualization provides a pattern of development that the field of mathematical illustration might follow. However, in comparison, mathematical illustration is just taking the first steps on its journey towards being a research field. It is still facing basic challenges with regard to creating and evaluating the illustrations it produces. As an example of these challenges, consider the images in Figure <ref> showing polynomial roots near i in the complex plane. The leftmost is an image of all roots of polynomials of degree 3 with integer coefficients between N and -N, where here N=10 <cit.>. The rightmost is an image of all roots of polynomials with coefficients from {-1,1} and degree no more than D, where in this case D = 13. In both, in the region around i, there appears to be a hole shaped like two ellipses overlapping at right angles. How to interpret this shape? It turns out that at left it is very much an artifact of the algorithm for creating these images. If you consider the picture as an approximation of all cubic roots (by allowing N to tend to infinity), there are infinitely many such polynomials. By limiting N, we are looping through them in a growing hypercube in the coefficient space. 
The corners of this cube are the corners jutting in toward i, and as the cube expands in the coefficient space, this hole will get filled in. If instead of looping through coefficient space in a growing cube, we choose a different ordering, the limiting shape changes. This is shown in the center image of Figure <ref>. On the right, however, we can think of approximating the set of all roots of polynomials with coefficients ± 1 by allowing D to tend to infinity. In this case, the size and shape of the void remains essentially fixed, no matter how large D is taken. So this hole `really exists' in the picture! The shapes one sees at the boundaries of the limiting set of roots are explained in terms of fractal geometry and certain symmetries of this set.[These features are beautifully described by John Baez on his personal website: <https://math.ucr.edu/home/baez/roots/>.] As another example of the challenges discussed above, the virtual reality versions of Thurston's geometries of <cit.> are a profound way to experience these spaces, but can feel overwhelming and nearly psychedelic, as our brains struggle to make sense of what we are seeing. As an alternative, for several of the geometries, it is possible to place the geodesics of the geometry into familiar euclidean space as curves (see Figure <ref>). The interplay between these two methods of illustration can be much more enlightening than either one alone. The mathematical arguments that are developed to explain how one view can predict the other can end up as the basis of a mathematical proof. Conjectures and mathematical arguments about the space can quickly be evaluated by predicting their effect on these illustrations. § LOOKING FORWARD Illustrations have been used both historically and in recent state-of-the-art research projects to expand the boundaries of knowledge in pure mathematics. Other fields of research, such as statistics and microbiology, have systematized visualization, and studied it in its own right. However, as our gallery of examples shows, the quality of illustrations in pure mathematics varies, and there is no common framework to create, discuss, or evaluate them. To further the possibilities that illustrations provide, there needs to be a dedicated community to tackle the next important problems. These include, among others: * How to identify illustrations that have rich potential to provide insight? * How to identify (and mitigate) the ways that illustrations can mislead and distract? * How to measure the fidelity of an illustration; are perceived patterns a result of its construction or the underlying mathematics? * How can we harness the processing power and pattern-recognition capabilities of the human visual system? * How can we empower a next generation of mathematical illustrators to create and leverage sophisticated illustrations? * And how do we increase professional recognition of the illustration of mathematics? Exploring these questions will lay the foundation of a discipline built around the illustration of mathematics, providing powerful tools for the advancement of mathematical research.
http://arxiv.org/abs/2307.04014v2
20230708164616
Novel Pipeline for Diagnosing Acute Lymphoblastic Leukemia Sensitive to Related Biomarkers
[ "Amirhossein Askari-Farsangi", "Ali Sharifi-Zarchi", "Mohammad Hossein Rohban" ]
cs.CV
[ "cs.CV" ]
A. Askari Farsangi et al. Sharif University of Technology, Iran [email protected] {asharifi,rohban}@sharif.edu Novel Pipeline for Diagnosing Acute Lymphoblastic Leukemia Sensitive to Related Biomarkers Amirhossein Askari Farsangi1 Ali Sharifi Zarchi1 Mohammad Hossein Rohban1 August 12, 2023 ========================================================================================== Acute Lymphoblastic Leukemia (ALL) is one of the most common types of childhood blood cancer. The quick start of the treatment process is critical to saving the patient's life, and for this reason, early diagnosis of this disease is essential. Examining the blood smear images of these patients is one of the methods used by expert doctors to diagnose this disease. Deep learning-based methods have numerous applications in medical fields, as they have significantly advanced in recent years. ALL diagnosis is not an exception in this field, and several machine learning-based methods for this problem have been proposed. In previous methods, high diagnostic accuracy was reported, but our work showed that this alone is not sufficient, as it can lead to models taking shortcuts and not making meaningful decisions. This issue arises due to the small size of medical training datasets. To address this, we constrained our model to follow a pipeline inspired by experts' work. We also demonstrated that, since a judgement based on only one image is insufficient, redefining the problem as a multiple-instance learning problem is necessary for achieving a practical result. Our model is the first to provide a solution to this problem in a multiple-instance learning setup. We introduced a novel pipeline for diagnosing ALL that approximates the process used by hematologists, is sensitive to disease biomarkers, and achieves an accuracy of 96.15%, an F1-score of 94.24%, a sensitivity of 97.56%, and a specificity of 90.91% on ALL IDB 1. Our method was further evaluated on an out-of-distribution dataset, which posed a challenging test and had acceptable performance. Notably, our model was trained on a relatively small dataset, highlighting the potential for our approach to be applied to other medical datasets with limited data availability. § INTRODUCTION Leukemia is a type of cancer that affects the body's blood-forming tissues, including the bone marrow and lymphatic system <cit.>. Based on whether the leukemia is acute or chronic and whether it is lymphoid or myeloid, four main types of leukemia can be considered: ALL, AML, CLL, and CML <cit.>. Diagnosing leukemia through the examination of blood smear images is a common method used by hematologists <cit.>. While additional tests may be necessary for a more complete understanding of the patient's condition, experts are able to determine the presence and type of leukemia based on the number and shape of different types of white blood cells <cit.>. This suggests that deep learning has great potential for developing computer models for diagnosing leukemia from related microscopic images. Among all the four types of leukemia, acute lymphoblastic leukemia (ALL) has special diagnostic significance because an early start to treatment can save a patient’s life. This significance grows when we consider that 75 percent of cases involve children under the age of 14 <cit.>. The detection of blast cells in the blood and bone marrow makes it a suitable target for diagnosis using microscopic images. 
Therefore, studying the diagnosis of ALL using deep learning models is of particular importance in order to improve the accuracy and speed of diagnosis for this common and serious disease. The performance of deep learning models is highly dependent on the size of the dataset used for training. In medical applications, obtaining large datasets can be a challenge, and many datasets are small in size. A popular ALL dataset is the ALL IDB provided by Scotti et al. <cit.>, which contains images of both ALL and normal patients. It is quite small in size. Another dataset, the Raabin dataset <cit.>, was recently introduced and contains a variety of data classes, but it has not yet been much explored. Several classifiers have been proposed for diagnosing leukemia from related microscopic images <cit.>. These classifiers can be categorized based on their target classes. Some classifiers have been designed to classify more than two classes, and they often incorporate the ALL IDB dataset as part of their training due to its importance for ALL diagnosis and its availability <cit.>. However, it's important to consider the issue of dataset bias <cit.> when combining datasets, but some methods may not have paid enough attention to this point. In contrast, other methods have focused on the two-class problem, specifically classifying ALL from the Normal class, and the ALL IDB dataset has been widely used for this purpose <cit.>. In the following, we focus only on the two-class classifiers that distinguish ALL from normal. We can categorize these classifiers based on their input type. Some of these classifiers perform on single-cell images such as those in the ALL IDB 2 dataset, which means they can only perform on processed data in the form of cropped images <cit.>. On the other hand, other classifiers accept images similar to what can be seen under microscopes, such as those in ALL IDB 1 <cit.>. In order to become practical models, classifiers of the first type only judge the image of single cells, while what is available are microscopic images containing a large number of white blood cells. Therefore, not only are object detection methods required in these cases, but it is also necessary to aggregate the results of all cells in a way to make a judgment about the patient. It seems that the second category of models is in better condition. These models do not need object detection methods, but the second problem somehow still exists for them. There is a possibility that in a patient with ALL, there are no signs of the disease in a single microscopic image and that different parts of the patient's blood sample must be examined to make an accurate diagnosis. This is exactly what expert doctors do in this situation. In other words, labels are weak in this case, and a multiple-instance learning setup is needed. Therefore, a third category of models that handle multiple images of the same patient should be considered, and our model belongs to this category of models. Although there are some examples of multiple-instance learning methods for other problems related to diagnosis from blood microscopic images, there are only a few methods in the literature for diagnosing leukemia. Therefore, our work aims to fill this gap and explore the potential of this approach for improving leukemia diagnosis results <cit.>. Finally, one of our model's most important strengths is its reliability and sensitivity to related biomarkers, and we achieved these properties by applying a special training method. 
We demonstrated that removing blast cells from microscopic images of patients with ALL made the model have difficulty diagnosing the patient as having ALL, whereas removing normal cells made the model accurately diagnose the patient as having ALL. Since our model evaluates patients based on multiple images, testing this property required a completely independent dataset, as ALL IDB's samples are single images. We utilized a subset of Raabin's dataset as an out-of-distribution test set. In general, testing deep learning models on out-of-distribution test sets is a challenging task, and most methods tend to fail during these tests <cit.>. However, our model achieved acceptable accuracy in this challenging setting. § TRAINING CONSIDERATIONS FOR SMALL MEDICAL DATASETS In medical applications of machine learning, in addition to common evaluation metrics such as accuracy and precision, a potential qualitative criterion for assessing model reliability is the similarity between the model's decision-making process and that of a human expert. Evaluating models based on this criterion can provide insight into their performance and trustworthiness. As an illustration of this approach, we conducted a reimplementation of the method proposed by Ahmed et al. <cit.> using their dataset and visualized the model's attention map with the GradCAM algorithm <cit.>. Through our analysis, we identified that the model makes decisions based on unexpected patterns in the input image, which we refer to as "shortcuts," and that these shortcuts do not have any significant medical meaning. This highlights a potential flaw in these models, specifically, the issue of overfitting. Overfitting can occur for a variety of reasons, including model complexity and dataset quality. Complex models are required for extracting high-level features from image data for proper processing; however, when the dataset is small in comparison to the model's complexity, the model's variance increases, resulting in overfitting. Another factor that can contribute to overfitting is when the training data is not clean, causing the model to learn shortcuts. To avoid this issue, it is crucial to use a clean dataset that minimizes the chances of spurious correlations. To address the potential causes of overfitting, two potential solutions are to increase the dataset size through augmentation methods and to manually clean the dataset. It should be noted that we have an implicit assumption that we aim to train a classifier that performs similarly to the human expert classifier. This means that, in the expert's opinion, applied augmentation should not change the label of the image. In our case, cell morphology is an important criterion for classifying a specific cell as normal or diseased. As a result, we are not allowed to use augmentations like shearing that change the cell's morphology, and we have to use augmentations like rotating and translating instead. However, we can see that the majority of the proposed methods have not paid enough attention to this point. Using these considerations, we discovered that the issues with shortcuts persisted and that we needed to find another solution. Two of these visualizations are shown in Fig. <ref>. In the following, we express our method to overcome this problem. § METHOD §.§ Pipeline We presented a pipeline that performs the decision-making process step by step. 
This approach allows us to observe the model's procedure for producing the final result and helps us design each step based on the actual process used by hematologists. Since the presence of blast cells plays a decisive role in the existence of this disease, specialists look for these cells among the white blood cells when examining the patient's microscopic images and make a decision based on that. The first step in the computer simulation of this process should be a cell detector that detects white blood cells in the input image. The second step is to analyze each cell image using a criterion that is sensitive to whether a cell is a blast or not. In the third step, we must summarize the previous step's results and describe the patient's condition using a set of parameters. Finally, based on the generated report, we must determine the existence of this disease. Fig. <ref> depicts the overall layout of this pipeline. The following goes over each step in detail. §.§.§ Object Detection For the white blood cell detector, we used a pre-trained Faster RCNN network with a ResNet50 backbone <cit.>. We fine-tuned it using the ALL IDB 1 dataset, which was manually annotated. §.§.§ Feature Extraction To extract informative features from images, large networks with numerous training parameters are often required. However, training these networks from scratch on small datasets often leads to overfitting. A common solution is to use networks pre-trained on ImageNet, which act as global feature extractors that produce rich feature vectors for each input image. In this study, we utilized a pre-trained AlexNet network with fixed weights as the global feature extractor <cit.>. To customize these feature vectors for our problem, we added a trainable 256-node, fully connected layer that takes these feature vectors as input. In our study, we refrained from fine-tuning the global feature extractor since the computationally intensive process of training the LSTM-based architectures does not permit simultaneous optimization of all weights. §.§.§ Profiling There are various ways to aggregate the extracted features from the cell images of a patient. In this work, we used an LSTM-based architecture for this purpose. As the LSTM network accepts input series of any length, the model has no problem with the different numbers of white blood cell images from different patients. In addition, it allows us to analyze the set of microscopic images for each patient, increasing the model's reliability and accuracy. This is reasonable because hematologists do not just look at one part of a patient's blood sample; instead, they move the blood slide under the microscope and make decisions based on what they see at several points. We used an LSTM network with 256 internal nodes, which produces a vector of length 64 as the patient's feature vector. §.§.§ Final Classification To classify the patient's feature vector, we used only one fully connected layer with two nodes in the final classification. A schematic sketch of this feature-extraction, profiling, and classification stack is given after the dataset description below. §.§ Dataset Our research utilized two different datasets: the ALL IDB and the Raabin dataset. The ALL IDB dataset is the most widely used dataset in the literature, consisting of two subsets. The first subset, ALL IDB 1, contains 108 normal or diseased blood microscopic images, with blast cell centers annotated in ALL cases. The second subset, ALL IDB 2, includes 260 images of single white blood cells labeled as normal or cancerous.
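The description above translates into a fairly small model. The following PyTorch sketch wires together a frozen AlexNet feature extractor, the trainable 256-node embedding, the LSTM aggregator, and the two-node classifier. The exact way the 256-unit LSTM yields a 64-dimensional patient vector is not spelled out in the text, so the 256-to-64 projection (and the omission of the empty-image padding) is our interpretation, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models

class ALLPipeline(nn.Module):
    """Sketch of the profiling model described above: frozen AlexNet features,
    a trainable 256-unit embedding, an LSTM aggregator, and a 2-way classifier.
    Layer sizes follow the text; the 256->64 projection is our reading of it."""
    def __init__(self):
        super().__init__()
        alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
        alexnet.classifier = alexnet.classifier[:-1]   # drop final layer -> 4096-d features
        self.backbone = alexnet                        # global feature extractor
        for p in self.backbone.parameters():
            p.requires_grad = False                    # keep AlexNet weights fixed
        self.embed = nn.Linear(4096, 256)              # trainable adaptation layer
        self.lstm = nn.LSTM(input_size=256, hidden_size=256, batch_first=True)
        self.project = nn.Linear(256, 64)              # patient feature vector
        self.classifier = nn.Linear(64, 2)             # ALL vs. normal

    def forward(self, cell_images):                    # (batch, series_len, 3, 224, 224)
        b, t = cell_images.shape[:2]
        feats = self.backbone(cell_images.flatten(0, 1))        # (b*t, 4096)
        feats = torch.relu(self.embed(feats)).view(b, t, 256)   # (b, t, 256)
        _, (h_n, _) = self.lstm(feats)                          # last hidden state
        patient_vec = torch.relu(self.project(h_n[-1]))         # (b, 64)
        return self.classifier(patient_vec)                     # class logits

# usage on a dummy series of 15 cell crops per patient
model = ALLPipeline()
logits = model(torch.randn(2, 15, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```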
The Raabin dataset, on the other hand, comprises 938 single-cell images of normal white blood cells. §.§.§ Cell Detection Dataset To train the faster RCNN network, a dataset with blood microscopic images containing bounding boxes around white blood cells is required. Although ALL IDB 1 is a suitable option, it only provides annotations for blast cells, not normal ones. Therefore, we manually annotated the normal cells in these images to utilize this dataset for our purpose. §.§.§ Generated Dataset For training LSTM Inputs in the form of series of the same size are required for training LSTM networks. Furthermore, because LSTM networks are data-hungry models, a large dataset of image series of the same length is required for successful convergence. Because there is no such dataset in our case, we decided to create one. We generate a dataset based on the following assumption: Assumption: The series of white blood cell images belong to ALL class if and only if there were at least one blast cell in it. Based on this assumption, we generate different training image series of the same length by randomly selecting proper cell images from ALL IDB 2 and single cell images from the Raabin dataset. We choose a different number of cells because we expect the number of cells in each input image to be different. To make the sequences the same size, we add the required number of empty images to the set of selected images. Because LSTM networks are sensitive to the order of inputs, and in our case, the order of images is unimportant, we decided to shuffle the image series to reduce LSTM sensitivity to the image series order. Fig. <ref> depicts an example of one of these image series.  §.§ Training The proposed pipeline's training was completed in three stages. Faster RCNN was trained on its dataset to become a white blood cell detector in the first step. LSTM and the classifier had been trained in two other steps. These two steps are explained below. §.§.§ Making sensitivity to the blast cells To train the network to pay attention to blast inputs, we first trained the LSTM and classifier on a one-length image series. Because of our dataset generation assumption, the classifier output should be considered cancer if and only if the input image is a blast cell. If this training process is successful, we expect the LSTM and classifier to extract information from AlexNet features that correlates with whether the input cell is a blast or not. §.§.§ Training for analyzing image series We optimize our network on the generated image series of length 15 in the final training step. Our goal in this step is to teach the model to apply what it learned about blast cells in the previous stage to the analysis of a series of images. § RESULTS §.§ Classification Performance To ensure a valid evaluation of our model on the ALL IDB 1 dataset, which includes cropped cells from the ALL IDB 2 dataset used in training, we first removed the corresponding cells from the evaluation set. Fig. <ref> provides a sample of this modified dataset. Our model achieved accuracy and F1-score of 96.15% and 94.24%, respectively. Table <ref> shows the accuracy achieved by our method in comparison to other methods. In the following, we evaluated our model on an out-of-distribution test set consisting of multiple images for each patient, requiring classification in a multiple-instance setup. To do this, we utilized a subset of the Raabin Leukemia dataset, which contains numerous microscopic images for each patient. 
Since the number of patients in this dataset is small, we split the images of each patient into several groups and treated each group as a separate patient. The groups were formed so that the total number of white blood cells is the same for every group; we call this constant number the partition size. For a partition size of 50, our model attained an accuracy of 72.88% and an F1-score of 71.01%. The average accuracy and F1-score across partition sizes ranging from 20 to 100 were 71.78% and 70.35%, respectively. It is important to note that testing on an out-of-distribution test set is challenging, and it is common for model performance to decrease significantly under such conditions. For completeness, we also report the performance of the cell detector: the Faster RCNN was trained on 85 percent of the ALL IDB 1 images and achieved a mean average precision (mAP) of 96.03% on the remaining 15 percent. §.§ Sensitivity to Blast Cells Our training strategy leads us to expect the model to be sensitive to ALL biomarkers, particularly blast cells. To test this hypothesis, we removed the blast cells from the image series of ALL patients and checked whether this made it difficult for the model to assign these new samples to the ALL class. We also expected that removing normal cells from these series would improve the model's performance. Performing this test requires the coordinates of blast and normal cells in each dataset. While these annotations were available for the ALL IDB 1 dataset, the Raabin leukemia dataset does not provide them. We therefore trained a Faster RCNN on the ALL IDB 1 data to detect blast and normal cells; this object detector achieved a mean average precision of 93.41% on the test split of ALL IDB 1. On the ALL IDB 1 dataset, the model's recall under blast removal, normal-cell removal, and the unmodified (no attack) condition was 43.90%, 97.56%, and 97.56%, respectively. The decrease of 53.66% in recall under blast removal indicates that our hypothesis was correct and that the model is indeed sensitive to blast cells. For the test on the Raabin dataset, we evaluated the model's performance on groups of images of varying sizes. For each patient, we selected as many images as the group size and used object detection to form three different cell series: one with only blast cells, one with only normal cells, and one with all cells. Note that the lengths of these three series are not equal for a given patient; the sum of the first two equals the third. We plotted the model's recall under blast removal, normal-cell removal, and the unmodified condition. Removing blast cells reduced the model's recall by an average of 18.84% across all group sizes. Conversely, removing normal cells increased recall by an average of 18.69%. This finding confirms our hypothesis even on an out-of-distribution test set and demonstrates the importance of blast cells in our model's decision-making process. Fig. <ref> provides a visual representation of these findings. Note that although we used the detector trained on ALL IDB 1 to evaluate the model on the Raabin leukemia dataset, there is no ground truth for the Raabin dataset, so object-detection errors may be present in the reported results.
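The removal test just described can be summarized by the short sketch below. It is an illustration only: detect_cells, classify_series, and the per-patient image lists are placeholders for the trained detector, the trained pipeline, and the data, and grouping by a fixed number of images per group is a simplification of the procedure above.

```python
# Sketch of the blast-removal sensitivity test (illustrative; detect_cells and
# classify_series are placeholders for the trained detector and pipeline).
def recall_under_removal(all_patients, detect_cells, classify_series, group_size=50):
    """all_patients: list of image lists, one list per ALL patient.
    Returns recall of the ALL label for the unmodified, blast-removed,
    and normal-removed conditions."""
    hits = {"no_removal": 0, "blast_removed": 0, "normal_removed": 0}
    counts = {cond: 0 for cond in hits}
    for images in all_patients:
        for start in range(0, len(images), group_size):
            group = images[start:start + group_size]
            cells = [c for img in group for c in detect_cells(img)]  # detected single cells
            blasts = [c.crop for c in cells if c.is_blast]
            normals = [c.crop for c in cells if not c.is_blast]
            series = {"no_removal": blasts + normals,   # all cells
                      "blast_removed": normals,         # only normal cells remain
                      "normal_removed": blasts}         # only blast cells remain
            for cond, s in series.items():
                if not s:                               # skip empty series
                    continue
                counts[cond] += 1
                if classify_series(s) == "ALL":
                    hits[cond] += 1
    return {cond: hits[cond] / max(counts[cond], 1) for cond in hits}
```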
§ ABLATION STUDY §.§ Cell numbers effect Based on our understanding that blast cells are not uniformly distributed throughout a blood smear, we hypothesize that increasing the number of white blood cells per patient will improve our model's performance in detecting ALL. Fig. <ref> shows the accuracy as a function of the number of images per patient, and, as expected, we observe a positive correlation between the number of images and model performance. Because the evaluations were performed on different group sizes, all metrics reported below are averages of the results obtained for group sizes ranging from 20 to 100. §.§ Different feature extractors Our method can employ several pre-trained feature extractors, including AlexNet, InceptionV3 <cit.>, ResNet50 <cit.>, VGG16 <cit.>, and ViT-base-patch-16 <cit.>. We compared these models, and the results are presented in Table <ref>. The table shows that the AlexNet feature extractor yields the best performance, so we use it in the tests that follow. §.§ LSTM effect The main reason for using an LSTM in our architecture is its ability to aggregate the per-cell results. One might suspect that, because the first training stage teaches the model to identify blast cells from single-cell images, the second stage merely generalizes this result by counting blast cells; in other words, the LSTM layer might act as little more than a linear operator. Testing this hypothesis directly is not straightforward, but we can define the following test. For each group of patient images, we fed the extracted cells individually to the model to label them as either blast or normal cells. Each patient in the test set was thus assigned two values: the number of normal cells and the number of blast cells. Using these counts, we trained a perceptron to classify each patient. Since the training and testing sets were identical for this perceptron, its accuracy can be considered ideal for a counting-based classifier. The average accuracy of this ideal perceptron was 8.51 percent lower than the average accuracy of the LSTM model. We therefore conclude that the LSTM layer is capable of more than just linear operations on cell counts. §.§ Pre-training effect We also investigate the potential benefits of pre-training, the first stage of our training process. We hypothesize that pre-training can accelerate model convergence by providing a controlled environment in which the model can recognize the important biomarker for ALL, namely blast cells, and learn to differentiate them from normal cells. To evaluate this hypothesis, we trained two models, one with pre-training and one without, with all other conditions kept identical. The model without pre-training has an average accuracy 2.05 percent lower than that of the pre-trained model. Pre-training therefore appears to be a beneficial step in our training process, leading to improved model accuracy. §.§ Impact of training series length To assess the impact of the training series length on model performance, we trained several classifiers with series lengths ranging from 1 to 32. The results are shown in Fig. <ref>. The plot suggests that there is an initial positive correlation between series length and model performance.
However, beyond a certain threshold, the impact of series length on performance diminishes, and longer series lengths do not significantly improve the model's accuracy. § CONCLUSION In this work, we aimed to develop a machine learning-based model for diagnosing acute lymphoblastic leukemia from blood smear images. Since the size of the datasets is small in this field, training networks in an end-to-end manner leads the model to find shortcuts for making decisions instead of using medically meaningful patterns. To address this issue, we introduce a pipeline inspired by the hematologists' approach, consisting of four main steps: detecting white blood cells, analyzing each cell, aggregating results, and decision-making. Compared to end-to-end training, this approach has several advantages. First and foremost, the training process is a kind of search among all feasible classifiers, and if we want to achieve a classifier similar to a human expert, we must constrain this search space. Each data point is a kind of constraint, and that's why the size of the dataset becomes important. In our problem, we do not have access to such large datasets, so we should apply the constraints in another way. We did this by constraining the classifier architecture to the described pipeline. In addition, this approach allows us to monitor the performance of individual components and to find and fix possible faults. Another important thing that we did for training our pipeline was to train the final classifier in two stages. The first step can be considered an auxiliary task that makes the classifier sensitive to the biomarkers of ALL, and the second step is for the model to learn to generalize the knowledge learned in the first step. Finally, we show that our model is sensitive to ALL biomarkers. Furthermore, we analyzed the impact of our design choices, such as the use of the AlexNet feature extractor, the LSTM layer, the pre-training step, and the length of the training series. In our work, we addressed the need to redefine the problem of acute lymphoblastic leukemia (ALL) diagnosis as a multiple-instance learning problem, which has not been done before. To overcome this, we generated a suitable training dataset and evaluated our model on an out-of-distribution test set, achieving acceptable results. It seems that the model's sensitivity to staining is its major weakness, and further work should focus on addressing this issue to improve its performance. As a result, one potential solution could be to train feature extractors that are less sensitive to staining. unsrt
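To make the pipeline and the training-data generation described in this work concrete, the following sketch outlines one possible implementation. It is a minimal illustration under stated assumptions rather than the authors' code: the class and function names (BlastProfiler, make_series), the use of PyTorch/torchvision, the 224x224 input size, and the cell objects carrying image and is_blast attributes are our assumptions; only the quantities stated in the text (frozen AlexNet features, a trainable 256-node layer, an LSTM with 256 internal nodes, a 64-dimensional patient vector, a two-node classifier, and series of length 15 with empty-image padding and shuffling) come from the paper.

```python
# Minimal sketch (not the authors' code) of the profiling model and the
# synthetic series generation, assuming PyTorch/torchvision.
import random
import torch
import torch.nn as nn
from torchvision import models

class BlastProfiler(nn.Module):
    """Frozen AlexNet features -> trainable 256-d layer -> LSTM(256) -> 64-d patient vector -> 2 classes."""
    def __init__(self):
        super().__init__()
        alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
        self.backbone = alexnet.features            # frozen global feature extractor
        self.pool = alexnet.avgpool
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.project = nn.Linear(256 * 6 * 6, 256)   # trainable 256-node layer
        self.lstm = nn.LSTM(input_size=256, hidden_size=256, batch_first=True)
        self.to_patient = nn.Linear(256, 64)         # 64-d patient feature vector
        self.classifier = nn.Linear(64, 2)           # final 2-node classifier

    def forward(self, series):                       # series: (batch, length, 3, 224, 224)
        b, t = series.shape[:2]
        x = series.flatten(0, 1)
        with torch.no_grad():
            x = self.pool(self.backbone(x)).flatten(1)
        x = torch.relu(self.project(x)).view(b, t, 256)
        _, (h, _) = self.lstm(x)                     # use the last hidden state
        patient_vec = torch.relu(self.to_patient(h[-1]))
        return self.classifier(patient_vec)

def make_series(blast_cells, normal_cells, length=15, p_all=0.5):
    """Generate one fixed-length training series; the label follows the paper's
    assumption: the series is ALL iff it contains at least one blast cell."""
    n_cells = random.randint(1, length)              # varying number of real cells
    is_all = random.random() < p_all
    pool = (blast_cells + normal_cells) if is_all else normal_cells
    cells = [random.choice(pool) for _ in range(n_cells)]
    if is_all and not any(c.is_blast for c in cells):
        cells[0] = random.choice(blast_cells)        # enforce at least one blast
    series = [c.image for c in cells] + [torch.zeros(3, 224, 224)] * (length - n_cells)
    random.shuffle(series)                           # order is irrelevant, so shuffle
    label = int(any(c.is_blast for c in cells))
    return torch.stack(series), label
```

In such a sketch, the detector stage (for instance torchvision's fasterrcnn_resnet50_fpn, fine-tuned on the annotated ALL IDB 1 images) would supply the cell crops consumed by make_series and BlastProfiler.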
http://arxiv.org/abs/2307.04471v1
20230710104047
Phonon-assisted coherent transport of excitations in Rydberg-dressed atom arrays
[ "Arkadiusz Kosior", "Servaas Kokkelmans", "Maciej Lewenstein", "Jakub Zakrzewski", "Marcin Płodzień" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas", "physics.atom-ph", "quant-ph" ]
Institute for Theoretical Physics, University of Innsbruck, 6020 Innsbruck, Austria; Department of Applied Physics, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven, The Netherlands; Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels, Barcelona, Spain; ICREA, Passeig Lluis Companys 23, 08010 Barcelona, Spain; Instytut Fizyki Teoretycznej, Uniwersytet Jagielloński, Łojasiewicza 11, 30-348 Kraków, Poland; Mark Kac Center for Complex Systems Research, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland; Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels, Barcelona, Spain. Polarons, which arise from the self-trapping interaction between electrons and lattice distortions in a solid, have been known and extensively investigated for nearly a century. Nevertheless, the study of polarons continues to be an active and evolving field, with ongoing advancements in both fundamental understanding and practical applications. Here, we present a microscopic model that exhibits a diverse range of dynamic behavior, arising from the intricate interplay between two excitation-phonon coupling terms. The derivation of the model is based on an experimentally feasible Rydberg-dressed system with dipole-dipole interactions, making it a promising candidate for realization in a Rydberg-atom quantum simulator. Remarkably, our analysis reveals a growing asymmetry in Bloch oscillations, leading to a macroscopic transport of non-spreading excitations under a constant force. Moreover, we compare the behavior of excitations coupled to either acoustic or optical phonons and demonstrate the robustness of our findings against an on-site random potential. Overall, this work contributes to the understanding of polaron dynamics and their potential applications in coherent quantum transport, and offers valuable insights for research on Rydberg-based quantum systems. Phonon-assisted coherent transport of excitations in Rydberg-dressed atom arrays Marcin Płodzień August 12, 2023 ================================================================================ § INTRODUCTION Polarons are quasi-particles that emerge from the coupling of electrons (or holes) with the ions of a crystalline structure in polarizable materials. The idea of electron self-trapping due to lattice deformations dates back to Landau's seminal 1933 paper <cit.>, but the modern concept of a polaron as an electron dressed by phonons was formulated in 1946 by Pekar <cit.>, and developed later by Fröhlich <cit.>, Feynman <cit.>, Holstein <cit.>, and Su, Schrieffer and Heeger <cit.>. Since their discovery, polarons have been extensively investigated, both theoretically and experimentally, not only in the field of condensed matter physics (for reviews see Refs. <cit.>), but also in various chemical and biological contexts, e.g., in protein propagation <cit.>. In particular, in the modeling of charge migration in DNA molecules, it is assumed that a localized polaron is formed in the helix near a base due to an interaction between a charge carrier and a phonon. When a uniform electric field is applied, the polaron moves at a constant velocity, and a current flows through the chain <cit.>.
The charge carrier transport takes place due to coupling between carrier and phonons; in contrast, in the absence of phonons, an external constant force induces Bloch oscillations <cit.>, where the mean position of the carrier is constant while its width periodically changes in time. Polarons have been studied in many, seemingly different experimental setups, ranging from ultracold ions <cit.>, polar molecules <cit.>, mobile impurities in Bose and Fermi gases <cit.>, ultracold Rydberg atoms <cit.>, to quantum dots on a carbon nanotube <cit.>. Although each of these platforms possesses its unique strengths and benefits, recently there has been an exceptional outburst of interest in quantum simulation and computation with Rydberg atoms, which provide a remarkable level of flexibility for executing quantum operations and constructing quantum many-body Hamiltonians <cit.>. While the latter can contribute to our comprehension of the static properties of many-body systems, their main benefits are centered around exploring the complex dynamics displayed by these systems. In particular, in the context of polarons, it has been demonstrated that the dipole-dipole interactions between distinct Rydberg-dressed states can result in coherent quantum transport of electronic-like excitations <cit.>, which can further be coupled to optical phonons <cit.>. The paradigmatic one-dimensional topological Su-Schrieffer-Heeger (SSH) model <cit.> describing the soliton formation in long-chain polyacetylene due to excitation-phonon coupling, has been realized in Rydberg arrays <cit.>. In this paper, we continue along this path and present theoretical studies of an implementation of a microscopic model featuring the interplay of Su-Schrieffer-Heeger (SSH) and Fröhlich electron-phonon interaction terms under the influence of an external force and disorder. In particular, we focus on the directional transport of an excitation interacting with phonons. We indicate an excitation-phonon coupling regime where the competition between Bloch oscillations and interactions results in the coherent transport of a well-localized wave packet over a long distance. Moreover, we show the robustness of such a coherent transport of well-localized wave packets to the on-site random potential, indicating that a relatively strong disorder does not affect significantly the transport properties. The paper is divided into three parts. In the first part, Sec. <ref>, we describe the physical setup and derive the effective Hamiltonian in Rydberg-dressed atomic arrays. The second part, described in Section <ref>, focuses on the dynamics of the system under experimentally relevant parameters. In this section, we observe the macroscopic transport of the center of mass and a transition between Bloch oscillations and moving polaron regimes. In the third part, Sec. <ref>, we comprehensively analyze the previously derived microscopic model, which exhibits a rich phase diagram due to the interplay of two different electron-phonon coupling mechanisms. Finally, we compare the behavior of excitations with acoustic and optical phonons and demonstrate the robustness of our results. § THE MODEL AND ITS HAMILTONIAN We consider a one-dimensional chain of N equidistant Rydberg atoms with lattice constant x_0 and positions x_j = j x_0, confined in a periodic trap, implemented either by an optical lattice <cit.>, an optical tweezer array <cit.>, a Rydberg microtrap <cit.>, or a painted potential <cit.>. 
We assume that the spatial motion of the atoms is suppressed by the strong confinement of each Rydberg atom in local potential minima. Although the atomic motion is frozen, it is remarkable that such a Rydberg system can display highly non-trivial dynamics. In particular, the induced dipole-dipole interactions between distinct Rydberg-dressed states can lead to the emergence of coherent quantum transport of electronic-like excitations <cit.>. In the following, we first briefly repeat the derivation of the Hamiltonian that characterizes the dynamics of single excitations <cit.>. The purpose of this recap is to modify the setup in order to incorporate nearly arbitrary on-site potential terms. Next, after introducing phonons into the system <cit.>, we derive an effective nearest-neighbor Hamiltonian that includes two excitation-phonon coupling terms, which we comprehensively study in the forthcoming sections, focusing on the dynamics in the presence of an external constant field. §.§ Single excitation Hamiltonian in arbitrary potentials We assume that each Rydberg atom of can be initially found in one the ground state hyperfine levels, |g⟩ or |g'⟩. By applying far-detuned dressing laser fields, with effective Rabi frequencies Ω_s, Ω_p and detunings Δ_s, Δ_p respectively, these two hyperfine states can be coherently coupled to selected highly excited Rydberg states, |s⟩ or |p⟩, with principal quantum number n≫ 1 and different angular momenta. Consequently, each atom can be found in one of the two Rydberg dressed states <cit.>, which are a slight admixture of Rydberg states to the atomic ground states, |0⟩_j ≈ |g⟩_j + α_s |s⟩_j |1⟩_j ≈ |g'⟩_j + α_p |p⟩_j , with α_s/p = Ω_s/p/[2Δ_s/p] and j denoting the position of an atom. Treating α_s, α_p as perturbation parameters in van Vleck perturbation theory, Wüster at al. <cit.> have shown that the dipole-dipole interaction can exchange the internal states of a neighboring pair, e.g. |1⟩_1 |0⟩_2 → |0⟩_1 |1⟩_2. This process can be viewed as a hopping of an excitation from j=1 to j=2 lattice site, which conserves the number of excitations. The perturbation analysis can be extended to a chain of N atoms, where the effective Hamiltonian in the single excitation manifold (up to the fourth order in α_s and α_p) reads <cit.>Ĥ_0 = ∑_j n̂_j (E_2 + E_4+ A_j) + ∑_j,k A _jkâ_j^†â_k, where â_j (â^†_j) denote an annihilation (creation) operator of excitation on site j, while A _j = ħα_s^2 α_p^2 (∑_k j1/1-U̅_kj^2) (Δ_s + Δ_p), A_jk = ħα_s^2 α_p^2 U̅_jk/1-U̅_jk^2 (Δ_s + Δ_p), with U̅_jk = C_3/[ħ| x_i- x_j|^3 (Δ_s + Δ_p)] and C_3 quantifying the transition dipole moment between the Rydberg states, describe perturbative dipole-dipole interactions. Finally, E_2 and E_4 are constant energy shifts of the second and fourth order, respectively, E_2 /ħ = (N-1)α_s^2 Δ_s + α_p^2 Δ_p, E_4 /ħ = (N-1) α_s^4 Δ_s+ α_p^4 Δ_p + (N-1)α_s^2 α_p^2 (Δ_s+ Δ_p) . Although in principle constant energy terms could be always ignored as they do not contribute to the dynamics of excitations, let us consider now a scenario where the Rabi frequency Ω_p depends on the atomic position on the lattice, i.e., we assume that Ω_p →Ω_p(j) ≡Ω_p [1 + δΩ(j) ], where δΩ(j) is arbitrary, but small correction of the order (α_p/s)^2. With this assumption, and by retaining terms up to the fourth order, the effective Hamiltonian in Eq. (<ref>) acquires an additional term, namely Ĥ = Ĥ_0 + ħα_p^2 Δ_p ∑_j δΩ(j) n̂_j . 
Because the term proportional to α_p^2δΩ(j) is of the same order as A_j, it can be incorporated into the definition of A_j in Eq. (<ref>). With this simple modification, we have gained a position-dependent effective potential term that can strongly affect the dynamics of excitations. Although the potential term can be tailored almost arbitrarily, from now on we consider one of its simplest forms, i.e., we choose δΩ(j) = 2 α_s^2 (F j + ϵ_j ). The first term in the parentheses, being linearly proportional to position j, emulates the presence of a constant external field F. The second term, with ϵ_j being a random variable, gives rise to the on-site potential disorder. Note that both terms lead to localization of the excitation either due to Stark localization <cit.> in a constant tilt, F, or Anderson localization <cit.> due to random ϵ_j. As explained in the next part, the situation is not so straightforward. §.§ Excitation-phonon Hamiltonian In this part, we relax our previous assumption that the atoms of the array are completely immobile. Although we still assume that no atom can move through the lattice, we now let them vibrate in the vicinity of their local equilibrium points. This will affect, as we shall see, the dynamics of excitations. We consider now a scenario where an atom in the j-th lattice site and with mass m may oscillate with a frequency ω_0 = √(k/m) inside a local potential well, that can be approximated by a quadratic potential k/2 (x-j x_0)^2 ≡k x_0^2/2 (u_j)^2, with k being the force constant and where u_j denotes dimensionless distortion from the local equilibrium position. The motion of atoms can be quantized u_j →û_j and described by a simple quantum harmonic oscillator. This vibrational motion is responsible for the distortion of an atomic array and can be considered as a phonon. Since the Hamiltonian of the previous section describing the motion of single excitations strongly depends on the position of atoms, phonons can propagate through space due to the coupling to excitations. Before proceeding to derive the effective Hamiltonian of the system with phonon-excitation coupling, for clarity and simplicity we assume that α≡α_s=α_p , Δ≡Δ_s=Δ_p . Moreover, from now on we also fix the time and energy scales and go to the dimensionless units by dividing all the energy scales by 2ħα^4 Δ. Although the setup described in Section <ref> admits only dispersionless optical phonons that correspond to local vibrations of atoms around local minima, we consider here two different types of phonons. We proceed by writing the phononic Hamiltonian explicitly in terms of the dimensionless position and momentum operators û_j, p̂_j of local distortions, Ĥ_ph = ∑_j p̂^2_j/2 m_eff + m_effω_eff^2/2 (û_j - ηû_j-1)^2, with the effective dimensionless mass, m_eff = 2 m x_0^2 α^4 Δ / ħ, and the effective oscillator frequency, ω_eff =ω_0 / (2 α^4 Δ), ω_0 = √(k/m), where ω_0 is the bare frequency. By changing the parameter η in Eq. (<ref>), diverse phonon types can be achieved. In particular, η=0 corresponds to the aforementioned local vibrations (i.e., dispersionless optical phonons), and η=1 describes acoustic phonons. 
These two phonon types are characterized by the dispersion relation ϵ_q = ω_eff , 2 ω_eff|sin(q x_0/2)|, , which can be readily found by writing the phononic Hamiltonian (<ref>) in terms of its eigenmodes, Ĥ_ph = ∑_q ϵ_q(b̂^†_qb̂_q+1/2), where b̂^†_q (b̂_q) creates (annihilates) the phonon with quasi-momentum q, and are related to the local dimensionless momentum and position operators p̂_i, û_i of distortion by û_j = ∑_q 1/√(2 N ϵ_q m_eff)(b̂_q+ b̂^†_-q)e^iqjx_0, p̂_j = -i∑_q √(ϵ_q m_eff/ 2 N )(b̂_q - b̂^†_-q) e^iqjx_0. Having discussed the phononic degrees of freedom, we can now write the fully effective Hamiltonian governing the motion of single excitations coupled to phonons. The derivation is straightforward and requires: (i) the expansion of the position-dependent coefficients [given by Eq. (<ref>)] in the Hamiltonian (<ref>) of the previous section up to the first order in û_j, and (ii) dropping the next-to-nearest neighbor contributions <cit.>. By following these steps, we obtain the effective excitation-phonon Hamiltonian [cf. Fig. <ref>], which consists of four parts, i.e., Ĥ_eff = Ĥ_ph + Ĥ_ex + Ĥ_J + Ĥ_W, where Ĥ_ph is the phononic Hamiltonian, Eq. (<ref>), Ĥ_ex = J_0(â^†_j+1â_j + ) + ∑_j (j F +ϵ_j)â^†_jâ_j , describes excitations with the hopping amplitude J_0 = κ/(1-κ^2), κ = C_3/(2ħΔ x_0^3), experiencing an external constant force F, and a local on-site disorder ϵ_j. Finally, Ĥ_J = g_J∑_j (û_j+1 - û_j) â^†_j+1â_j + , Ĥ_W = g_W ∑_j (û_j+1 - û_j-1)â^†_jâ_j, are the notable SSH and Fröhling Hamiltonians <cit.>, respectively, that correspond to two different mechanisms of excitation-phonon couplings, with dimensionless coupling parameters g_J = -3κ(1+κ^2)/(κ^2-1)^2, g_W = -6κ^2/(κ^2-1)^2 . §.§ Equations of motion The full numerical analysis of the polaron dynamics on the many-body level is one of the most challenging computational tasks, due to the non-conserved total number of phonons in the system, which prevents it from working in a restricted, fixed particle-number Hilbert space sector of the phononic degrees of freedom. Additionally, even without a force F the effective Hamiltonian of the systems (<ref>) depends, in principle, on many parameters, namely J_0, g_W, g_J, ω_ eff and m_ eff, making the full analysis of the system even more challenging. To analyze the dynamical properties of the considered system, in the following, we assume that the phononic degrees of freedom are independent in each lattice site. We make the semiclassical approximation by applying the Davydov Ansatz <cit.>, i.e., we assume that phonons are in a coherent state and that the full wave function is a product state of the excitation and coherent phonons part, as | Ψ (t)⟩ = (∑_jψ_j(t)â_j^†)⊗( e^-i ∑_n[ u_j(t)p̂_j-p_j(t)û_j])|𝚟𝚊𝚌⟩, where |ψ_j(t)|^2 is a probability of finding an excitation at a site j, u_j(t) and p_j(t) are expectation values of phononic position and momentum operators. The equation of motion for ψ_j(t) and u_j(t) can be subsequently derived from a classical conjugate variable Heisenberg equations of motions using the generalized Ehrenfest theorem, see, for example, Ref. <cit.>. By following these steps, we obtain a closed set of coupled differential equations for the excitation amplitude ψ_j(t) and classical field u_j(t). 
The equations can be written in a concise form, as i ψ̇_̇j̇ = J_j ψ_j+1 + J_j-1ψ_j-1 + W_j ψ_j , ü_j = -ω_eff^2 D[u_j] + S[ψ_j], where the effective potential experienced by an excitation W_j(t) = j F +ϵ_j + g_W[u_j+1(t) - u_j-1(t)], and the effective hopping amplitude J_j(t) = J_0 + g_J[u_j+1(t) - u_j(t)] are both time-dependent functions due to the coupling to the gradient of phononic field u_j(t). As such, both W_j(t) and J_j(t) are responsible for the self-trapping of an excitation. Similarly, the phononic equation (<ref>) also depends on the excitation amplitude ψ_i(t) through the S[ψ_j] operator, given by S[ψ_j] = - g_W/m_eff (|ψ_j+1|^2 - |ψ_j-1|^2) - g_J/m_eff[ψ_j^*(ψ_j+1 - ψ_j-1) + ], which acts as a time-dependent source for the phonon field u_j(t). Finally, the phononic dispersion relation, given by Eq. (<ref>), is necessarily present in the phononic equation through the D[u_j] operator, D[u_j]= u_j , η = 0, 2u_j - u_j+1-u_j-1, η = 1, , which introduces a crucial difference in the propagation of optical (η=0) and acoustic (η=1) phonons <cit.>, which we investigate in the next sections. §.§ Analysed observables Throughout this article we choose the initial conditions ψ_j(0) = δ_j,0 and u_j(0) = u̇_j(0) = 0 for the equations of motion, Eq. (<ref>), that correspond to a single excitation on a central lattice site and initially unperturbed lattice. Without a phonon-coupling and for F=0, these initial conditions simply correspond to a quantum particle that spreads symmetrically in both lattice directions characterized by a constant Lieb-Robinson velocity <cit.>, so that its center of mass remains localized at the initial position. Contrary to the classical case, a quantum particle on a lattice will not even move in the presence of a constant force F, but instead, it starts to perform Bloch oscillations <cit.>. The situation is different in interacting systems, either in a case of particle-particle interactions <cit.>, which may further lead to disorder-free many-body localization <cit.>, or in the presence of phonons, which can induce transient polarons at the end of Bloch oscillation periods <cit.> (see also Ref. <cit.>). In this study, we investigate how the propagation of a single excitation is influenced by the two competing phonon-coupling mechanisms under the applied, constant force. Specifically, we aim at answering the two following questions: (i) how much does the excitation spread due to the coupling with phonons, and (ii) does its center of mass move in the presence of the constant force F? In order to respond to these questions we focus on three simple observables that can be calculated based on the local density measurements. First, we consider the participation ratio (PR), defined as <cit.>: (t) = (∑_j|ψ_j(t)|^4)^-1, where we have assumed a unit normalization of the wavefunction ∑_j|ψ_j|^2=1. The participation ratio PR is equal to 1 where excitation is localized on a single lattice site and equals N when is completely delocalized over the whole lattice. The second observable is the center of mass position of the wave packet, i.e., x(t) = ∑_j=-N/2^N/2 j |ψ_j(t)|^2. Moreover, in some cases, analyzing the ratio of the two quantities mentioned above can provide valuable insights. We define this ratio, denoted as ξ, as: ξ(t) = |x(t)|/(t). ξ is a quantity ranging from 0 to ξ_max = N/2. The maximum value ξ_max corresponds to a moving, maximally-localized, non-dispersive solution that has reached the boundary of the system. 
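For concreteness, these observables can be evaluated directly from the excitation amplitudes, as in the following minimal sketch (assuming Python/NumPy, with psi holding the amplitudes ψ_j on the sites j = -N/2, ..., N/2):

```python
# Sketch: observables from the excitation amplitudes psi_j (illustrative, NumPy assumed).
import numpy as np

def observables(psi):
    """Participation ratio PR, center-of-mass position x, and the ratio xi = |x| / PR."""
    prob = np.abs(psi) ** 2
    prob = prob / prob.sum()                     # enforce unit normalization
    sites = np.arange(len(psi)) - len(psi) // 2  # lattice sites j = -N/2, ..., N/2
    pr = 1.0 / np.sum(prob ** 2)                 # 1 on a single site, N when fully delocalized
    x = np.sum(sites * prob)                     # center-of-mass position
    return pr, x, np.abs(x) / pr
```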
As such, ξ can be viewed as an indicative measure for selecting well-localized solutions moving in one direction. Finally, it is worth mentioning that it is often not necessary to analyze the entire time range of the above observables. In fact, to discern various dynamic behaviors, it is usually sufficient to look at (t), x(t) and ξ(t) at the final evolution time t_f ≫ 1. For example, large (t_f) (relative to the system size N) suggests that excitation is not stable and has delocalized over a lattice. § POLARON DYNAMICS: EXPERIMENTAL CONSIDERATIONS In this section we elaborate on the results of the previous sections and study the dynamics of a Rydberg excitation under the presence of the external force F, solving the equations of motion for a physically relevant range of parameters. The effective Hamiltonian (<ref>) of the system relies on several effective, dimensional parameters, including m_eff = 2 m x_0^2 α^4 Δ / ħ, ω_eff =ω_0 / (2 α^4 Δ), as well as J_0, g_J, g_W, given by Eq. (<ref>) and Eqs. (<ref>). However, it is worth noting that the latter three parameters are not independent within our setup, and their values are determined by a single parameter κ = C_3 / (2ħΔ x_0^3). This provides us with significant flexibility in selecting appropriate physical parameters for our convenience. In the following, we choose the highly excited Rydberg states |s, |p of Rubidium-87 with principal quantum number n=50 and angular momentum equal to 0 or ħ, for which C_3 = 3.224 GHz×μ m^-3. We fix the lattice spacing x_0 = 2 μ, and the local trap frequency ω_0 =20 kHz. In the numerical simulations, we vary the dimensionless parameter κ between 0.80-0.86, which is equivalent to the change of the detuning Δ∼ 234-252 MHz, and corresponds to the dressing parameter α∼ 0.04. Importantly, by increasing κ we also increase the phonon coupling strength from around g_J/m_eff∼ g_W/m_eff∼-4.5 to g_J/m_eff∼ g_W/m_eff∼-8. Furthermore, we remind the reader that in our setup only the optical phonons (i.e., dispersionless vibrations) are experimentally relevant and, therefore, in this section we set η=0. Finally, we fix the value of the force at F=0.2, and we choose the system size to N=401. In order to characterize the transport properties of an excitation ψ_i(t), in the top panel of Fig. <ref> we plot its center of mass position x(t) and the corresponding participation ratio PR(t), see Eqs. (<ref>)-(<ref>) for the respective definitions. In the bottom panel, we additionally illustrate the ratio ξ = |x|/PR. All these quantities are plotted as a function of κ, at a fixed time t_f=2.1 T_B ≈ 66, where T_B = 2π/F is the Bloch oscillation period. We find that up to κ∼ 0.83 both x(t_f) and PR(t_f) are small (relative to the system size N) which corresponds to the Bloch oscillation-like dynamics where the phonon-influence is minimal. In contrast, phonons play important role above κ∼ 0.83 where the system dynamics is quite sensitive to the choice of microscopic parameters. Within the chaotic-like regime, the typical Bloch oscillation dynamics is completely disrupted, as the majority of solutions become delocalized across the lattice, leading to large values of PR(t). However, amidst this chaotic behavior, we also discover intervals of stability, characterized by peaks of ξ(t_f), where a substantial portion of the wave packet becomes well-localized and exhibits near-constant velocity motion. We illustrate those different dynamical behaviours in Fig. 
<ref>, where the first column, i.e., panels (a)-(d), show the time evolution of the excitation density |ψ_j(t)|^2, while the second column [panels (e)-(h)] illustrates the corresponding time evolution of the center of mass position x(t) and the participation ratio PR(t). In the first row (κ = 0.8), we observe almost perfect Bloch oscillations. However, upon closer examination, a subtle asymmetry becomes apparent, which is evident by a non-zero x(t). The asymmetry is enhanced for a higher κ = 0.83, as depicted in the second row of Fig. <ref>. Finally, the last two rows of Fig. <ref> illustrate the time evolution of the excitation density in the chaotic-like regime above κ∼ 0.83, cf. Fig. <ref>, where most of the solutions are delocalized over a lattice, as in Fig. <ref>(d) for κ = 0.86. In contrast, in Fig. <ref>(c) we illustrate a regular behaviour for κ=0.834, which lies inside one of the aforementioned stability windows. In this scenario, due to constructive interference after one Bloch oscillation period, a prominent portion of the wave function coalesces into a very narrow non-dispersive wave packet that moves with a nearly constant velocity. Overall, Fig. <ref> offers a comprehensive visual representation of the dynamic phenomena investigated in this section, shedding light on the varying dynamical behaviors and properties of the system with increasing phonon interaction. § DYNAMICAL PHASE DIAGRAMS OF THE EFFECTIVE HAMILTONIAN In the previous sections, we have derived and then analysed a microscopic Hamiltonian (<ref>), governing the dynamics of an excitation coupled to phonons through two different mechanisms, i.e., the SSH and Fröhling Hamiltonians, see Eq. (<ref>). While maintaining a close connection to the experimental platform, it is important to note that in the considered Rydberg setup, the phonon coupling strengths g_J and g_W are not independent. Instead, they can both be expressed in terms of a single parameter κ, as demonstrated in Eq. (<ref>). Consequently, investigating the interplay between these two competing phonon-coupling mechanisms within the current Rydberg platform becomes challenging. To address this limitation and explore the complete phase diagram in a more general context, in this section, we treat g_J and g_W as completely independent and fix other parameters. In the initial phase, as described in Section <ref>, our primary objective is to identify a stable polaron regime. Specifically, we aim to find a regime in which an initially localized excitation does not spread during the course of time evolution. Subsequently, in Section <ref>, we demonstrate the existence of stable islands where polarons can exhibit non-dispersive motion when subjected to a constant force, even in the presence of substantial disorder. Furthermore, in this part, we thoroughly examine the quantitative differences in dynamics of optical and acoustic phonons. In the following, we set the system size to N = 401, and solve the equations of motions in a fixed time interval t∈ [0,t_f = 16.5]. Unless explicitly stated otherwise, we also set m_ eff=0.5, ω_ eff = 10, and J_0 = 1. §.§ Polaron formation In the preceding section, we have already witnessed the emergence of a non-dispersive, self-trapped polaron through the excitation-phonon coupling. Building upon this observation, here we independently vary the two coupling strengths, g_J, and g_W, to identify a stable polaron regime. It is worth noting that the Hamiltonian of the system, as described by Eq. 
(<ref>), is invariant under the simultaneous transformation: u_j → - u_j, g_J → - g_J, and g_W → - g_W. Therefore, without loss of generality, we can assume g_J≥0. In Fig. <ref>, we present a phase diagram of the participation ratio PR calculated at the final evolution time for a broad range of values: g_J∈ [0,45] and g_W∈ [-16,20]. Each panel of Fig. <ref> corresponds to distinct values of m_ eff and η, as specified in the figure caption. In terms of the layout, the left (right) column corresponds to the optical (acoustic) phonons, and m_ eff increases from top to bottom. In all panels of Fig. <ref>, we observe wide regions with both extended states (warm colors) and well-localized solutions (dark blue colors), with the latter corresponding to stable, stationary polarons. We find a non-trivial dependence of the participation ratio on both coupling strengths. Moreover, we observe qualitatively similar behavior for both types of phonon; however, the acoustic phonons exhibit greater dynamic stability. This is evident from the presence of a chaotic-like region (the light blue dotted area) [compare with Fig. <ref> and see the discussion in Sec. <ref>]. Finally, we note that a decrease of the effective mass m_ eff stabilizes the excitation, supporting localized polaron formation. §.§ Robustness of coherent transport against disorder In this part, we focus on the parameter regime where a well-localized excitation can be transported over a long distance. Namely, after identifying stable polaron regimes, we proceed to apply a constant force to investigate the propagation of non-spreading solutions. For this analysis, we fix F=0.2, m_ eff=0.5 and select the coupling strengths within the range g_J ∈ [4,16] and g_W ∈ [8,20]. These regions are indicated by a dashed square in the bottom panels of Fig. <ref>. The results are presented in Fig. <ref>. The top row of Fig. <ref> illustrates the participation ratio, PR(t_f), for both optical [panel (a)] and acoustic [panel (b)] phonons. In both panels, we observe a shift in the boundary between extended and localized states due to the presence of the applied force. However, the prevalence of dark blue colors, indicating localized regimes, remains evident. The bottom row of Fig. <ref> displays ξ(t_f), as given by Eq. (<ref>). This quantity serves as a measure for selecting well-localized solutions propagating in a single direction. We observe stable transport islands of such solutions, indicated by warm colors. Panel (c) corresponds to optical phonons, while panel (d) corresponds to acoustic phonons. Finally, in Fig. <ref>, we examine the robustness of the non-dispersive moving solutions against on-site disorder, ϵ_j, as in Eq. (<ref>). The disorder is introduced by taking ϵ_j to be a pseudorandom variable drawn from a uniform distribution in [-W/2,W/2]. Panels (a) and (b) depict the time propagation of excitations for optical and acoustic phonons, respectively. Fig. <ref>(c) illustrates the center-of-mass position, while Fig. <ref>(d) presents the participation ratio evaluated at the final evolution time, both plotted as functions of the disorder amplitude W. The results are averaged over 200 independent realizations of disorder. Notably, the participation ratio for both acoustic and optical phonons remains relatively constant, providing evidence for the robustness of the polaron self-trapping mechanism, while the center of mass is transported over a significant distance.
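To illustrate how the dynamics underlying these results can be reproduced, the sketch below integrates the semiclassical equations of motion for ψ_j(t) and u_j(t) derived above with a standard fixed-step Runge-Kutta scheme. It is an illustration, not the authors' code: the use of NumPy, the RK4 integrator, the time step, periodic boundaries via np.roll, and the example couplings g_J = 8 and g_W = 12 (chosen inside the window scanned above) are our assumptions, while N = 401, t_f = 16.5, m_eff = 0.5, ω_eff = 10, J_0 = 1, and F = 0.2 follow the values quoted in the text; the truncated conjugate term in the source S[ψ_j] is taken to be the complex conjugate, which keeps the phonon force real.

```python
# Sketch: semiclassical integration of the coupled excitation-phonon equations
# (illustrative only; the scheme and parameter choices are ours, not the authors').
import numpy as np

def simulate(N=401, t_final=16.5, dt=1e-3, J0=1.0, gJ=8.0, gW=12.0,
             F=0.2, m_eff=0.5, w_eff=10.0, eta=0, disorder_W=0.0, seed=0):
    rng = np.random.default_rng(seed)
    j = np.arange(N) - N // 2                               # lattice sites
    eps = rng.uniform(-disorder_W / 2, disorder_W / 2, N)   # on-site disorder
    psi = np.zeros(N, complex); psi[N // 2] = 1.0           # excitation on the central site
    u = np.zeros(N); v = np.zeros(N)                        # distortions and their velocities

    def rhs(psi, u, v):
        # Periodic boundaries via np.roll (adequate while the packet stays away from the edges).
        up, um = np.roll(u, -1), np.roll(u, 1)              # u_{j+1}, u_{j-1}
        pp, pm = np.roll(psi, -1), np.roll(psi, 1)          # psi_{j+1}, psi_{j-1}
        W = j * F + eps + gW * (up - um)                    # effective on-site potential W_j
        J = J0 + gJ * (up - u)                              # effective bond hopping J_j
        dpsi = -1j * (J * pp + np.roll(J, 1) * pm + W * psi)
        D = u if eta == 0 else 2 * u - up - um              # optical vs acoustic phonons
        S = (-gW / m_eff * (np.abs(pp) ** 2 - np.abs(pm) ** 2)
             - gJ / m_eff * 2 * np.real(np.conj(psi) * (pp - pm)))
        return dpsi, v, -w_eff ** 2 * D + S

    for _ in range(int(t_final / dt)):                      # fixed-step RK4
        k1 = rhs(psi, u, v)
        k2 = rhs(psi + dt/2*k1[0], u + dt/2*k1[1], v + dt/2*k1[2])
        k3 = rhs(psi + dt/2*k2[0], u + dt/2*k2[1], v + dt/2*k2[2])
        k4 = rhs(psi + dt*k3[0], u + dt*k3[1], v + dt*k3[2])
        psi += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        u   += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        v   += dt/6*(k1[2] + 2*k2[2] + 2*k3[2] + k4[2])

    prob = np.abs(psi) ** 2
    pr = 1.0 / np.sum(prob ** 2)
    x = np.sum(j * prob)
    return pr, x, abs(x) / pr                               # PR(t_f), x(t_f), xi(t_f)
```

Scanning g_J and g_W with such a routine, and averaging over disorder realizations, reproduces the type of diagnostics shown in the figures discussed above.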
§ SUMMARY AND CONCLUSIONS In summary, we propose a quantum simulator with Rydbeg-dressed atom arrays for SSH-Frölich Hamiltonian allowing studies of polaron formation and dynamics. The interplay between two competing excitation-phonon coupling terms in the model results in a rich dynamical behavior, which we comprehensively analyze. In particular, our findings reveal the presence of asymmetry in Bloch oscillations allowing coherent transport of a well-localized excitation over long distances. Moreover, we compare the behavior of excitations coupled to either acoustic or optical phonons and indicate similar qualitative behavior. Finally, we demonstrate the robustness of phonon-assisted coherent transport to the on-site random potential. = Our analysis is restricted to weak lattice distortions related to a small number of phonons per lattice site, however, the proposed quantum simulator allows the studies of the excitation dynamics in strong distortion limit, as well as studies of a plethora of different scenarios, such as bi- and many-polaron dynamics, and investigation of the quantum boomerang effect <cit.> affected by the presence of phonons, both in a single-particle and many-body scenario. We believe, that our work opens up new avenues for research in Rydberg-based quantum simulators. A.K. acknowledges the support of the Austrian Science Fund (FWF) within the ESPRIT Programme ESP 171-N under the Quantum Austria Funding Initiative. S.K. acknowledges the Netherlands Organisation for Scientific Research (NWO) under Grant No. 680.92.18.05. ICFO group acknowledges support from: ERC AdG NOQIA; MICIN/AEI (PGC2018-0910.13039/501100011033, CEX2019-000910-S/10.13039/501100011033, Plan National FIDEUA PID2019-106901GB-I00, FPI; MICIIN with funding from European Union NextGenerationEU (PRTR-C17.I1): QUANTERA MAQS PCI2019-111828-2); MCIN/AEI/ 10.13039/501100011033 and by the “European Union NextGeneration EU/PRTR" QUANTERA DYNAMITE PCI2022-132919 within the QuantERA II Programme that has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No 101017733Proyectos de I+D+I “Retos Colaboración” QUSPIN RTC2019-007196-7); Fundació Cellex; Fundació Mir-Puig; Generalitat de Catalunya (European Social Fund FEDER and CERCA program, AGAUR Grant No. 2021 SGR 01452, QuantumCAT U16-011424, co-funded by ERDF Operational Program of Catalonia 2014-2020); Barcelona Supercomputing Center MareNostrum (FI-2023-1-0013); EU (PASQuanS2.1, 101113690); EU Horizon 2020 FET-OPEN OPTOlogic (Grant No 899794); EU Horizon Europe Program (Grant Agreement 101080086 — NeQST), National Science Centre, Poland (Symfonia Grant No. 2016/20/W/ST4/00314); ICFO Internal “QuantumGaudi” project; European Union’s Horizon 2020 research and innovation program under the Marie-Skłodowska-Curie grant agreement No 101029393 (STREDCH) and No 847648 (“La Caixa” Junior Leaders fellowships ID100010434: LCF/BQ/PI19/11690013, LCF/BQ/PI20/11760031, LCF/BQ/PR20/11770012, LCF/BQ/PR21/11840013). The work J.Z. was funded by the National Science Centre, Poland under the OPUS call within the WEAVE programme 2021/43/I/ST3/01142 as well as via project 2021/03/Y/ST2/00186 within the QuantERA II Programme that has received funding from the European Union Horizon 2020 research and innovation programme under Grant agreement No 101017733. A partial support by the Strategic Programme Excellence Initiative within Priority Research Area (DigiWorld) at Jagiellonian University is acknowledged. M.P. 
acknowledges the support of the Polish National Agency for Academic Exchange, the Bekker programme no: PPN/BEK/2020/1/00317. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union, European Commission, European Climate, Infrastructure and Environment Executive Agency (CINEA), nor any other granting authority. Neither the European Union nor any granting authority can be held responsible for them.
http://arxiv.org/abs/2307.04171v1
20230709133401
Effect of coat-protein concentration on the self-assembly of bacteriophage MS2 capsids around RNA
[ "LaNell A. Williams", "Andreas Neophytou", "Rees F. Garmann", "Dwaipayan Chakrabarti", "Vinothan N. Manoharan" ]
physics.bio-ph
[ "physics.bio-ph" ]
Effect of coat-protein concentration on the self-assembly of bacteriophage MS2 capsids around RNA LaNell A. Williams,^a Andreas Neophytou,^b Rees F. Garmann,^c,d,e Dwaipayan Chakrabarti,^b Vinothan N. Manoharan,^∗^c,a Self-assembly is a vital part of the life cycle of certain icosahedral RNA viruses. Furthermore, the assembly process can be harnessed to make icosahedral virus-like particles (VLPs) from coat protein and RNA in vitro. Although much previous work has explored the effects of RNA-protein interactions on the assembly products, relatively little research has explored the effects of coat-protein concentration. We mix coat protein and RNA from bacteriophage MS2, and we use a combination of gel electrophoresis, dynamic light scattering, and transmission electron microscopy to investigate the assembly products. We show that with increasing coat-protein concentration, the products transition from well-formed MS2 VLPs to “monster” structures consisting of multiple partial capsids to RNA-protein condensates consisting of large networks of RNA and protein. We argue that the variation in structure arises because the assembly follows a nucleation-and-growth pathway in which the nucleation rate depends sensitively on the coat-protein concentration. At high coat-protein concentration, multiple nuclei can form on each RNA strand, leading to malformed structures. Monte Carlo simulations with coarse-grained models of capsomers and RNA validate this physical picture. Our results provide insight into an important biophysical process and could inform design rules for making VLPs for various applications. § ^a Department of Physics, Harvard University, Cambridge, MA 02138, USA. ^b School of Chemistry, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK. ^c Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA. ^d Department of Chemistry and Biochemistry, San Diego State University, San Diego, CA 92182, USA ^e Viral Information Institute, San Diego State University, San Diego, CA 92182 USA ^∗ [email protected] § INTRODUCTION For positive-strand RNA viruses to replicate, coat proteins must assemble around the viral RNA to form new virus particles. <cit.> Certain features of this assembly process can be replicated in vitro, in the absence of host-cell factors. <cit.> For example, virus-like particles (VLPs) can be assembled from solutions of the coat protein and RNA of bacteriophage MS2. Wild-type MS2 particles have an icosahedral capsid (triangulation number T=3, diameter about 30 nm) containing one maturation protein and 178 coat proteins surrounding an RNA strand with approximately 3600 nucleotides. By contrast, MS2 VLPs that assemble in vitro lack the maturation protein required for infectivity. Nonetheless, they can adopt the same structure and size as wild-type MS2 virus particles. <cit.> This result supports the premise that RNA virus assembly is driven by free-energy minimization. However, the assembly process itself and the conditions under which it leads to well-formed structures are not yet well understood. In MS2, most previous work on this question has focused on the role of specific interactions between coat protein and the viral RNA.
Studies <cit.> on R17, a virus closely related to MS2, have shown that the overall yield of assembled VLPs decreases if the RNA does not contain a sequence called the translational operator that has a strong and specific affinity for coat protein. <cit.> Nonetheless, assembly proceeds in the absence of the operator, perhaps due to non-specific interactions between the coat protein and the RNA <cit.>. Therefore, specific RNA-protein interactions might affect the assembly rate and yield but do not seem to be essential to the assembly process. While these studies have established the relevance of RNA-protein interactions to the assembly process, they did not directly reveal the assembly pathway itself. More recent work involving interferometric scattering microscopy, a technique that can image individual VLPs as they form, shows that MS2 VLPs assemble by a nucleation-and-growth pathway <cit.> at near-neutral pH, salt concentrations on the order of 100 mM, and micromolar coat-protein concentrations. In this pathway, a critical nucleus of proteins must form on the RNA before the capsid can grow to completion. The size of the critical nucleus, estimated to be less than six coat-protein dimers, is associated with a free-energy barrier. Taken together with the previous experiments on the role of the RNA sequence <cit.>, these results show that MS2 assembly is a heterogeneous nucleation process, in which the nucleation rate is likely controlled by two factors: RNA-protein interactions and the coat-protein concentration. Arguably, the coat-protein concentration has a larger role in controlling the morphology of the VLPs than does the nature of the RNA-protein interactions (at least in in vitro experiments, where the coat-protein concentration is typically constant). The interferometric scattering microscopy experiments <cit.> showed that very few VLPs are formed at low (1 μM) concentration of MS2 coat-protein dimers, while well-formed capsids form at higher concentrations, and so-called “monster” capsids, consisting of multiple partially formed capsids on a single strand of RNA, form at even higher concentrations (several μM). These results suggest that the nucleation barrier, which controls the nucleation rate, depends sensitively on the coat-protein concentration. At low concentration, the nucleation rate is too small for capsids to form within the experimental time frame; at high concentration, the nucleation rate is so high that multiple nuclei can form on a single RNA strand, resulting in monster capsids. However, that study examined only a few protein concentrations, and the experiments were performed at low RNA concentration relative to protein. Here we use bulk assembly experiments to determine the assembly products of MS2 coat protein and MS2 RNA as a function of coat-protein concentration. We characterize the assembly products using three techniques: gel electrophoresis, dynamic light scattering (DLS), and transmission electron microscopy (TEM). In comparison to the previous study,<cit.> in which protein was in large excess relative to RNA, our study examines a much wider range of coat-protein concentrations, including ones near the stoichiometric ratio of coat protein to RNA. Furthermore, the three-pronged experimental approach allows us to corroborate results and test hypotheses about how the assembly products form.
Gel electrophoresis and TEM provide qualitative data that we use to determine the size and structure of the assembly products, and DLS provides quantitative information about their size distributions. With these methods, we show that as the coat-protein concentration increases, the morphologies transition from well-formed VLPs to monster capsids to RNA-protein condensates consisting of large networks of RNA and protein. These results are summarized in Fig. <ref> and discussed in more detail in Section <ref>. With the insights provided by simulations of coarse-grained models of capsomers and RNA, we explain these results in terms of a nucleation-and-growth pathway for capsid assembly. § RESULTS §.§ Overview of experimental approach Briefly, our experimental procedure consists of combining 50 nM MS2 RNA with purified MS2 coat-protein dimers at concentrations ranging from 2.5 to 30 μM (see Section <ref> for full details). For reference, a full VLP has an icosahedral capsid with a triangulation number of 3 (T=3), corresponding to 180 coat proteins or 90 coat-protein dimers. At 50 nM RNA concentration, a coat-protein dimer concentration of 5 μM therefore corresponds approximately to the stoichiometric ratio of coat proteins to RNA in a full VLP. We work with dimer concentrations instead of monomer concentrations because MS2 coat proteins are thought to be dimerized in solution. <cit.> After mixing the RNA and coat protein, we then wait 10 min to allow assembly to occur, after which we add RNase to digest any excess MS2 RNA that is not encapsidated. We then characterize the resulting assembly products with gel electrophoresis, DLS, and TEM (see Section <ref>). §.§ Results from gel electrophoresis We first qualitatively characterize the size and composition of the assembly products using agarose gel electrophoresis. We use both ethidium stain to detect RNA and Coomassie stain to detect coat protein in our samples. For comparison, we also characterize wild-type MS2, MS2 RNA, and digested MS2 RNA (see Section <ref>). The most striking feature of the gel is a band that runs at the same position as wild-type MS2 but with a brightness that increases from 2.5 to 7.5 μM coat-protein dimers and then suddenly decreases at 8.7 μM (see highlighted region in Fig. <ref>). We interpret this increase and sudden decrease as follows. Near the stoichiometric ratio (approximately 5 μM dimers to 50 nM RNA), well-formed VLPs assemble, with more VLPs forming at higher protein concentration. Above 7.5 μM, the sharp decrease in brightness indicates that far fewer well-formed MS2 VLPs assemble. Instead, as indicated by the spreading of the band toward to the upper part of the gel, the assembly products at dimer concentrations greater than 7.5 μM are larger than the wild-type particles. These assembly products appear in both gels in Fig. <ref>, indicating that they contain both RNA and protein. We also see that at coat-protein dimer concentrations higher than 7.5 μM, the intensity of the diffuse band increases with increasing concentration (Fig. <ref>). The increase in brightness and change in the center position of this band suggest that the amount of large assembly products increases at the expense of the wild-type-sized products. At 15 μM, the diffuse band no longer overlaps with the band corresponding to wild-type-size VLPs. For dimer concentrations beyond 15 μM, some of the assembly products are so large that they are trapped near the top of the agarose gel. 
The transition from a bright to a diffuse band might represent a transition from well-formed VLPs to either malformed structures or aggregates of capsids. The gels by themselves cannot confirm either hypothesis, since they reveal only that the assembly products all contain RNA and that they increase in size with increasing coat-protein concentration. We therefore turn to dynamic light scattering and transmission electron microscopy experiments, as described below. §.§ Results from dynamic light scattering To quantify the sizes of the assembly products, we use DLS with numerical inversion methods. These methods yield the size distributions of assembly products in both number and volume bases (see Section <ref>). At coat-protein dimer concentrations 7.5 μM and below, we observe in both the number and volume distribution a peak at or near the size of wild-type MS2 particles (see shaded bands in Fig. <ref>; we expect some variation in the location of this peak because the inversion of the autocorrelation function is sensitive to noise). This peak is accompanied by peaks at larger sizes, unlike the size distribution for wild-type MS2, which consists of only one peak. At coat-protein dimer concentrations above 7.5 μM, the peak corresponding to size of wild-type MS2 particles decreases until it disappears (in the volume-basis distributions) at 12.5 μM. At concentrations of 15 and 20 μM, we observe a single peak corresponding to much larger assembly products. Overall, we observe that the average size of the assembly products increases with increasing protein concentration (Fig. <ref>). The DLS data support our interpretation of the gel-electrophoresis data. Specifically, both the DLS and gel data show that the proportion of VLPs with sizes corresponding to the wild-type size decreases with concentration above 7.5 μM, whereas only larger products form at high concentration. The DLS data additionally show that the size of these larger products is on the order of several hundred nanometers. However, the DLS data also show peaks corresponding to particles larger than wild-type at concentrations less than 10 μM. We do not see evidence of such particles in the gel data. These peaks may correspond to weakly-bound clusters of well-formed MS2 VLPs that are observable in the DLS experiments but fall apart during gel electrophoresis (see Fig. <ref>). Because DLS does not provide any structural information, we turn to TEM to test this hypothesis and characterize the structures of the assembly products. §.§ Transmission electron microscopy (TEM) experiments TEM images of negatively stained samples show that most of the assembly products at dimer concentrations 7.5 μM and below are well-formed MS2 VLPs (Figs. <ref> and <ref>), with some malformed VLPs and clusters of MS2 VLPs, consistent with the larger sizes present in the DLS-derived size distributions. At a concentration of 10 μM, we observe malformed particles that consist of partially formed capsids. These structures are similar to the so-called “monster” particles observed in turnip-crinkle-virus assemblies <cit.> and, more recently, in MS2 assembly experiments. <cit.> At concentrations above 15 μM we observe what appear to be large aggregates of partially formed capsids (Figs. <ref> and <ref>). These structures are micrometer-sized, comparable to the sizes seen in the DLS distributions (Fig. <ref>). 
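As an aside on the DLS analysis used above, the conversion between intensity-, volume-, and number-weighted size distributions can be sketched as follows. This is an illustration only, not necessarily the inversion procedure used in this work (which is described in Section <ref>): reweight_distribution is a hypothetical helper, and the Rayleigh-regime weighting (scattered intensity scaling as d^6) is our assumption; it breaks down for the larger, several-hundred-nanometer products, for which Mie corrections would be needed.

```python
# Illustrative sketch: reweighting an intensity-weighted DLS size distribution
# (obtained from a regularized inversion of the autocorrelation function) into
# volume and number bases, assuming Rayleigh-regime scattering (I ~ d^6).
import numpy as np

def reweight_distribution(diameters_nm, intensity_weights):
    d = np.asarray(diameters_nm, float)
    w_int = np.asarray(intensity_weights, float)
    w_vol = w_int / d ** 3          # intensity ~ (volume in bin) * d^3  =>  volume basis
    w_num = w_int / d ** 6          # intensity ~ (number in bin) * d^6  =>  number basis
    return w_vol / w_vol.sum(), w_num / w_num.sum()

# A bimodal intensity distribution with peaks near the wild-type size (~30 nm) and near
# several hundred nanometers shifts strongly toward the small-size peak in the number
# basis, because large particles dominate the scattered intensity.
```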
§ DISCUSSION Our measurements show that coat-protein concentration plays an important role in the morphology of the assembly products of MS2 RNA and coat protein. At low coat-protein dimer concentrations (less than 7.5 μM), gel electrophoresis, DLS, and TEM all point to the formation of MS2 VLPs that are of the same size as wild-type MS2. These structures appear to be well-formed, consistent with previous studies. <cit.> At higher concentrations (between 7.5 and 10 μM), we observe monster particles consisting of a few partial capsids. At even higher concentration (12.5 μM), results from gel electrophoresis, DLS, and TEM point to the formation of large structures several hundred nanometers in size and containing many partial capsids. Whereas the observation of well-formed VLPs and even monsters is consistent with previous studies on MS2, the observation of large structures at high protein concentrations has not, to our knowledge, been studied in detail. Large structures have been observed in the assembly of viral coat proteins around functionalized gold nanoparticles, but these structures are found at low protein concentrations. <cit.> In other viruses, large aggregates have been observed under conditions of strong interactions. <cit.> Here, however, the formation of the large structures occurs at the same buffer conditions (apart from coat-protein concentration) as those used to assemble well-formed VLPs. The large structures are interesting not only because they contain many partially formed capsids, but also because they contain RNA, as shown by our gel electrophoresis measurements. Because the structures contain both RNA and protein, we term them “condensates.” Below, we consider several hypotheses that might explain the formation of the condensates, with the aim of understanding what they reveal about the assembly pathway of the virus. One hypothesis is that the condensates arise primarily by aggregation of coat proteins. However, gel electrophoresis, DLS, and TEM experiments show no evidence of coat-protein aggregation in the absence of RNA, even at 15 μM dimer concentration. In addition, gel electrophoresis data at high coat-protein concentrations show that the condensates contain both RNA and coat protein. While such structures might arise if the aggregation of the coat proteins were rapid, trapping the RNA inside, the absence of aggregation of coat protein at high concentrations is evidence against this hypothesis. Another hypothesis is that the RNA-protein condensates arise from an en masse pathway, <cit.> in which the interactions between the coat proteins and RNA are strong compared to the inter-protein interactions. In this scenario, coat proteins would first decorate the RNA, potentially leading to a heteroaggregate of RNA and protein. This scenario would account for the presence of RNA in the condensates. However, it is at odds with the observation of a nucleation-and-growth pathway <cit.> at lower concentrations. If nucleation and growth happens at low concentrations, we expect that increasing the protein concentration should not cause a transition to an en masse pathway but should instead primarily change the nucleation rate. We therefore consider the hypothesis that a nucleation-and-growth pathway is operative at all coat-protein concentrations, and that this pathway drives condensate formation at high concentrations. 
At low concentrations, where the assembly products are well-formed VLPs, our study provides no direct evidence for this pathway, but as noted above, previous direct imaging measurements have shown that the assembly is nucleated. <cit.> The nucleation-and-growth pathway does, however, account for the monster particles seen at intermediate protein concentrations. These structures, which consist of multiple partial capsids, can form when more than one nucleation event happens on a single RNA strand; indeed, we expect that the probability of multiple nucleation events should increase with the coat-protein concentration. The question then is how the condensates form with the RNA trapped inside. To understand whether and how a nucleated pathway might lead to such condensates, we turn to simulations. We perform coarse-grained patchy-particle simulations in which the capsomers are represented as patchy hard disks, and the RNA is represented as a free polymer with a length approximately 14 times the diameter of a fully formed capsid (see Fig. <ref> and Section <ref>), such that each polymer can be encapsidated by 12 capsomers. Although the experimental system is more complex – specifically, an MS2 VLP consists of 90 coat-protein dimers, yielding a T=3 structure, and MS2 RNA can adopt intricate secondary and tertiary structures – the simulation is designed to test the hypothesis that nucleation and growth can lead to a condensate. To this end, we tune the interactions so that the assembly is nucleated, as seen in Fig. <ref>. Whereas at low capsomer concentrations the simulations show the assembly of well-formed capsids containing polymer, at high capsomer concentrations they show the assembly of large networks of polymers and partial capsids, just as in the experiments. Interestingly, the simulations show that these networks consist of multiple polymer strands that are bridged by networks of partial capsids (Fig. <ref>). A partial capsid attached to one polymer molecule can connect, through other capsomers, to a partial capsid attached to a different polymer molecule. This observation provides a plausible explanation for why the condensates seen in the experiments can grow to be so large even in the absence of significant coat-protein aggregation: the coat proteins may be able to bridge partial capsids on different RNA molecules. § CONCLUSIONS Our experiments and simulations show that all the morphologies we observe as a function of coat-protein concentration – well-formed capsids, monster capsids, and RNA-protein condensates – can be understood as outcomes of a nucleation-and-growth process (Fig. <ref>). Other hypotheses, including coat-protein aggregation and en masse assembly, do not account for all of our results. In a nucleation-and-growth assembly pathway, the primary effect of increasing the coat-protein concentration is to increase the nucleation rate. If the growth rate depends more weakly on concentration than does the nucleation rate, we can explain the formation of all the structures we observe as follows. If the timescale of nucleation is short compared to the time for a nucleus to grow into a full capsid, multiple nuclei can form on the same RNA strand. When these nuclei grow, they do not in general form a closed capsid, but instead form partial capsids. At moderate concentrations, these partial capsids remain disconnected as they grow, leading to monster capsids. 
At high concentrations, coat proteins can bridge partial capsids on different RNA molecules, leading to the formation of large RNA-protein condensates. There remain a few questions to be resolved in future studies. One question is how the RNA is spatially distributed in the condensates, and specifically whether the bridging mechanism observed in the simulations is operative in the experiments. Another question is what happens at concentrations between those at which well-formed capsids form and monster capsids form. DLS and TEM experiments suggest that at these intermediate protein concentrations, small clusters of well-formed capsids are present. The driving force for the formation of these clusters is not clear, but they might arise when a single RNA molecule spawns multiple nuclei that each form a full (or nearly full) capsid. In this situation, the RNA would connect the capsids into a “multiplet” structure. <cit.> It is still not clear why the DLS measurements show evidence for small clusters of well-formed capsids but the gel data do not. Fluorescence microscopy experiments could help resolve this question and the aforementioned ones as well. Our work might also inform models of the assembly pathway, particularly those based on the law of mass action, <cit.> in which the concentration of coat proteins plays a critical role. Further experiments that quantify how the nucleation rate depends on the coat-protein concentration would help connect these models to the morphological observations we present here. From a more practical perspective, our work helps establish constraints on concentration for the production of MS2 VLPs. Such VLPs are used to encapsulate materials for drug delivery <cit.> and to display epitopes for vaccines. <cit.> § METHODS AND MATERIALS All materials were used as received. Buffers were prepared as follows: * Assembly buffer: 42 mM Tris, pH 7.5; 84 mM NaCl; 3 mM acetic acid, 1 mM EDTA * TNE buffer: 50 mM Tris, pH 7.5; 100 mM NaCl, 1 mM EDTA * TE buffer: 10 mM Tris, pH 7.5; 1 mM EDTA * TAE buffer: 40 mM Tris-acetic acid, pH 8.3; 1 mM EDTA §.§ Virus growth, cultivation, and storage We purify wild-type bacteriophage MS2 as described by Strauss and Sinsheimer. <cit.> In brief, we grow MS2 virus particles by infecting E. coli strain C3000 in minimal LB Buffer, and we remove E. coli cell debris by centrifugation at 16700g for 30 min. We then use chloroform (warning: hazardous; use in fume hood) extraction to purify the solute containing the virus. We extract the purified virus particles by density gradient centrifugation in a cesium chloride gradient. We store the purified virus at 4 °C at a concentration of 10^11 plaque-forming units (pfu) in Tris-NaCl-EDTA or TNE buffer (50 mM Tris, 100 mM NaCl, 5 mM EDTA) at pH 7.5. We determine the concentration of virus by UV-spectrophotometry (NanoDrop 1000, Thermo Scientific) using an extinction coefficient of 8.03 mL/mg at 260 nm. §.§ Coat-protein purification and storage We purify MS2 coat-protein dimers following the method of Sugiyama, Herbert, and Hartman. <cit.> Wild-type bacteriophage MS2 is suspended in glacial acetic acid (warning: hazardous; use in fume hood with appropriate personal protective equipment) for 30 min to denature the capsid, separate it into protein dimers, and precipitate the RNA. We then centrifuge the sample at 10000g and collect the supernatant, which contains coat-protein dimers.
We filter out the glacial acetic acid with 20 mM acetic acid buffer through 3-kDa-MWCO sterile centrifugal filters (Millipore Sigma, UFC500324) five times. This process removes the glacial acetic acid to prevent further denaturing of the coat-protein dimers. We then determine the concentration of our coat-protein dimers by measuring the absorbance with the Nanodrop Spectrophotometer (Thermo Fisher) at 280 nm. We store the MS2 coat protein at 4 °C in a 20 mM acetic acid buffer. We measure the absorbance at 260 nm to detect residual RNA. In our experiments, we use only purified protein with an absorbance ratio (protein:RNA) above 1.5 to avoid RNA contamination. §.§ RNA purification and storage We purify wild-type MS2 RNA using a protocol involving a Qiagen RNeasy Purification Kit Mini (Qiagen, 7400450). We take 100 μL of MS2 stored in TNE buffer and mix with 350 μL of buffer RLT (a lysis buffer) to remove the coat-protein shell. We add 250 μL of ethanol to our sample and mix to precipitate the RNA. We then transfer our sample to a 2 mL RNeasy Mini spin column (provided by the Qiagen Purification Kit) that is placed in a collection tube. We then centrifuge at 10000g for 15 s and discard the flow-through. We add 500 μL of buffer RPE (to remove traces of salts) to the spin column and centrifuge for 15 s at 10000g. We discard the flow-through. We then add 500 μL of buffer RPE once more to the spin column and centrifuge for 2 min at 10000g. We place the spin column upside down in a fresh 1.5 mL collection tube (provided in the purification kit) to collect the RNA trapped in the spin column. We add 50 μL of TE buffer to the spin column and centrifuge at 10000g for 1 min to collect the RNA. We measure the RNA concentration using a Nanodrop spectrophotometer by measuring the absorbance at 260 nm and using an extinction coefficient of 25.1 mL/mg. We store the purified MS2 RNA at -80°C in Tris-EDTA (TE) buffer at neutral pH (7.5). §.§ RNA and coat-protein bulk assembly experiments For assembly experiments, we mix wild-type MS2 RNA genome at a concentration of 50 nM with varying concentrations of MS2 coat-protein dimers ranging from 2.5 μM to 30 μM. We leave the mixtures at room temperature (21 °C) for 10 min. Afterward, we add 10 ng of RNase A to the sample and wait 30 min. We then characterize the assembled virus-like particles using gel electrophoresis, dynamic light scattering (DLS), and transmission electron microscopy (TEM). §.§ Gel electrophoresis and analysis For gel electrophoresis experiments, we mix 15 μL of sample with 4 μL of glycerol and load into a 1% agarose gel in assembly buffer consisting of 5 parts Tris-NaCl-EDTA (TNE) buffer (50 mM Tris, 100 mM NaCl, 10 mM EDTA, pH 7.5) to 1 part 20 mM acetic acid buffer. We use Ethidium Bromide (EtBr; warning: hazardous; use in fume hood with appropriate personal protective equipment) to stain the RNA and to detect the presence of MS2 RNA. We use Coomassie Blue R-250 to detect the presence of MS2 coat protein. The combination of these staining methods allows us to confirm the presence of both MS2 RNA and MS2 coat protein within the resulting assemblies. We place three control samples in lanes 2 through 4 that include MS2 RNA at 50 nM concentration (lane 2), wild-type MS2 at 50 nM concentration (lane 3), and 50 nM concentration of digested MS2 RNA genome (lane 4) resulting from the addition of RNase A. These controls allow us to compare the sizes of our assembly products to systems of known sizes.
We can also determine whether the samples consist of MS2 VLPs formed during assembly or excess strands of MS2 RNA. We place our assembly products in lanes 6 through 19. These samples are loaded and run at 21 °C at 100 V for 40 min and visualized using a Biosystems UV Imager (Azure, AZ1280). §.§ Dynamic light scattering (DLS) and analysis We use dynamic light scattering (Malvern ZetaSizer Nano ZS by Malvern Panalytical) to determine the size distribution of particles that assemble at 50 nM MS2 RNA concentration and coat-protein dimer concentrations of 2.5, 5, 7.5, 10, 12.5, 15, and 20 μM. In each case the samples are treated with RNase as described previously. We also characterize the wild-type virus for comparison. We determine the size distributions using the regularization inversion method provided by the instrument software. <cit.> §.§ Transmission electron microscopy (TEM) and analysis For transmission electron microscopy, we negatively stain samples that have been assembled in bulk at coat-protein dimer concentrations of 5, 7.5, 10, 15, and 20 μM and treated with RNase A. We stain with 2% aqueous uranyl acetate (warning: hazardous; use with appropriate personal protective equipment) on 200 mesh carbon-coated copper TEM grids (Polyscience, TEM-FCF200CU), then image with a Hitachi 7800 TEM located at the Center for Nanoscale Systems at the Science and Engineering Complex (CNS-SEC) at Harvard University. Images are taken at 20, 50, and 100 kV. As a control, we mix 15 μM MS2 coat-protein dimers in assembly buffer. This control is done to ensure that capsid-like or VLP-like structures do not form in the absence of MS2 RNA. §.§ Coarse-grained model for capsid assembly We developed a patchy particle model for the capsomers interacting with a polymer chain, which was used to model the RNA, to investigate their assembly. A capsid is constructed from 12 subunits, each having C_5v symmetry, where the center of each subunit sits on the vertex of an icosahedron.<cit.> §.§.§ Capsomer-Capsomer Interactions. We coarse-grain the capsomeric building blocks as oblate hard spherocylinders (OHSCs) decorated with five identical circular patches conforming to C_5v symmetry. See Fig. <ref> for a schematic illustration of the model capsomer. For hard oblate spherocylinders, which were previously used as a model system to investigate the phase behavior of discotic liquid crystals, <cit.> the surface is defined by the points at a distance L/2 from an infinitely thin disc of diameter σ, giving the particle a total diameter D = σ + L and thickness L. Note that an OHSC particle, comprising a flat cylindrical core and a toroidal rim, has a uniaxial symmetry, whose orientation can be described by a unit vector normal to the central disc, 𝐞̂. The aspect ratio of the OHSC particle is then given by L^*=L/D. The pair interaction between two OHSC particles i and j, with positions of the center of mass 𝐫_i and 𝐫_j and orientations 𝐞̂_i and 𝐞̂_j, respectively, is infinite if the shortest distance between their central discs is less than L, and zero otherwise: v^ohsc_ij(𝐫_ij,𝐞̂_i,𝐞̂_j) = ∞ if d_ij<L 0 otherwise, where 𝐫_ij = 𝐫_i - 𝐫_j and d_ij is the shortest distance between the central discs for particles i and j. We compute this shortest distance using the algorithm outlined in Ref. . 
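The disc-disc distance is the only nontrivial ingredient of this hard-core test. As a rough illustration (not the exact algorithm cited above), the shortest distance between the two central discs can be approximated by discretizing each disc, as in the Python sketch below; the sampling density and the example geometry are arbitrary choices.

import numpy as np

def sample_disc(center, normal, sigma, n=400, seed=0):
    """Sample points uniformly over a flat disc of diameter sigma (approximation)."""
    rng = np.random.default_rng(seed)
    normal = normal / np.linalg.norm(normal)
    u = np.cross(normal, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:              # normal was parallel to x
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    r = 0.5 * sigma * np.sqrt(rng.random(n))  # uniform in area
    phi = 2 * np.pi * rng.random(n)
    return center + r[:, None] * (np.cos(phi)[:, None] * u + np.sin(phi)[:, None] * v)

def ohsc_overlap(r_i, e_i, r_j, e_j, sigma, L, n=400):
    """Approximate hard-core test for two oblate hard spherocylinders:
    overlap if the shortest disc-disc distance is below the thickness L."""
    pts_i = sample_disc(np.asarray(r_i, float), np.asarray(e_i, float), sigma, n)
    pts_j = sample_disc(np.asarray(r_j, float), np.asarray(e_j, float), sigma, n, seed=1)
    dmin = np.min(np.linalg.norm(pts_i[:, None, :] - pts_j[None, :, :], axis=-1))
    return dmin < L

# Example: two face-to-face capsomers closer than the rim thickness overlap
print(ohsc_overlap([0, 0, 0], [0, 0, 1], [0, 0, 0.3], [0, 0, 1], sigma=1.0, L=0.5))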
We model the interactions between the circular patches by adapting the Kern-Frenkel potential, <cit.> where the interactions between a pair of circular patches are described by a square-well attraction modulated by an angular factor corresponding to the relative orientations between the patches. The angular factor is unity only when the patches are oriented such that the vector connecting the centers of the two particles passes through both the patches on their surfaces, and zero otherwise. The width of the square well, δ_cap, determines the range of the attraction between the patches relative to the particle diameter. The depth of the square well, ε_cap, governs the strength of the attractions. The size of the patches is characterized by a half-angle θ. An additional parameter φ defines the inclination of the plane that contains the centers of the patches to the plane of the central cylindrical core. The total pair potential defining capsomer-capsomer interactions is then v^cap_ij(𝐫_ij,𝐞̂_i,𝐞̂_j) = v^ohsc_ij(𝐫_ij,𝐞̂_i,𝐞̂_j) + ∑^5_α,β v^sw,cap_αβ(𝐫_αβ)f(𝐫_αβ,𝐧̂_i,α,𝐧̂_j,β), where r_ij=|𝐫_ij| is the center-to-center distance between particles i and j, 𝐧̂_i,α is a unit vector defining the orientation of patch α on particle i (similarly, 𝐧̂_j,β is a unit vector corresponding to patch β on particle j), and 𝐫_αβ is the separation vector between the centers of patches α and β. The term v^sw,cap_αβ is a square-well potential: v^sw,cap_αβ(r_αβ) = -ε_cap if r_αβ≤(1+δ_cap)σ 0 otherwise, and f(𝐫_αβ,𝐧̂_i,α,𝐧̂_j,β) is the angular modulation factor, f(𝐫_αβ,𝐧̂_i,α, 𝐧̂_j,β) = 1 if 𝐧̂_i,α·𝐫̂_αβ>cosθ and 𝐧̂_j,β·𝐫̂_βα>cosθ 0 otherwise. The reference orientation of particle i is such that the normal to the flat face of the oblate spherocylinder is aligned with the z-axis of the global coordinate frame. We then define the reference position of the first patch on particle i as 𝐩_i,1=(σ/2,0,0) and the position of each other patch as a rotation about the z-axis of the local coordinate frame of the particle such that 𝐩_i,n=𝐑_ψ·𝐩_1, where 𝐑_ψ is a rotation matrix defining a clockwise rotation of angle ψ=n2π/5 about 𝐞̂_i with n=2,3,4,5. The orientation of patch α on particle i is then 𝐧̂_i,α=sin(φ)𝐞̂_i+(2cos(φ)/σ)𝐩_α, where φ is the angle between 𝐧̂_i,α and the plane containing the flat face of the oblate spherocylinder. §.§.§ Polymer-Polymer Interactions. Each RNA molecule is modeled as a flexible self-avoiding polymer – that is, as a chain of hard-spheres, where neighboring beads in the chain are connected by a harmonic spring: <cit.> v_poly(r_ij) = κ(r_ij-σ_bl_b)^2, where r_ij is the distance between beads i and j (where j=i-1,i+1), κ sets the strength of the harmonic spring, σ_b is the hard-sphere diameter of the beads in the polymer chain, and l_b is a dimensionless parameter setting the equilibrium bond length between neighboring beads. §.§.§ Capsomer-Polymer Interactions. We allow for interaction between the capsomers and the polymer via an attractive patch on the surface of the capsomer. The orientation of the patch is aligned with that of the oblate spherocylinder. The beads of the polymer and the capsomer then interact via an attractive square-well interaction, plus a hard-core repulsion between their respective cores. 
The pair interaction when particle i is a capsomer and particle j is a bead of a polymer chain is v^cap-pol_ij(𝐫_ij,𝐞̂_i) = v^hc_ij(𝐫_ij, 𝐞̂_i) + v^sw,cap-pol_ij(𝐫_ij)g(𝐫_ij,𝐞̂_i), where v^hc_ij is the hard-core interaction v^hc_ij(𝐫_ij,𝐞̂_i) = ∞ if d_ij<(L+σ_b)/2 0 otherwise, where d_ij is the shortest distance between the capsomer and polymer bead. We compute this distance by first computing the projection of the polymer bead onto the plane spanned by the cylindrical core of the capsomer: 𝐫^proj,i_j=𝐫_ij-(𝐫_ij·𝐞̂_i)𝐞̂_i. Then if r^proj,i_j≤σ/2, the bead lies over the cylindrical core of the capsomer, so the shortest distance vector between the two particles is 𝐝_ij=𝐫_ij-𝐫^proj,i_j. Otherwise, the closest point of the capsomer to the bead lies on its edge. The shortest distance vector between the two particles is then 𝐝_ij=𝐫_ij-(σ/2)𝐫̂^proj,i_j. The term v^sw,cap-pol_ij is the square-well interaction between the patch on the face of the capsomer and the polymer bead: v^sw,cap-pol_ij(𝐝_ij) = -ε_cap-pol if d_ij≤(1+δ_cap-pol)σ 0 otherwise, and g(𝐫_ij,𝐞̂_i) is the angular modulation factor for the attractive capsomer-polymer interaction: g(𝐫_ij,𝐞̂_i) = 1 if cos^-1(𝐫_ij·𝐞̂_i/r_ij) < π/2 0 otherwise. §.§ Monte Carlo simulations We carry out two sets of Monte Carlo simulations in the NVT ensemble using the model outlined above. For both simulations, we set the volume to be V=600000σ^3, the reduced temperature to be k_BT/ε_cap=0.12 (where k_B is the Boltzmann constant, which is taken to be equal to one), and the number of polymer chains N_poly=30, with each polymer chain consisting of l_poly=150 beads. In one simulation, there are N_cap=360 capsomers, and in the other, there are N_cap=1500 capsomers. The total number of particles is then N=N_polyl_poly+N_cap. We take σ to be the unit of length and ε_cap to be the unit of energy. We then choose the parameters defining the system to be L=0.5σ, δ_cap=0.2, θ=25^∘, φ=25^∘, κ=100ε_cap, σ_b=0.2σ, l_b=1.05σ_b, ε_cap-pol=0.2ε_cap, and δ_cap-pol=0.3σ. The geometry of the patches on the capsomers is chosen to ensure that the particles can stabilize a capsid-like structure where 12 subunits are fully connected and sit on the vertices of an icosahedron. The choice of the aspect ratio of the OHSC particles ensures that the cavity of a properly formed capsid can accommodate cargo of a reasonable size. In turn, the length of each polymer chain is chosen to be as long as possible with the constraint that it still fit inside a capsid made of 12 capsomers. Additionally, the strength of the polymer-capsomer interactions is chosen to ensure that capsid growth proceeds through a nucleated pathway. We carry out all Monte Carlo simulations with systems contained in a cubic box under periodic boundary conditions, using the minimum image convention. Each capsomer is treated as a rigid body for which the orientational degrees of freedom are represented by quaternions. The potential energy is calculated using a spherical cutoff of 1.7σ, and a cell list is used for efficiency. Each Monte Carlo cycle consists of N translational or rotational single-particle or cluster moves, chosen at random with equal probabilities. § AUTHOR CONTRIBUTIONS All authors conceived the research goals and aims. LAW and AN curated data and code. LAW and AN analyzed the results, with assistance from VNM and DC. RFG, DC, and VNM obtained funding. LAW performed the experiments and AN performed the simulations. All authors developed methods. RFG, DC, and VNM administered the project. 
AN developed the computer software for the simulations. RFG, DC, and VNM supervised the project. LAW and AN performed validation studies. LAW and AN prepared visualizations, edited by VNM. LAW, AN, and VNM prepared the original draft. All authors reviewed and edited the submitted work. § CONFLICTS OF INTEREST There are no conflicts to declare. § ACKNOWLEDGEMENTS We thank Amy Barker and Peter Stockley at the University of Leeds for initial stocks of MS2 and E. coli cells. We thank Tim Chiang, Amelia Paine, Aaron Goldfain, and Danai Montalvan for helpful scientific discussions. This research was partially supported by a National Science Foundation (NSF) Graduate Research Fellowship under grant number DGE-1745303, by NSF through the Harvard University Materials Research Science and Engineering Center under NSF grant number DMR-2011754, by the National Institute of General Medical Sciences of the National Institutes of Health under grant numbers K99GM127751 and R00GM127751, by the NSF-Simons Center for Mathematical and Statistical Analysis of Biology at Harvard University under NSF grant number 1764269, and by the Harvard Quantitative Biology Initiative. AN, VNM, and DC gratefully acknowledge support from the Institute of Advanced Studies of the University of Birmingham and the Turing Scheme. This work was performed in part at the Harvard University Center for Nanoscale Systems (CNS), a member of the National Nanotechnology Coordinated Infrastructure Network (NNCI), which is supported by the National Science Foundation under NSF grant number ECCS-2025158. The work was also performed in part at the Harvard University Bauer Core Facility. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
http://arxiv.org/abs/2307.05747v2
20230708141455
Integrating Curricula with Replays: Its Effects on Continual Learning
[ "Ren Jie Tee", "Mengmi Zhang" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CV" ]
Humans engage in learning and reviewing processes with curricula when acquiring new skills or knowledge. This human learning behavior has inspired the integration of curricula with replay methods in continual learning agents. The goal is to emulate the human learning process, thereby improving knowledge retention and facilitating learning transfer. Existing replay methods in continual learning agents involve the random selection and ordering of data from previous tasks, which has been shown to be effective. However, limited research has explored the integration of different curricula with replay methods to enhance continual learning. Our study takes initial steps in examining the impact of integrating curricula with replay methods on continual learning in three specific aspects: the interleaved frequency of replayed exemplars with training data, the sequence in which exemplars are replayed, and the strategy for selecting exemplars into the replay buffer. These aspects of curricula design align with cognitive psychology principles and leverage the benefits of interleaved practice during replays, easy-to-hard rehearsal, and an exemplar selection strategy involving exemplars from a uniform distribution of difficulties. Based on our results, these three curricula effectively mitigated catastrophic forgetting and enhanced positive knowledge transfer, demonstrating the potential of curricula in advancing continual learning methodologies. Our code and data are available: <https://github.com/ZhangLab-DeepNeuroCogLab/Integrating-Curricula-with-Replays> § INTRODUCTION Continual learning enables consecutive task acquisition without forgetting previously trained tasks <cit.>. This adaptability is vital for autonomous systems in dynamic environments, such as updating a grocery classification model with new products without retraining it on previous products. However, a significant challenge in continual learning is catastrophic forgetting, where knowledge from recent tasks interferes with earlier ones <cit.>, leading to performance degradation on earlier tasks after training on a task sequence. To resolve this problem, there are three primary types of continual learning methods commonly employed in the field: regularization-based methods introduce regularization terms to mitigate catastrophic forgetting by preserving important parameters during training <cit.>; rehearsal-based methods store and replay a subset of previous data during training to maintain knowledge from previous tasks <cit.>; and parameter isolation methods isolate specific parameters for each task to prevent interference between tasks <cit.>. Rehearsal-based methods have proven highly effective in continual learning. However, existing approaches typically involve randomly selecting and rehearsing data from previous tasks. Limited research explores the incorporation of meaningful curricula into replay methods. In parallel, in the curriculum learning literature, various approaches have focused on weakly supervised <cit.>, unsupervised <cit.>, and reinforcement learning tasks <cit.>. These studies demonstrate that curricula improve generalization abilities, task performances, and convergence speed <cit.> during training.
However, they primarily address intra-class difficulty and example scheduling within a single task, neglecting the impact of class presentation sequences across multiple tasks. Recent research has explored curricula in continual learning scenarios without data replays <cit.>. In complement to this work, our study investigates the role of curricula specifically during replay in continual learning, while keeping the curricula consistent for the feed-forward training process. Exploring optimal curricula offers countless possibilities, and in our study, we take initial steps to investigate a limited set of potential curricula. We draw inspiration from two sources to guide the design of these curricula. Firstly, neuroscience research has revealed that neural activity patterns associated with past experiences are replayed in specific orders during rest or sleep, which is believed to contribute to memory consolidation and spatial navigation <cit.>. Secondly, pedagogy studies indicate that repetitive practice and revisiting previous knowledge with increasing difficulty enhance long-term memory integration in students <cit.>. Specifically, we propose three types of curricula for replays and examine their impact on catastrophic forgetting and positive knowledge transfer: (1) the interleaved frequency of replayed exemplars with training data, (2) the replay sequence of exemplars, and (3) the strategy for selecting exemplars into the replay buffer. The experimental findings align with cognitive psychology principles, highlighting the advantages of frequently interleaving between training data and replayed exemplars, incorporating easy-to-hard rehearsals, and selecting exemplars from a uniform distribution of difficulties for replay. These observations present a promising avenue for advancing continual learning methods. It also provides insights into the underlying mechanisms of replay strategies in mitigating forgetting and facilitating knowledge transfer across tasks. § RELATED WORKS §.§ Replay Methods in Continual Learning Extensive research has focused on utilizing replay methods to address the issue of catastrophic forgetting. Conventional replay methods, such as iCaRL <cit.> and ER <cit.>, involve explicit training on previously saved data, while several variants, like DGR <cit.> and Pseudo-Recursal <cit.>, replay on artificially synthesized samples by generative models, resembling data from previous tasks. Although these replay methods have made significant contributions in reducing catastrophic forgetting, they paid little attention to the incorporation of meaningful curricula into replay methods. Most methods randomly interleave the replay samples with the training data, without exploring the optimal mixing strategies <cit.>. In our work, we systematically studied the effect of interleaving curricula, which involves mixing training data and replay samples within a pre-defined interleave interval. §.§ Curriculum Learning Curriculum learning methods can be broadly categorized into two groups. The first group involves manual curriculum design by humans before training <cit.>, but these methods typically rely on human expertise and struggle to generalize to new domains. The second group consists of models that can autonomously design curricula without human intervention <cit.>. However, the application of these methods to enhance model performance has received limited attention in the continual learning setting. 
Here, we highlight two factors to consider when applying curricula on the replay methods in continual learning. Firstly, while curriculum learning has demonstrated efficacy in enhancing generalization and training speed within a single task, the objective of curriculum learning in the context of continual learning is to retain knowledge from previous tasks while acquiring new knowledge from the current task. Secondly, unlike within-task curriculum learning, models in continual learning only have access to data from the current task, making it challenging to create a comprehensive between-task curriculum that encompasses the entire dataset. Here, we took initial steps in this direction by exploring automated methods to determine the sequence of replay samples and introducing the sample selection strategy which finds the best replay samples for building a curriculum. § EXPERIMENTS We investigated the effect of three types of replay curricula in the class incremental learning (CIL) setting. We first introduce CIL, and then elaborate on the three replay curricula individually. Problem Setting. The objective of CIL is to teach a unified classification model Θ to recognize sets of object classes incrementally over time. Specifically, an image dataset D, consisting of N object classes, is split into subsets {D_1,...,D_t,...,D_T} of images and presented over a sequence of T tasks. In each task t, the model only has access to training data in D_t, consisting of samples from distinct classes C_t, and (x_i,t,y_i,t) is the i-th (image, label) pair in D_t. The model Θ can run multiple passes over D_t in task t. The model stops training on D_t after its performance on the validation set saturates, considering the five most recent epochs. We implemented the naive replay method where some raw images and their corresponding labels are selected from previous tasks and are stored in the replay buffer R_t. These data in R_t are interleaved with D_t for rehearsals. There are three types of replay curricula involved in this study: (1) the interleave frequency; (2) the rehearsal sequence of R_t in CIL; and (3) the image selection for R_t. R_t is kept at a constant size of 1200 over all the tasks. See Appendix for more training details. As an upper bound, we also include the offline method where the model Θ is trained on the entire dataset {D_1,...,D_T} over multiple epochs without any continual learning. Datasets. We conducted experiments to investigate the use of these three types of curricula in replay methods on the two image datasets ciFAIR-10 and ciFAIR-100 <cit.>. ciFAIR-10 dataset contains 10 object classes. The protocol asks the model Θ to incrementally learn 2 object classes in each task. There are a total of 5 tasks. ciFAIR-100 dataset contains 100 object classes. The CIL protocol asks the model Θ to incrementally learn 5 object classes in each task. There are a total of 20 tasks. Both datasets have a total of 60,000 images, with 50,000 images used for training and 10,000 images used for testing. The conclusions drawn from the experiments on both datasets are consistent. Without loss of generality, we focus on all the experiments and result analysis in ciFAIR-100 in the main text. See Appendix for more implementation details and results on ciFAIR-10. Evaluation Metrics. To assess the continual learning performance of the model Θ, we follow <cit.> and introduce 2 standard evaluation metrics.
We define Forgetfulness (F) as the percentage decrease in classification accuracy on the test instances from C_1 between Θ_t (the model after being trained on D_t) and Θ_1. An ideal Θ_t could maintain the same classification accuracy on C_1 over tasks; i.e. ∀ t, F_t=0. The higher F is, the more Θ suffers from catastrophic forgetting. To assess the overall classification performance of Θ over tasks, we also report the continual average classification accuracy (Avg. Accu.). Avg. Accu. is computed as the average accuracy on all the test instances from C_i, where i∈{1, 2, ..., t}. For simplicity, we report the averaged F and Avg. Accu. over all the tasks. Experimental Controls. Within each experiment, only one variable of interest changes while the rest of the experiment conditions are fixed as control variables. As the previous study has shown that the sequence of class presentations affects the continual learning performance <cit.>, we use the same class presentation sequence in all three experiments. The same MobileNetV3 (small) network architecture is used as the backbone for the model Θ for all experiments. In every experiment, the total number of training samples and the total number of replay samples exposed to Θ remain the same across all experiment variables. Each experiment is conducted with 4 runs initialized with 4 random seeds, where the seeds are used to vary the controlled variables. The average performance across all 4 runs is reported. §.§ Interleave Divisions during Rehearsals The number of interleaving divisions refers to the number of splits of D_t and R_t. It indicates how often the model Θ rehearses on R_t, while learning on a subset of D_t. For example, for interleaving division number 400, D_t is split into 400 groups where each group contains an equal number of (x_i,t,y_i,t) (image, label) pairs, and these (image, label) pairs are randomly selected from D_t without replacement. Correspondingly, R_t is also split into 400 groups with the same splitting criteria as D_t. At each training epoch, the model Θ_t at task t is repeatedly trained with one group of D_t followed by one group of R_t, until the entire D_t and R_t are exhaustively seen by Θ_t. We titrate the interleave division numbers with the range of 1, 8, 60, 120, and 300. The training data is interleaved with replay data and then presented to the model in sequence. Different interleave division numbers result in different data presentation sequences; hence, different curricula. §.§ Rehearsal Sequence of Replay Samples We use the interleave divisions 1 and 600 for all the experiments in this subsection and vary the rehearsal sequence of data samples in R_t by taking into account the two factors: the sample difficulty levels and the increasing or decreasing directions of sample difficulty levels. To measure whether a sample is easy or hard to learn, we introduce two difficulty measures: (1) the confidence score difficulty metrics and (2) the distance vector difficulty metrics. The confidence score difficulty metrics were used to assess whether a teacher network with full knowledge of the entire dataset D predicted high or low confidence of the given sample belonging to its ground truth class label. Specifically, each image within R_t was input to a teacher network. The teacher network is based on a MobileNetV3 (small) architecture, pre-trained on the entire dataset D. After this, the confidence score for the ground truth class of each sample was extracted from the teacher network’s output.
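A minimal PyTorch sketch of this confidence-score step is given below; the checkpoint path and the replay-buffer format are illustrative assumptions rather than the exact released code.

import torch
from torchvision.models import mobilenet_v3_small

# Assumed setup: a teacher pre-trained on the full dataset D (placeholder checkpoint),
# and `replay_buffer` as an iterable of (image_tensor, label) pairs.
device = "cuda" if torch.cuda.is_available() else "cpu"
teacher = mobilenet_v3_small(num_classes=100).to(device)
teacher.load_state_dict(torch.load("teacher_cifair100.pt", map_location=device))
teacher.eval()

@torch.no_grad()
def confidence_scores(replay_buffer):
    """Softmax probability that the teacher assigns to each sample's true class;
    a higher score marks an easier sample."""
    scores = []
    for image, label in replay_buffer:
        logits = teacher(image.unsqueeze(0).to(device))
        scores.append(torch.softmax(logits, dim=1)[0, label].item())
    return scores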
R_t was then sorted according to its individual sample’s confidence score, where a higher confidence score means that the sample is easier to learn for Θ. However, in CIL setting, having a teacher network with full access to the whole dataset is impractical, as the data is incrementally available over tasks. Hence, we employed the distance vector difficulty metrics, used widely in literature <cit.>. Intuitively, if the sample is closer to other samples in the memory buffer, it is easier for Θ to learn and generalize to other samples as well. The 2nd last layer from a ResNet-50 model <cit.>, pretrained on the ImageNet dataset, was used to extract the feature vector of each sample in R_t. A Euclidean distance matrix was created, where the pairwise Euclidean distance for all the samples based on their feature vectors was computed. We then compute the sum of each row of the matrix and denote this column vector as a distance vector. Each element in this distance vector represents how much a particular sample differs from all other samples in the feature space. A smaller value in the distance vector means that the particular replay sample is easier to learn for Θ. We introduce a series of rehearsal sequences in the orders of either easy-to-hard samples or hard-to-easy samples, where the difficulty levels of each sample are determined by either the confidence score difficulty metrics or the distance vector difficulty metrics. As the previous study has shown that the class orders are also essential for continual learning <cit.>, here we also explore the effect of the class orders during replays. When we design the rehearsal sequence based on class difficulties in R_t, we adapt the two sample-level difficulty measures above to compute class-level difficulty measures by taking the average over all samples of the same class in R_t. We then sort all the samples in R_t by their class difficulty metrics, regardless of their individual sample difficulty scores. Samples in R_t sorted by their class difficulties were then presented to the model Θ in either the easy-to-hard or hard-to-easy orders. §.§ Selection of Samples for Replay Buffer In common practice, selecting samples for R_t+1 from task t is often conducted in a random manner <cit.>. In contrast to the previous works, we vary the sample selection criteria for R_t+1 as follows: selecting only the easiest samples from task t for R_t+1, selecting the hardest samples from task t for R_t+1, and selecting samples that are uniformly distributed across difficulty levels from task t for R_t+1. The difficulty levels are judged based on the confidence scores and the distance vectors defined in the previous subsection. We use interleave division numbers 1 and 600 for all the experiments in this subsection. § RESULTS §.§ Frequent Replays Enhance Performances We report F and Avg. Accu. as a function of interleave divisions in Table <ref>. Notably, we observed that interleave divisions are important factors influencing the continual learning performance of the replay method with the larger interleave divisions leading to better performances, as indicated by the decreasing F and increasing Avg. Accu. over all the tasks. It is possible that the model parameters at large division numbers are updated more frequently for both the current task and all previous tasks, resulting in minimal forgetting. However, we also note that the continual learning performance saturates at interleave division number 120. 
This implies that increasing interleave divisions beyond optimal values brings no extra benefits in continual learning. §.§ Easy-To-Hard Rehearsal Sequences are Beneficial We studied the models trained with different rehearsal sequences sorted in easy-to-hard or hard-to-easy curricula based on sample-level or class-level difficulty measures computed from either the confidence scores or distance vectors. We reported the Avg. Accu. results in Figure <ref> and F scores in the Appendix and made four key observations. First, aligning with the observations in Table <ref> and the discussion from the previous subsection, large interleave divisions benefit continual learning models with higher average accuracy and less forgetting. Second, rehearsal sequences sorted by instance-level difficulties lead to much better continual learning performances (compare red bars versus blue bars). Third, the confidence score is a better evaluation metric measuring instance-level difficulties, as shown by the bars with and without texture patterns. Finally, the models trained with the easy-to-hard rehearsal sequences outperform the ones with reversed rehearsal sequences (compare light versus dark grey bars). It is possible that easy-to-hard rehearsal sequences make the models converge faster on the previous tasks due to more stable gradient updates; hence, the sequences lead to minimal forgetting and higher classification accuracy. We also compared the continual learning performance for both the offline method and the continual learning method with various curricula and observed that there still exists a large performance gap between these two. §.§ Replays with Only Hard Data Hurt Performances Here, we explored the effect of different sample selection strategies for replay samples in terms of the sample difficulty levels based on distance vectors or confidence scores. From Figure <ref>, our observations indicate that exclusively choosing the most challenging replay samples leads to inferior performance compared to selecting the easiest samples or incorporating samples with a balanced distribution of difficulty levels. Selecting samples with a uniform distribution of difficulty levels yields the best continual learning performance. This outcome may be attributed to the fact that difficult replay samples result in less flat loss landscapes, which in turn make the training process more challenging and slower to converge <cit.>. A curriculum for training the models to rehearse from the easiest to the hardest samples is the best, as it balances the greater precision in data fitting due to the hardest samples and the fast convergence speed during training due to the easier samples. Similar to the previous subsection, we also noted that the confidence score is a better measure of sample difficulty levels than the distance vectors. § CONCLUSION Our study examines the role of curricula during replays in the class-incremental learning setting in continual learning. We designed and conducted a series of controlled experiments to study the three key questions on replays: how often is the replay, what data should be replayed, and in what sequence to replay. Across the two common image datasets, our experimental results shed light on the underlying principles of replay methods in continual learning and reveal good curricula design choices for replay methods.
These curricula designs not only facilitate positive knowledge transfers (which has been explored in existing curriculum learning literature), but also mitigate catastrophic forgetting (a significant problem we need to solve in continual learning). Specifically, we found that (1) replays should happen frequently; (2) only rehearsing on the most difficult exemplars hurts continual learning performances; and (3) rehearsals on samples with increasing difficulty eliminate forgetting more than its reversed difficulty orders. There are numerous other possible choices of curricula designs for replay methods, such as a unified difficulty metric considering both confidence scores and distance vectors or the use of a student feedback loop to update the difficulty scores. In the future, we will look into the role of curricula under stringent continual learning conditions, such as learning with limited training time or noisy data. We will also conduct experiments on other large-scale datasets and apply our replay curriculum to existing replay-based continual learning algorithms. § ACKNOWLEDGEMENTS This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2021-025), its NRFF award NRF-NRFF15-2023-0001, Mengmi Zhang's Startup Grant from Agency for Science, Technology, and Research (A*STAR), and Early Career Investigatorship from Center for Frontier AI Research (CFAR), A*STAR. The authors declare that they have no competing interests. § APPENDIX §.§ Experimental Details For experiments on both ciFAIR-10 and ciFAIR-100, PyTorch’s default implementation of cross entropy loss was used for object classification tasks. The SGD algorithm was used as the optimizer. The learning rate was set at a constant of 0.001. Momentum was fixed at 0.9. A batch size of 32 is used. For ciFAIR-10, we employ a 2-layer 2D-convolutional network with 6 and 16 channels in the successive layers, followed by 3 fully connected layers with 400, 120 and 84 hidden units respectively. ReLU was used as the activation function. We follow the standard training and testing data splits from the original ciFAIR-10. In every task, the model is trained for 250 epochs. Each experiment is conducted with 20 runs initialized with 20 random seeds, where the seeds are used to vary the controlled variables. The average performance across all 20 runs is reported. For ciFAIR-100, PyTorch's implementation of MobileNetV3 (small) was used, including the default layers and activation function. We used a custom training, validation, and test data splits with a ratio of 9:1:2, and a stopping criteria for training depending on the validation loss. The ciFAIR-100 images were upscaled to 72x72 using PyTorch's bicubic interpolation function before training. §.§ More Results and Analysis We reported the continual learning performance on ciFAIR-10 dataset of the models trained with the three types of curricula as elaborated in Experiments Section. See Table <ref> for interleave divisions, Figure <ref> for rehearsal sequences, and Figure <ref> for sample selections. All the tables and figures on ciFAIR-10 dataset follow the same design conventions as the corresponding tables and figures on ciFAIR-100 dataset in the main text. The conclusions from the results of ciFAIR-10 dataset are consistent with the ones on the ciFAIR-100 dataset.
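To make the interleaving curriculum described in the Experiments section concrete, the following Python sketch shows one way to split the task data D_t and the replay buffer R_t into k groups and alternate between them within an epoch; it is an illustrative reconstruction under assumed data structures (plain lists of samples), not the authors' released code.

import random

def interleave(D_t, R_t, k):
    """Split D_t and R_t into k groups each and alternate one group of new-task
    data with one group of replayed exemplars (one training epoch's stream)."""
    D_t, R_t = list(D_t), list(R_t)
    random.shuffle(D_t)
    random.shuffle(R_t)
    d_groups = [D_t[i::k] for i in range(k)]   # k roughly equal random groups
    r_groups = [R_t[i::k] for i in range(k)]
    schedule = []
    for d_grp, r_grp in zip(d_groups, r_groups):
        schedule.extend(d_grp)                 # train on a slice of the new task...
        schedule.extend(r_grp)                 # ...then rehearse a slice of the buffer
    return schedule

# k = 1 rehearses the whole buffer once per epoch; larger k mixes new data and
# replayed exemplars more frequently.
epoch_stream = interleave(range(500), range(120), k=4)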
http://arxiv.org/abs/2307.05408v1
20230711161841
Enhanced diffusion of tracer particles in non-reciprocal mixtures
[ "Anthony Benois", "Marie Jardat", "Vincent Dahirel", "Vincent Démery", "Jaime Agudo-Canalejo", "Ramin Golestanian", "Pierre Illien" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.stat-mech" ]
Sorbonne Université, CNRS, Physico-Chimie des Électrolytes et Nanosystèmes Interfaciaux (PHENIX), 4 Place Jussieu, 75005 Paris, France Sorbonne Université, CNRS, Physico-Chimie des Électrolytes et Nanosystèmes Interfaciaux (PHENIX), 4 Place Jussieu, 75005 Paris, France Sorbonne Université, CNRS, Physico-Chimie des Électrolytes et Nanosystèmes Interfaciaux (PHENIX), 4 Place Jussieu, 75005 Paris, France Gulliver, UMR CNRS 7083, ESPCI Paris PSL, 75005 Paris, France Université Lyon, ENS de Lyon, Université Claude Bernard, CNRS, Laboratoire de Physique, F-69342 Lyon, France Department of Living Matter Physics, Max Planck Institute for Dynamics and Self-Organization, D-37077 Göttingen, Germany Department of Living Matter Physics, Max Planck Institute for Dynamics and Self-Organization, D-37077 Göttingen, Germany Rudolf Peierls Centre for Theoretical Physics, University of Oxford, OX1 3PU, Oxford, UK Sorbonne Université, CNRS, Physico-Chimie des Électrolytes et Nanosystèmes Interfaciaux (PHENIX), 4 Place Jussieu, 75005 Paris, France We study the diffusivity of a tagged particle in a binary mixture of Brownian particles with non-reciprocal interactions. Numerical simulations reveal that, for a broad class of interaction potentials, non-reciprocity can significantly increase the effective diffusion coefficient of tracer particles, and that this diffusion enhancement is associated with a breakdown of the Einstein relation. These observations are quantified and confirmed via two different and complementary analytical approaches: (i) a linearized stochastic density field theory, which is particularly accurate in the limit of soft interactions; (ii) a reduced two-body description, which is exact at leading order in the density of particles. The latter reveals that diffusion enhancement can be attributed to the formation of transiently propelled dimers of particles, whose cohesion and speed are controlled by the non-reciprocal interactions. Enhanced diffusion of tracer particles in non-reciprocal mixtures Pierre Illien August 12, 2023 ================================================================= Introduction.— Intracellular functions are governed by the transport of ions, proteins, vesicles, or organelles, which are subject to strong thermal fluctuations, and which interact with each other through crowding, electrostatics and hydrodynamics. In theoretical approaches, such systems are typically represented by a suspension of interacting particles, embedded in a solvent that causes their stochastic motion. These particles generally evolve very far from equilibrium, and are `active' in the sense that they locally convert the chemical energy available in their environment into mechanical work. Even though a wealth of knowledge has been gathered on suspensions of single-species `polar' or self-propelled active particles <cit.>, the reality is much more complex: suspensions of biological interest are generally strongly heterogeneous, and made of particles without any established polarity on the considered timescales. Very recently, `scalar' models for active matter, where agents are apolar but whose nonequilibrium dynamics results in spontaneous symmetry breaking, have been developed. For instance, one can consider catalytic molecules, such as proteins or enzymes, that are involved in the production or consumption of smaller solute molecules. Each of them can be seen as a local source or sink responding to the chemical gradients created by the other particles. 
When coarse-graining the degrees of freedom associated with solute molecules, the effective interactions between particles appear to break action-reaction symmetry <cit.>, and should be modeled as non-reciprocal. This line of research has recently gained a lot of importance, and now goes well beyond the interest for active colloids, with applications ranging from the design of new field theories <cit.>, to the interpretation of active matter experiments <cit.>, and more generally phase transitions in nonequilibrium systems <cit.>. Interestingly, mixtures of particles with non-reciprocal interactions can be mapped onto multi-temperature suspensions – another class of scalar active matter that have received a lot of interest in the soft matter and biophysics communities <cit.>. This mapping was formally established for Newtonian dynamics <cit.>, and can be extended to stochastic overdamped dynamics <cit.>. The collective and structural properties of non-reciprocal mixtures have been studied rather extensively, revealing in particular their tendency to phase separate <cit.>. However, the properties of their fluctuations, as characterized by the dynamics of tracer particles (i.e. individually-tracked, tagged particles), have been left aside so far, in spite of their importance. Indeed, the properties of tagged particles generally contain key information about the microstructure of the suspension and its small-scale dynamics <cit.>. They are also of importance to quantify experiments that rely on single-particle tracking and allow accurate characterization of many intracellular processes <cit.>. In this Letter, we study the diffusivity of a tagged particle in a binary mixture of particles with non-reciprocal interactions obeying overdamped Langevin dynamics. Brownian dynamics simulations, together with two different analytical treatments of the stochastic dynamics, reveal that non-reciprocity can significantly increase the effective diffusion coefficient of tracer particles. We measure the non-reciprocal contribution to its diffusivity: D_eff= D_recip + Δ D_non-recip, and we show that, strikingly, this diffusion enhancement is associated with a breakdown of the Einstein relation, which does not hold in this nonequilibrium case. More precisely, the effective mobility can be written as μ_eff =D_recip / T + Δμ_non-recip, with the non-reciprocal correction being generally different from Δ D_non-recip/ T. We finally show that diffusion enhancement can be attributed to the formation of transiently propelled dimers of particles, whose cohesion and speed are controlled by the non-reciprocal interactions. Model.— We consider a three-dimensional binary suspension of N+1 interacting particles, made of N_A (resp. N_B) particles of species A (resp. B), and one tracer particle (labeled 0), that can either be of type A or of type B. We denote by ρ_α=N_α/V (α=A or B) the number density of each species (excluding the tracer), where V is the volume of the system. The overall density of bath particles is ρ = N/V, and X_α=ρ_α/ρ is the fraction of α particles. We assume that each particle obeys an overdamped Langevin dynamics, in such a way that the evolution of the system is given by the set of coupled equations: d𝐫_n/dt = μ_S(n)∑_m≠ n𝐅_S(m) → S(n) (𝐫_n-𝐫_m) + √(2 D_S(n))ζ_n(t), where S(n)∈{A,B} denotes the species of particle n, and 𝐅_β→α (𝐫) denotes the force exerted by a particle of species β on a particle of species α when the latter is located at 𝐫 relative to the former.
The bare diffusion coefficient of a particle of species α is related to the mobility μ_α through the Einstein relation D_α = T μ_α, where T is the temperature of the thermal bath in which the particles are embedded. For simplicity, we will assume that all the particles have the same mobility μ_0. The noise terms ζ_n(t) have zero average and are delta-correlated: ⟨ζ_n,i(t) ζ_m,j(t')⟩ = δ_nm δ_ij δ(t-t'). Importantly, we assume that the interactions between particles of different species can be non-reciprocal, namely that F_{A→B}(r) ≠ -F_{B→A}(-r). In order to probe the existence of enhanced diffusion in such a suspension, we compute the effective diffusion coefficient of the tracer particle, defined as D_eff = lim_{t→∞} ⟨[r_0(t) - r_0(0)]^2⟩/(6t). For simplicity, we write the forces as deriving from potentials (or `pseudo-potentials'): F_{α→β}(r_β - r_α) = -∇_{r_β} ϕ_{α→β}(|r_α - r_β|). Note that we thus focus on curl-free (gradient) force fields. With this definition, the pseudo-potentials correspond to regular pair potentials when α=β, but not otherwise. The interactions between species can be defined through a matrix with elements Φ_αβ = ϕ_{α→β}, which is split between a symmetric (reciprocal) and an antisymmetric (non-reciprocal) part: Φ = [ ϕ_{A→A}, ϕ_{AB}^R ; ϕ_{AB}^R, ϕ_{B→B} ] + [ 0, -ϕ_{AB}^NR ; ϕ_{AB}^NR, 0 ]. For concreteness, we assume that all the (α,β) pairs interact via a purely repulsive potential ϕ_rep(r), and that non-reciprocity is incorporated by assuming that the pseudo-potential ϕ_{A→B} contains an additional attractive part ϕ_att(r) (in the notations of Eq. (<ref>), this means that ϕ_{AB}^R = ϕ_rep + ϕ_att/2 and ϕ_{AB}^NR = ϕ_att/2).
Numerical evidence for enhanced diffusion.— We first present results from Brownian dynamics simulations, which consist in integrating the set of coupled overdamped Langevin equations [Eq. (<ref>)] using a forward Euler-Maruyama scheme <cit.>. We consider different types of binary mixtures and the corresponding pair potentials, which represent a broad range of physical situations (for each system, the expressions of ϕ_rep and ϕ_att are given in Table <ref>): (i) suspensions of hard particles with short-range repulsion given by the Weeks-Chandler-Andersen potential and long-range Lennard-Jones attraction <cit.>; (ii) particles with softcore interactions, modeled by a `Gaussian' potential, which are relevant to describe the interactions between polymer coils <cit.>; and (iii) Yukawa interactions, which represent screened Coulomb interactions or may arise from `chemical interactions' between diffusiophoretic colloids <cit.> (the ranges λ and λ' of the Yukawa-like ϕ_rep and ϕ_att are chosen such that ϕ_{A→B} has an attractive part). For all these systems, the energy parameters ε and δ represent respectively the strength of the repulsive and attractive parts of the potentials. When non-reciprocity is very strong, the suspension may be unstable and phase separate – this effect was for instance evidenced in suspensions of colloids with chemically-mediated <cit.> or LJ-WCA <cit.> interactions. However, we emphasize that all our simulations are performed in the homogeneous regime, where the non-reciprocal mixture does not display any phase separation. In each of these systems, we measure numerically the diffusion coefficient of tagged B particles, as summarized in Fig. <ref>. For all three sets of simulations, diffusion is enhanced as non-reciprocity increases (a similar effect is observed for tagged A particles <cit.>).
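To make the integrator sketched above concrete, the snippet below builds the ordered-pair force table for a softcore (Gaussian) mixture: every ordered pair interacts through the repulsive ϕ_rep(r) = ε exp(-r²/2σ²), while the A→B pseudo-potential additionally carries the attractive part ϕ_att = -δ ϕ_rep, i.e. it is scaled to (1-δ)ϕ_rep, with δ measuring non-reciprocity. This matches the single-bath choice used analytically later in the Letter; the specific ε, σ, δ values are illustrative assumptions and the exact Table <ref> parametrization may differ.

```python
import numpy as np

def gaussian_pair_force(eps, sigma):
    """Force -dphi/dr along rvec for phi(r) = eps * exp(-r^2 / (2 sigma^2))."""
    def F(rvec):
        r2 = np.dot(rvec, rvec)
        # -phi'(r) * rvec/r = (eps/sigma^2) * exp(-r^2/2sigma^2) * rvec
        return (eps / sigma**2) * np.exp(-r2 / (2.0 * sigma**2)) * rvec
    return F

def softcore_force_table(eps=1.0, delta=0.5, sigma=1.0):
    """Ordered-pair forces for the softcore mixture: phi_rep for every pair,
    and phi_{A->B} = (1 - delta) * phi_rep (attractive part added to A->B only)."""
    rep = gaussian_pair_force(eps, sigma)
    a_to_b = gaussian_pair_force((1.0 - delta) * eps, sigma)
    return {('A', 'A'): rep, ('B', 'B'): rep,
            ('B', 'A'): rep, ('A', 'B'): a_to_b}
```

This table plugs directly into the simulate() sketch above, e.g. simulate(softcore_force_table(delta=0.8), species).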
In the particular case of the softcore potentials, the relative enhancement can reach values as high as 20%. The radial distribution functions reveal a strong pairing between A and B particles <cit.>, which is interpreted as a consequence of the `predator-prey' dynamics that emerges from non-reciprocal interactions: B particles chase A particles while A tend to run away from B, resulting in enhanced dynamics at the scale of tagged particles – this effect will be described in the last section of the Letter. Breakdown of the Einstein relation.— In order to probe the validity of the Einstein relation in such mixtures, we measured the mobility of tagged particles, aiming at comparing it to the effective diffusion coefficient defined earlier. To this end, in the numerical simulations, we add a constant external force = f_x to the tagged particle, and measure its mobility defined as μ_eff = lim_f → 0⟨ v_x ⟩ /f, where ⟨ v_x ⟩ is the average velocity attained by the tagged particle along direction x, in the stationary limit. For an equilibrium system, the effective mobility of the tracer is expected to be related to its effective diffusion coefficient through the Einstein relation D_eff = T μ_eff. We compare in Fig. <ref> the effective diffusion coefficients and mobilities as measured from simulations: the increasing mismatch between their values as δ increases is a clear indication of the breakdown of the Einstein relation in this nonequilibrium situation. Analytical description in the limit of soft interactions.— In order to quantify these phenomena and to offer analytical insight, we coarse-grain the dynamics and define the density of bath particles of species α as ρ̂_α(,t) = ∑_n≠0,S(n)=αδ (_n(t)-), where the sum runs over all the particles of species α except the tracer (if of species α), so that the tracer is `taken out' of the definition of the densities <cit.>. Using Itô calculus <cit.>, and relying on the usual derivation proposed by Dean for a single-component fluid <cit.> and later extended to binary mixtures <cit.>, we obtain the coupled equations for the fields ρ̂_α: ∂_t ρ̂_α =√(2D_0)∇· [η_α√(ρ̂_α)] + D_0 ∇^2ρ̂_α -μ_0 ∇·[ ρ̂_α∑_β∈{A,B}_β→α∗ρ̂_β +ρ̂_α_ S(0)→α∗δ__0] with the space-dependent noise η_α,i (,t) of average zero, and with correlations ⟨η_α,i (,t)η_β,j(',t')⟩ = δ_αβδ_ijδ(-') δ(t-t'). In Eq. (<ref>), the symbol ∗ represents spatial convolution: (f∗ g)() = ∫' f(') g(-'), and we use the shorthand notation δ__0()=δ(-_0). The evolution of the tracer position is given by the overdamped Langevin equation, Eq. (<ref>), written for n=0. Although explicit, this joint description of the tracer-bath dynamics is quite complicated, as it involves nonlinear couplings and multiplicative noise. The dynamics of the fields can be solved perturbatively by linearizing around the homogeneous state of density ρ_α, and assuming |ρ̂_α - ρ_α| ≪ρ_α <cit.> . To treat the nonlinear coupling between the fields and the position of the tracer _0(t), we rely on a path-integral formulation <cit.> that we recently extended to the case of a binary mixture <cit.>. 
We finally reach an expression for the effective diffusion coefficient of the tracer particle as a Fourier integral: D_eff/D_0 = 1-∑_α,β,γ ∈{A,B}√(X_α X_γ)∫_0^∞ q/6π^2ρ q^2 ϕ̃_α→ S(0)(q) ×[ C_S(0)→γ^αβγϕ̃_S(0)→γ(q) +C_γ→ S(0)^αβγϕ̃_γ→ S(0)(q) ] where the tildes represent Fourier transforms, and where the functions C_S(0)→γ^αβγ(q) and C_γ→ S(0)^αβγ(q) are given in SM <cit.> in terms of the densities of each species, their interaction potentials, and their mobilities. Eq. (<ref>) is one of the main analytical results of this Letter, and several comments follow: (i) Up to a numerical integration, the effective diffusion coefficient is obtained as an explicit expression in terms of all the parameters of the problem; (ii) In our formalism, one can actually find a more general expression of the effective diffusion coefficient, for cases in which A and B particles have different mobilities and are connected to different thermostats <cit.>; and (iii) This result should be understood as a perturbative expansion in the limit of weak interactions between the particles. Therefore, when compared to our numerical results for the different interaction potentials considered in Table <ref>, it is only valid for the softcore interactions: the agreement between our analytical theory and numerical simulations is very good [Fig. <ref>(b)]. Strikingly, it shows that this linearization procedure remains true even very far from equilibrium. In order to discuss some consequences of Eq. (<ref>), we consider a simpler situation, where the tagged particle is coupled in a non-reciprocal way to a single bath (we will assume that the probe is of species B and the bath particles of species A). We take ϕ_A→ A(r)=ϕ_B→ A(r)=v(r) and ϕ_A→ B(r)=(1-δ) v(r), in such a way that δ measures non-reciprocity, just like in our numerical simulations. In this case, the effective diffusion coefficient of the tagged particle has the simple expression D_eff= D_recip+D_0 ∫_ρδ(2+δρṽ)ṽ^2/3(1+ρṽ)(2+ρṽ)^2, where D_recip/D_0=1-ρ/3∫_ṽ^2/(1+ρṽ)(2+ρṽ), and where we use the shorthand notation ∫_ = ∫/(2π)^3. The effective mobility of the tracer can be computed by assuming that it is driven by a harmonic trap with vanishing stiffness, and by adapting earlier calculations <cit.>. We find that the effective mobility is given by μ_eff = D_recip/ T +μ_0 ∫_ρδ(3-δ+δρṽ)ṽ^2/3(1+ρṽ)(2+ρṽ)^2, which we compare to numerical data in Fig. <ref>(b). Therefore, this confirms the breakdown of the Einstein relation, which is only retrieved in the reciprocal case δ=0 and the trivial case δ=1, where the bath has no effect on the tracer. Low-density limit.— We finally consider the low-density limit of the problem, where it actually reduces to a two-particle situation: one of them is the tagged particle, the other one is a bath particle. To ease the notation, we will assume that the tracer, at position _0, is of species α, and the considered bath particle, at position _b, is of species β. The pair correlation of the tracer with the bath particle is obtained by solving the Smoluchovski equation for the two-body probability density P_αβ(_0,_b,t). Using the variables = _b-_0 and =(_0+_b)/2, it reads ∂_tP_αβ = 2D_0∇_^2 P_αβ+μ_0∇_·[P_αβ∇_( +) ] +D_0/2∇_^2 P_αβ+μ_0/2∇_ P_αβ·∇_(-). Integrating over the position of the center of mass , and defining (r)= [(r)+(r)]/2 T, we find the stationary solution of Eq. (<ref>): g_αβ^0(r) = exp[-(r)]. 
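The two single-bath expressions above, D_eff = D_recip + D_0 ∫ d³q/(2π)³ ρδ(2+δρṽ)ṽ²/[3(1+ρṽ)(2+ρṽ)²] and μ_eff = D_recip/T + μ_0 ∫ d³q/(2π)³ ρδ(3-δ+δρṽ)ṽ²/[3(1+ρṽ)(2+ρṽ)²], reduce to one-dimensional radial integrals and are straightforward to evaluate numerically. The hedged sketch below does so for a Gaussian (softcore) potential, whose Fourier transform is ṽ(q) = ε(2π)^{3/2}σ³ exp(-σ²q²/2), working in reduced units k_BT = μ_0 = D_0 = 1 (so that ṽ is measured in units of k_BT, as in the expressions above); the parameter values are illustrative only. It reproduces the stated limiting cases (Einstein relation recovered at δ = 0 and δ = 1) and exhibits D_eff ≠ Tμ_eff in between.

```python
import numpy as np
from scipy.integrate import quad

# Reduced units: k_B T = mu_0 = D_0 = 1. Illustrative parameter values only.
eps, sigma, rho, delta = 0.5, 1.0, 1.2, 0.8

def v_tilde(q):
    """3D Fourier transform of v(r) = eps * exp(-r^2 / (2 sigma^2))."""
    return eps * (2.0 * np.pi)**1.5 * sigma**3 * np.exp(-0.5 * (sigma * q)**2)

def radial(f):
    """int d^3q/(2 pi)^3 f(q) = (1 / 2 pi^2) * int_0^inf q^2 f(q) dq (isotropic f)."""
    val, _ = quad(lambda q: q * q * f(q) / (2.0 * np.pi**2), 0.0, np.inf)
    return val

D_recip = 1.0 - radial(lambda q: (rho / 3.0) * v_tilde(q)**2
                       / ((1.0 + rho * v_tilde(q)) * (2.0 + rho * v_tilde(q))))

D_eff = D_recip + radial(lambda q: rho * delta * (2.0 + delta * rho * v_tilde(q))
                         * v_tilde(q)**2
                         / (3.0 * (1.0 + rho * v_tilde(q)) * (2.0 + rho * v_tilde(q))**2))

mu_eff = D_recip + radial(lambda q: rho * delta * (3.0 - delta + delta * rho * v_tilde(q))
                          * v_tilde(q)**2
                          / (3.0 * (1.0 + rho * v_tilde(q)) * (2.0 + rho * v_tilde(q))**2))

print(f"D_recip = {D_recip:.4f}, D_eff = {D_eff:.4f}, T*mu_eff = {mu_eff:.4f}")
# For 0 < delta < 1 the two last numbers differ: the Einstein relation is broken.
```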
Interestingly, this is analogous to the simple equilibrium pair distribution in the low-density limit, but with the interaction potential taken as the average between the two non-reciprocal pseudo-potentials. The effective diffusion coefficient of the tracer, as well as its effective mobility, can be computed using standard methods <cit.>, and the comparison between these two observables shows that the Einstein relation does not hold in this limit either. This low-density approach also reveals that, if the effective potential (r) has a deep enough minimum, an αβ pair may form a `transient dimer' that remains bound for some time. Indeed, the dynamics of their center of mass can be read from the effective equation of motion (t) = μ_0/2∇(-)+√(D_0)(t), where (t) is a Gaussian white noise with unit variance. If the interaction is non-reciprocal, the first term on the r.h.s. is non-zero and represents a self-propulsion term, which depends solely on the inner variable (t). The characteristics of self-propulsion depend on the shape of the potentials: (i) If the minimum of (r) is at r=0 and the potentials behave as (r)∼ r^2/2 around r=0, then the dynamics of (t) is linear, (t) is an Ornstein-Uhlenbeck process, and the coordinate (t) therefore behaves as an Active Ornstein-Uhlenbeck particle <cit.>; (ii) On the contrary, if the minimum of (r) is at r^*>0, then the modulus of interparticle vector remains confined close to r^*, and one can define a self-propulsion velocity V_0 ≃D_0/2 ['(r^*)-'(r^*) ]: the coordinate (t) behaves as an Active Brownian Particle <cit.>, with a rotational diffusion coefficient D_r ≃ 2D_0/r^*^2. In our numerical simulations, when the overall density ρ is small enough, we observe that D^A_eff≃ D^B_eff, even for strong non-reciprocity <cit.>. In contrast, at higher densities, these two values differ more clearly. This supports the idea that, at low density, diffusion enchancement can be related to the pairing between A and B particles. This effect is reminiscent of the self-propelled dimers observed in very dilute suspensions of chemotactic colloids <cit.>. Discussion.— In this Letter, we showed that non-reciprocal interactions between Brownian particles could significantly enhance their diffusivity. Non-reciprocity, which plays a predominant role in the interaction between chemically active particles, is thus expected to have a significant impact on the efficiency of molecular transport, and on the kinetics of diffusion-limited reactions. These observations, together with the mapping between non-reciprocal and two-temperature mixtures <cit.>, open the way to the interpretation of the rich phenomenology of non-reciprocal and multi-temperature mixtures, and to the local structures that emerge from the local energy transfers at the microscopic scale <cit.>. In the biological context, we believe that these concepts could find their applications in elucidating the role played by ATP-fueled activity in the fluidization of the intracellular medium, and of its rheological properties <cit.>. Acknowledgments.— The authors acknowledge Roxanne Berthin for her assistance with the computational resources of PHENIX laboratory, and Rodrigo Soto for discussions. 
[Vicsek2012] T. Vicsek and A. Zafeiris, Collective motion, Phys. Rep. 517, 71–140 (2012).
[Marchetti2013] M. C. Marchetti, J. F. Joanny, S. Ramaswamy, T. B. Liverpool, J. Prost, M. Rao, and R. A. Simha, Hydrodynamics of soft active matter, Rev. Mod. Phys. 85, 1143 (2013).
[Cates2015] M. E. Cates and J. Tailleur, Motility-induced phase separation, Annu. Rev. Condens. Matter Phys. 6, 219 (2015).
[Bechinger2016] C. Bechinger, R. Di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, and G. Volpe, Active particles in complex and crowded environments, Rev. Mod. Phys. 88, 045006 (2016).
[Soto2014] R. Soto and R. Golestanian, Self-assembly of catalytically active colloidal molecules: tailoring activity through surface chemistry, Phys. Rev. Lett. 112, 068301 (2014).
[Soto2015] R. Soto and R. Golestanian, Self-assembly of active colloidal molecules with dynamic function, Phys. Rev. E 91, 052304 (2015).
[Agudo-Canalejo2019] J. Agudo-Canalejo and R. Golestanian, Active phase separation in mixtures of chemically interacting particles, Phys. Rev. Lett. 123, 018101 (2019).
[Nasouri2020] B. Nasouri and R. Golestanian, Exact phoretic interaction of two chemically active particles, Phys. Rev. Lett. 124, 168003 (2020).
[Saha2020] S. Saha, J. Agudo-Canalejo, and R. Golestanian, Scalar active mixtures: the nonreciprocal Cahn-Hilliard model, Phys. Rev. X 10, 041009 (2020).
[You2020] Z. You, A. Baskaran, and M. C. Marchetti, Nonreciprocity as a generic route to traveling states, Proc. Natl. Acad. Sci. U.S.A. 117, 19767–19772 (2020).
[Dinelli2022] A. Dinelli, J. O'Byrne, A. Curatolo, Y. Zhao, P. Sollich, and J. Tailleur, Non-reciprocity across scales in active mixtures, arXiv:2203.07757 (2022).
[Yu2018] T. Yu, P. Chuphal, S. Thakur, S. Y. Reigh, D. P. Singh, and P. Fischer, Chemical micromotors self-assemble and self-propel by spontaneous symmetry breaking, Chem. Commun. 54, 11933–11936 (2018).
[Grauer2020] J. Grauer, H. Löwen, A. Be'er, and B. Liebchen, Swarm hunting and cluster ejections in chemically communicating active mixtures, Sci. Rep. 10, 5594 (2020).
[Niu2018] R. Niu, A. Fischer, T. Palberg, and T. Speck, Dynamics of binary active clusters driven by ion-exchange particles, ACS Nano 12, 10932–10938 (2018).
[Meredith2020] C. H. Meredith, P. G. Moerman, J. Groenewold, Y. J. Chiu, W. K. Kegel, A. van Blaaderen, and L. D. Zarzar, Predator-prey interactions between droplets driven by non-reciprocal oil exchange, Nat. Chem. 12, 1136–1142 (2020).
[Fruchart2021] M. Fruchart, R. Hanai, P. B. Littlewood, and V. Vitelli, Non-reciprocal phase transitions, Nature 592, 363 (2021).
[Ganai2014] N. Ganai, S. Sengupta, and G. I. Menon, Chromosome positioning from activity-based segregation, Nucleic Acids Res. 42, 4145 (2014).
[Grosberg2015] A. Y. Grosberg and J.-F. Joanny, Nonequilibrium statistical mechanics of mixtures of particles in contact with different thermostats, Phys. Rev. E 92, 032118 (2015).
[Tanaka2016a] H. Tanaka, A. A. Lee, and M. P. Brenner, Hot particles attract in a cold bath, Phys. Rev. Fluids 2, 043103 (2017).
[Smrek2017] J. Smrek and K. Kremer, Small activity differences drive phase separation in active-passive polymer mixtures, Phys. Rev. Lett. 118, 098002 (2017).
[Ilker2020] E. Ilker and J.-F. Joanny, Phase separation and nucleation in mixtures of particles with different temperatures, Phys. Rev. Research 2, 023200 (2020).
[Wang2020] M. Wang and A. Y. Grosberg, Three-body problem for Langevin dynamics with different temperatures, Phys. Rev. E 101, 032131 (2020).
[Ivlev2015] A. V. Ivlev, J. Bartnick, M. Heinen, C. R. Du, V. Nosenko, and H. Löwen, Statistical mechanics where Newton's third law is broken, Phys. Rev. X 5, 011035 (2015).
[SM] Supplementary Material.
[Chiu2023] Y. J. Chiu and A. K. Omar, Phase coexistence implications of violating Newton's third law, J. Chem. Phys. 158, 164903 (2023).
[Mandal2022] R. Mandal, S. Salazar Jaramillo, and P. Sollich, Robustness of travelling states in generic non-reciprocal mixtures, arXiv:2212.05618 (2022).
[Hofling2013] F. Höfling and T. Franosch, Anomalous transport in the crowded world of biological cells, Rep. Prog. Phys. 76, 046602 (2013).
[Manzo2015] C. Manzo and M. F. Garcia-Parajo, A review of progress in single particle tracking: from methods to biophysical insights, Rep. Prog. Phys. 78, 124601 (2015).
[Louis2000] A. A. Louis, P. G. Bolhuis, and J. P. Hansen, Mean-field fluid behavior of the Gaussian core model, Phys. Rev. E 62, 7961 (2000).
[Likos2001] C. N. Likos, A. Lang, M. Watzlawek, and H. Löwen, Criterion for determining clustering versus reentrant melting behavior for bounded interaction potentials, Phys. Rev. E 63, 031206 (2001).
[Lang2000] A. Lang, C. N. Likos, M. Watzlawek, and H. Löwen, Fluid and solid phases of the Gaussian core model, J. Phys.: Condens. Matter 12, 5087–5108 (2000).
[Demery2014] V. Démery, O. Bénichou, and H. Jacquin, Generalized Langevin equations for a driven tracer in dense soft colloids: construction and applications, New J. Phys. 16, 053032 (2014).
[Gardiner1985] C. W. Gardiner, Handbook of Stochastic Methods (Springer, 1985).
[Dean1996] D. S. Dean, Langevin equation for the density of a system of interacting Langevin processes, J. Phys. A: Math. Gen. 29, L613 (1996).
[Poncet2016] A. Poncet, O. Bénichou, V. Démery, and G. Oshanin, Universal long ranged correlations in driven binary mixtures, Phys. Rev. Lett. 118, 118002 (2017).
[Demery2011] V. Démery and D. S. Dean, Perturbative path-integral study of active- and passive-tracer diffusion in fluctuating fields, Phys. Rev. E 84, 011148 (2011).
[Jardat2022] M. Jardat, V. Dahirel, and P. Illien, Diffusion of a tracer in a dense mixture of soft particles connected to different thermostats, Phys. Rev. E 106, 064608 (2022).
[Demery2019] V. Démery and É. Fodor, Driven probe under harmonic confinement in a colloidal bath, J. Stat. Mech. 2019, 033202 (2019).
[Lekkerkerker1984] H. N. W. Lekkerkerker and J. K. G. Dhont, On the calculation of the self-diffusion coefficient of interacting Brownian particles, J. Chem. Phys. 80, 5790–5792 (1984).
[Ilker2021] E. Ilker, M. Castellana, and J.-F. Joanny, Long-time diffusion and energy transfer in polydisperse mixtures of particles with different temperatures, Phys. Rev. Research 3, 023207 (2021).
[Szamel2014] G. Szamel, Self-propelled particle in an external potential: existence of an effective temperature, Phys. Rev. E 90, 012111 (2014).
[Romanczuk2012] P. Romanczuk, M. Bär, W. Ebeling, B. Lindner, and L. Schimansky-Geier, Active Brownian particles: from individual to collective stochastic dynamics, Eur. Phys. J. Spec. Top. 202, 1–162 (2012).
[Schwarcz2023] D. Schwarcz and S. Burov, Emergence of directed motion in a crowded suspension of overdamped particles, arXiv:2304.12724 (2023).
[Parry2014] B. R. Parry, I. V. Surovtsev, M. T. Cabeen, C. S. O'Hern, E. R. Dufresne, and C. Jacobs-Wagner, The bacterial cytoplasm has glass-like properties and is fluidized by metabolic activity, Cell 156, 183–194 (2014).
http://arxiv.org/abs/2307.04712v1
20230710172149
Machine learning potentials with Iterative Boltzmann Inversion: training to experiment
[ "Sakib Matin", "Alice Allen", "Justin S. Smith", "Nicholas Lubbers", "Ryan B. Jadrich", "Richard A. Messerly", "Benjamin T. Nebgen", "Ying Wai Li", "Sergei Tretiak", "Kipton Barros" ]
physics.app-ph
[ "physics.app-ph" ]
[email protected] Department of Physics, Boston University, Boston, Massachusetts 02215 Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546 Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87546 Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546 Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87546 Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546 NVIDIA Corp., Santa Clara, CA 95051, USA Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546 Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546 Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546 Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546 Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87546 Center for Integrated Nanotechnologies, Los Alamos National Laboratory, Los Alamos, New Mexico 87546 Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546 Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87546 Methodologies for training machine learning potentials (MLPs) to quantum-mechanical simulation data have recently seen tremendous progress. Experimental data has a very different character than simulated data, and most MLP training procedures cannot be easily adapted to incorporate both types of data into the training process. We investigate a training procedure based on Iterative Boltzmann Inversion that produces a pair potential correction to an existing MLP, using equilibrium radial distribution function data. By applying these corrections to a MLP for pure aluminum based on Density Functional Theory, we observe that the resulting model largely addresses previous overstructuring in the melt phase. Interestingly, the corrected MLP also exhibits improved performance in predicting experimental diffusion constants, which are not included in the training procedure. The presented method does not require auto-differentiating through a molecular dynamics solver, and does not make assumptions about the MLP architecture. The results suggest a practical framework of incorporating experimental data into machine learning models to improve accuracy of molecular dynamics simulations. Machine learning potentials with Iterative Boltzmann Inversion: training to experiment Kipton Barros August 12, 2023 ====================================================================================== § INTRODUCTION Molecular dynamics simulations are ubiquitous in chemistry <cit.> and materials modeling <cit.>. Several methods exist for calculating interatomic forces from first principles <cit.> underpinning ab initio molecular dynamics. The cost of such atomistic quantum-mechanical (QM) calculations grows rapidly with system size, and this limits the scale of molecular dynamics simulations. Machine learning potentials (MLPs) offer a path towards achieving the fidelity of QM calculations at drastically reduced cost <cit.>. MLP performance strongly depends on the quality of training data. 
Active learning is commonly used to ensure diversity of structural configurations and wide coverage of the relevant chemical space <cit.>. MLPs trained on active learned data tend to yield more stable molecular dynamics simulations <cit.>. MLPs have been successfully applied to predicting potential energy surfaces <cit.>, and have been extended to charges <cit.>, spin <cit.>, dispersion coefficients <cit.>, and bond-order quantities <cit.>. Training data sets are typically obtained with Density Functional Theory (DFT), which serves as reasonably accurate and numerically accessible reference QM approach. MLPs impose symmetry constraints (rotation, translation, permutation) and typically assume that energy can be decomposed as a sum of local atomic contributions (nearsightedness cutoff of order 10 ) but are otherwise extremely flexible, and may involve up to 10^5 fitting parameters <cit.>. This is in stark contrast to classical force fields <cit.>, which involve simple and physically-motivated functional forms, with fewer fitting parameters. A limitation of classical force fields is that they can be system-specific, and may require tuning or even re-fitting for new applications. In contrast to classical force-fields, MLPs trained to a sufficiently broad training dataset can exhibit remarkable accuracy and transferability <cit.>. A recent study of aluminum used an active learning procedure to train a MLP for bulk aluminum, without any hand-design of the training data <cit.>. The resulting model was capable of accurately reproducing low-temperature properties such as cold curves, defect energies, elastic constants and phonon spectra. Despite these successes, there is still room for improvement. The MLP predicts an overstructured radial distribution function (RDF) in the liquid phase <cit.> relative to experiment <cit.>, and the deviation grows with increasing temperature. Such overstructured RDFs have been previously reported for ab initio calculations <cit.>. This suggests that error in MLP-driven simulations may be due to limitations of the DFT reference calculations <cit.>, rather than training-set diversity. To test this hypothesis, in this work we verify that the overstructured RDFs appear for two distinct MLP architectures, providing evidence that the limitation is either in the DFT training data itself, or in a fundamental assumption of both MLP architectures [It seems possible that the nearsightedness assumption of traditional MLPs excludes information that would be important to make a determination about the electronic structure of the global many-body quantum state.]. Therefore, an important question is how to improve MLPs by explicitly including experimental liquid-phase data, such as RDFs, into the training procedure. While MLPs trained to large datasets of high-fidelity QM calculations <cit.> have seen explosive growth, training to experimental data remains underutilized <cit.> This is partly because there are well-established workflows, such as stochastic gradient descent, for training MLPs directly to their target output, i.e., the energy and forces of a microscopic atomic configuration. Such atomistic-level data cannot typically be accessed experimentally <cit.>, where measurements frequently provide information on quantities averaged over some characteristic length and time scales. Sparsity and frequently unknown errors (e.g. introduced by defects) in experimental data further complicate the problem. 
An alternative method for training to experimental observables would involve auto-differentiating through a molecular dynamics simulation that is used to measure statistical observables <cit.>. This direct automatic-differentiation approach may be impractical in various situations: it requires memory storage that grows linearly with the trajectory length, and will also exhibit exploding gradients when the dynamics is chaotic. To address the latter, Ref. thaler2021learning introduced the differentiable trajectory re-weighting method, which MLPs use re-weighting to avoid costly automatic differentiation when training. An alternative approach is the inverse modeling methods of statistical mechanics, which optimize microscopic interactions to match macroscopic time-averaged targets such as equilibrium correlation functions <cit.>. Thus targets obtained from experiments can be readily utilized by inverse methods <cit.>, which do not require a differentiable molecular dynamics solver. Inverse methods have been successfully applied to both fluid <cit.> and solid state targets <cit.> as well as designing systems with specific self-assembly objectives <cit.>. In particular, Iterative Boltzmann Inversion (IBI) is a popular inverse method, which optimizes an isotropic pair potential to match target RDF data <cit.>. In this paper, we use IBI to construct a corrective pair potential that is added to our MLP to match experimental RDF data. To highlight the generality of this approach, we report results for two distinct neural network models, namely the Accurate NeurAl networK engINe for Molecular Energies (ANI for short) <cit.>, which uses modified Behler-Parrinello Atom-Centered Symmetry Functions with nonlinear regression, and Hierarchically-Interacting-Particle Neural Network (HIP-NN) <cit.>, a message-passing graph neural network architecture. Trained to the same aluminum data set <cit.>, the two MLPs behave qualitatively similarly. They accurately reproduce low-temperature properties such as cold curves and lattice constants in the solid phase. In the liquid phase, however, both MLPs predict overstructured RDFs and underestimate diffusion in the liquid phase. To address these MLP errors, we use IBI to design temperature dependent pair potentials that correct the MLP, such that simulated RDFs match the liquid phase experimental targets. Although the IBI only trains to the RDF (a static quantity), the corrective pair potentials also improve predictions of the diffusion constant, which is a dynamical observable. We find that the IBI corrective potentials become smaller with temperature, which is consistent with the fact that the uncorrected MLP is already accurate at low temperatures. An MLP with a temperature-dependent corrective potential leverages both atomistic DFT data and macroscopic experimental training targets to achieve high accuracy at given temperatures. Future work might consider interpolating between corrective, temperature-dependent potentials to achieve high accuracy over a continuous range of a range of temperatures. § METHODS We train MLPs on the condensed phase aluminum data set from Ref. smith2021automated. The data set was generated using automated active learning framework, which ensures adequate coverage of the configurational space <cit.>. Active learning is an iterative procedure. At each stage, non-equilibrium molecular dynamics simulations are performed using the MLP under construction. 
A “query by committee” metric measures the disagreement between the predictions of an ensemble of MLPs to identify gaps in the training dataset. If an atomic configuration is identified for which there is large ensemble variance, then a new reference DFT calculation is performed, and resulting energy and forces are added to the training data. The final active learned dataset consists of about 6,000 DFT calculations, over a range of non-equilibrium conditions, with periodic boxes that contain between 55 and 250 aluminum atoms. The dataset is available online. <cit.>. Here, we use two different MLPs: ANI and HIP-NN. The ANI MLP <cit.> uses modified Behler-Parrinello atom-centered symmetry functions <cit.> to construct static atomic environment vectors from the input configurations. Feed-forward neural network layers map the atomic environment vectors to the output energy and forces predictions. HIP-NN uses a message passing graph neural network architecture <cit.>. In contrast to ANI, HIP-NN uses learnable atomic descriptors. Additionally, HIP-NN can use multiple message passing (interaction) layers to compute hierarchical contributions to the energy and forces <cit.>. Despite the striking differences between the two architectures, our results are consistent across both MLPs. This highlights the generality of this approach. The ANI and HIP-NN MLPs are trained from data that is available online <cit.>. The hyper-parameters for both ANI and HIP-NN are discussed in Appendix <ref>. HIP-NN achieves an out-of-sample root mean-squared error of 4.1 meV/atom for energy, comparable to the ANI MLP with error of 3.5 meV/atom <cit.>. Additionally, ANI and HIP-NN predict the ground state FCC lattice constants 4.054 and 4.037 respectively, which are consistent with the experimental value 4.046 ± 0.004 <cit.>. The lattice constants are computed using the Lava package <cit.>. After training, the molecular dynamics is performed using Atomic Simulation Environment (ASE) codebase <cit.>. We initialize the system with 2048 atoms with FCC lattice structure and use the NPT ensemble in all MD simulations with a time-step of 1 femtosecond. The initial melt and density equilibrations are performed for 200 ps. Then the RDFs are computed by averaging over 100 snapshots, each 0.1 ps apart. The RDF data is collected in bins of width 0.05 . § ITERATIVE BOLTZMANN INVERSION IBI builds a pair potential u(r) such that molecular dynamics simulations match a target RDF <cit.>. Distinct from previous works <cit.>, the present study uses IBI to generate a pair potential u(r) that is a correction on top of an existing MLP. Whereas the original MLP was trained to DFT energies and forces, the corrective potential is trained to experimental RDF data. The corrective potential is built iteratively. At each iteration t, an updated pair potential is calculated using the IBI update rule, u^t+1(r) = u^t(r) + α k_B T w(r) ln[g^t(r)/g_E(r)], which we have modified to include an arbitrary weight function w(r). We select a relatively small learning rate α=2×10^-4 which aids in the smoothness of the corrective potential. g_E(r) is the experimental RDF. g^t(r) is the simulated RDF, generated using the sum of the MLP and corrective potential u^t(r). At the zeroth generation there is not yet a correction, u^t=0=0, such that g^t=0 corresponds to the simulated RDF for the original MLP. For our numerical implementation of u(r), we use Akima splines <cit.>. 
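A single wIBI iteration is compact enough to sketch. The function below is an illustrative implementation, not the production workflow: it takes the simulated and experimental RDFs tabulated on a common radial grid, applies u^{t+1}(r) = u^t(r) + α k_BT w(r) ln[g^t(r)/g_E(r)] with w(r) = g_E(r), truncates the correction beyond 10 Å, and returns the updated correction together with an Akima spline representation. Grid handling and smoothing are simplified assumptions.

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator

def wIBI_update(r, u_old, g_sim, g_exp, kBT, alpha=2e-4, r_cut=10.0):
    """One weighted Iterative Boltzmann Inversion step:
        u_new(r) = u_old(r) + alpha * kBT * w(r) * ln(g_sim(r) / g_exp(r)),
    with w(r) = g_exp(r), so regions where the experimental RDF vanishes
    (small r, poorly constrained by experiment) are left untouched.

    r, u_old, g_sim, g_exp : 1D arrays on a common radial grid (Angstrom).
    Returns (u_new array, Akima spline of u_new).
    """
    w = g_exp.copy()
    # avoid log(0): only update where both RDFs are strictly positive
    mask = (g_sim > 0.0) & (g_exp > 0.0)
    du = np.zeros_like(u_old)
    du[mask] = alpha * kBT * w[mask] * np.log(g_sim[mask] / g_exp[mask])
    u_new = u_old + du
    u_new[r >= r_cut] = 0.0                  # truncate the correction at 10 A
    return u_new, Akima1DInterpolator(r, u_new)
```

If the potentials are tabulated in eV, kBT = 8.617e-5 * T, e.g. about 0.088 eV at T = 1023 K.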
The Akima interpolation method uses a continuously differentiable sub-spline built from piece-wise cubic polynomials, so that both u(r) and its first derivative are continuous. At every iteration step t at which the corrective potential is updated, MD is performed in the NPT ensemble to allow the system to equilibrate to the new density. The RDFs for the next iteration are then averaged over 100 configurations over a 10 ps trajectory to ensure smoothness. In the original IBI method, the weight function is w(r)=1. In our variant of this method, which we call the weighted Iterative Boltzmann Inversion (wIBI), we select w(r)=g_E(r). In the limit t → ∞, both IBI and wIBI should converge to the same corrective potential u(r) that yields a perfect simulated RDF <cit.>. By design, the wIBI method effectively ignores errors in the RDF at very small r, which may be associated with experimental uncertainty <cit.>, and favors corrections at the RDF peaks. We further truncate u(r) beyond 10 Å because g_E(r) → 1 for all temperatures considered. Other functional forms for the weight w(r) may be used, provided that w(r) is positive semi-definite and w(r) > 0 for all r where the experimental RDF is non-zero.
Figure <ref> shows how the corrective potential u^t(r) generated using the wIBI method results in an improved match with the experimental RDF at 1023 K <cit.>. The overstructured ANI-MLP simulated RDF (zeroth generation) is evident in the inset of Fig. <ref>. By the 15th generation, the first shell peak matches the experimental results. Figure <ref> compares the wIBI (w(r)=g_E(r)) and IBI (w(r)=1). For r > 3 Å, u(r) for both wIBI and IBI is similar. The shape of u(r) reflects the initial differences between the original MLP RDF and the experimental one, Δg(r) = g^{t=0}(r) - g_E(r). Compared to the experimental RDFs, the ANI and HIP-NN simulated RDFs are overstructured for all temperatures between 943 K and 1323 K obtained from different experiments <cit.>. The RDFs for 1023 K and 1323 K show overstructuring in the first shell, whereas for 1148 K and 1198 K the second shell is overstructured as well. In Fig. <ref>, the corrective potentials u^{t=15}(r) at the 15th generation of wIBI for both ANI and HIP-NN at 1023 K, 1148 K, 1198 K, and 1323 K highlight that larger corrections are needed at higher temperatures. In Fig. <ref>, the shape of u(r) reflects the corresponding Δg(r), namely the differences between the MLP RDF and the experimental one. Given that the required correction is very similar for ANI and HIP-NN, we attribute the overstructured MLP-simulated RDFs to limitations of the DFT method used for the training data. Similar overstructured RDFs for DFT and other ab initio methods have been observed in metals <cit.> and water <cit.>. Typically, DFT functionals are well parameterized for near-equilibrium configurations and may perform poorly for the high temperature liquid phase. While improved DFT functionals can potentially alleviate some of these issues, our work presents a data-driven correction, which may be readily applied to other systems <cit.>.
§ OUT OF SAMPLE VALIDATION To validate our results, we compare the diffusion constants calculated for both the ANI and HIP-NN MLPs with and without the IBI corrective potential u(r) at different temperatures. We measure the diffusion constants by averaging over 30 trajectories of length 1 ps and 1000 snapshots.
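The diffusion-constant estimate used here amounts to a linear fit of the atom-averaged mean-squared displacement via the Einstein relation. A minimal sketch (assuming unwrapped positions are available as an array, and ignoring the averaging over 30 independent trajectories described above) is:

```python
import numpy as np

def diffusion_constant(positions, dt_fs):
    """Einstein-relation estimate D = slope(MSD vs t) / 6 from one trajectory.

    positions : array of shape (n_frames, n_atoms, 3), unwrapped coordinates in A.
    dt_fs     : time between stored frames, in femtoseconds.
    Returns D in A^2/fs (multiply by 0.1 to convert to cm^2/s).
    """
    disp = positions - positions[0]                  # displacement from frame 0
    msd = np.mean(np.sum(disp**2, axis=2), axis=1)   # average over atoms
    t = dt_fs * np.arange(len(msd))
    half = len(msd) // 2                             # skip the short-time regime
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / 6.0
```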
We fit the simulated trajectory to the Einstein equation to infer the diffusion constant using the Atomic Simulation Environment codebase <cit.>. Figure <ref> shows that the ANI MLP diffusion constants are underestimated compared to the data from two different experiments <cit.> for all temperatures. The ANI and HIP-NN (not shown) MLPs underestimate the dependence of the diffusion constant on temperature; the slope relating the diffusion constant D to temperature T differs from the experiment by approximately a factor of two. The underestimated diffusion constants are physically consistent with the overstructured RDFs predicted by the MLPs. As expected, the deviation between the DFT-based MLP prediction and the experimental diffusion constant decreases with temperature. For each temperature, u(r) improves both the overstructured MLP-simulated RDF and the underestimated diffusion constant. We find that u(r) improves the predictions of both MLPs. The ANI MLP overestimates of the equilibrium densities in the melt phase, as seen in Fig. <ref>, are also partially corrected by the corrective potential u(r). Note that u(r) is trained only to the RDF, which is an equilibrium statistic, independent of dynamics. In contrast, the diffusion constant is a dynamical property. Noticeable improvements in the prediction of an `out-of-sample' dynamical observable are strong evidence that the IBI corrective potential is physically meaningful.
We find that extrapolating the corrective pair potential to temperatures beyond the experimental training data can lead to incorrect predictions. The high-temperature corrective potentials we trained, u(r; T=1023 K) and u(r; T=1323 K), are ineffective when extrapolated to the solid phase. Either of these corrections worsens MLP predictions of zero-temperature properties such as lattice constants and cold curves. Near the aluminum melting point of 933 K, the corrective potentials have a more neutral effect. The original ANI MLP predicts a melting temperature of 920±5 K, and adding the u(r; T=1023 K) corrective potential does not alter this. However, adding the u(r; T=1323 K) correction lowers the melting temperature to 905±5 K, which is further from the true experimental value. The corrections derived for the higher-temperature melt phase are not found to be helpful at lower temperatures. As such, care should be taken when applying the IBI corrections, which are only applicable to MLP simulations in the relevant temperature regime.
§ CONCLUSIONS This study reports a method for generating a corrective pair potential for two distinct MLPs to match target experimental RDFs using the modified IBI technique. Compared to the traditional IBI method with a uniform w(r)=1 weight, our wIBI uses a distance-dependent weight w(r)=g_E(r), which avoids unphysical corrections at small distances. Trained on a DFT dataset alone <cit.>, both ANI and HIP-NN accurately reproduce DFT energies and forces, as well as cold curves and lattice constants in the low-temperature solid crystalline phases. Adding a temperature-dependent corrective pair potential fixes the overstructured RDFs in the high temperature liquid phase. The improved predictions for diffusion constants indicate that the corrective potential is physically valid. Such out-of-sample validation tests, here the diffusion constant predictions, are important for any framework that incorporates experimental results into MLPs. Our work does not require auto-differentiation through an MD solver and can be applied to any MLP.
Furthermore, the results are interpretable because the form of the pair potential relates to the differences in between the RDFs from simulation and experiment. If more experimental data is collected, it can be readily incorporated to further improve our results. Here, the wIBI potential u(r) makes small corrections on top of an existing MLP. Future work could explore the efficacy of the method when there are significant deviations between the MLP and experimental RDFs. Another important consideration is that each wIBI corrective potential has been derived from experimental data for a specific temperature. Naive application of such a corrected MLP to new temperature regimes may yield poor accuracy. This is particularly evident when applying high temperature corrections to simulations at low temperature, as was shown in the aluminum examples. In the future, we will extend our methods by using other inverse methods such as relative entropy minimization <cit.>, or re-weighting techniques <cit.>. By combining the differentiable trajectory re-weighting technique <cit.> with our current methods, we may be able avoid long MD simulations when learning from experimental targets. Additionally, training to the three-body angular distribution function would be relevant for MLPs of water <cit.>. We intend to explore how multi-state IBI methods <cit.> may be used to fit to RDFs from different temperatures simultaneously and ideally provide continuous corrections to QM based MLPs. Ultimately, this work is an example of how the experimental data can complement MLPs trained on ab initio data. By incorporating temperature-dependent corrective pair potentials, the resulting models allow for accurate simulations of aluminum in the melt phase. The magnitude of the learned corrections decreases monotonically with decreasing temperature. We did not, however, find any benefit in extrapolating these corrections to the solid phase, where the original MLP is known to be accurate <cit.>. §.§ Acknowledgments We acknowledge support from the US DOE, Office of Science, Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division under Triad National Security, LLC (“Triad”) contract Grant 89233218CNA000001 (FWP: LANLE3F2). The research is performed in part at the Center for Nonlinear Studies (CNLS) and the Center for Integrated Nanotechnologies Nanotechnologies (CINT), a U.S. Department of Energy, Office of Science user facility at Los Alamos National Laboratory (LANL). This research used resources provided by the LANL Institutional Computing (IC) Program and the CCS-7 Darwin cluster at LANL. LANL is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (Contract No. 89233218NCA000001). § HYPER-PARAMETERS The ANI MLPs are implemented in the NeuroChem C++/CUDA software packages. Pre-compiled binaries for the ensemble of ANI-MLP is available for download <cit.>. The he loss function is ℒ = w_energy (Ê-E)^2 + w_force^2 ∑_j=1^3N (f̂_j - f_j)^2, where N is the number of atoms in a configuration or sample. Weight of 1.0 and 0.01 is used for the energy term and force terms respectively. Batch-size of 128 was used. The ADAM update is used during training. The learning rate was initialized at 0.001 and ultimately converged to 0.00001, following the annealing schedule in Ref. 
smith2017ani All ANI-Al model symmetry function parameters are provided below: Radial Cutoff (Radial): 7.0 Radial Cutoff (Angular): 5.0 Radial Eta: [43.9] Radial Shift: [1.2500000, 1.4296875, 1.6093750, 1.7890625, 1.9687500, 2.1484375, 2.3281250, 2.5078125, 2.6875000, 2.8671875, 3.0468750, 3.2265625, 3.4062500, 3.5859375, 3.7656250, 3.9453125, 4.1250000, 4.3046875, 4.4843750, 4.6640625, 4.8437500, 5.0234375, 5.2031250, 5.3828125, 5.5625000, 5.7421875, 5.9218750, 6.1015625, 6.2812500, 6.4609375, 6.6406250, 6.8203125] Angular Zeta: [69.4] Angular Shift: [0.19634954, 0.58904862, 0.98174770, 1.3744468, 1.7671459, 2.1598449, 2.5525440, 2.9452431] Angular Eta: [6.5] Angular Radial Shift: [1.2500000, 1.7187500, 2.1875000, 2.6562500, 3.1250000, 3.5937500, 4.0625000, 4.5312500] The HIP-NN <cit.> MLP is implemented in PyTorch software package and is available for download <cit.>. The loss function is ℒ = 100 ×RMSE_energy-per-atom + 100 ×MAE_energy-per-atom + RMSE_forces + MAE_forces + 10^-6×∑_iw_i^2, where the last term corresponds to the L2 regularization with respect to the weights of the network. We use a network with 1 interaction layer, 3 atom layers (feed-forward layer) with a width of 15 features. For the sensitivity functions, 20 radial basis functions are used with a soft-min cutoff of 1.25, the soft maximum cutoff of 7.0, and hard maximum cutoff of 5. We used the Adam Optimizer, with an initial learning rate of 0.001, which is halved with a patience of 25 epochs.
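For concreteness, the HIP-NN loss given above can be written schematically in PyTorch as follows. Tensor shapes, batching and the exact weight-decay bookkeeping are simplified assumptions rather than the actual training code.

```python
import torch

def hipnn_style_loss(E_pred, E_true, F_pred, F_true, n_atoms, model, l2=1e-6):
    """Schematic version of the loss above:
    100*(RMSE + MAE) on the energy per atom, (RMSE + MAE) on the force
    components, plus an L2 penalty on the network weights."""
    e = (E_pred - E_true) / n_atoms            # per-configuration energy-per-atom errors
    f = (F_pred - F_true).reshape(-1)          # flattened force-component errors
    rmse_e, mae_e = torch.sqrt(torch.mean(e**2)), torch.mean(torch.abs(e))
    rmse_f, mae_f = torch.sqrt(torch.mean(f**2)), torch.mean(torch.abs(f))
    l2_term = l2 * sum((w**2).sum() for w in model.parameters())
    return 100.0 * (rmse_e + mae_e) + rmse_f + mae_f + l2_term
```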
http://arxiv.org/abs/2307.04864v1
20230710192410
Hyperelliptic and trigonal modular curves in characteristic $p$
[ "Maarten Derickx", "Filip Najman" ]
math.NT
[ "math.NT" ]
Let X_Δ(N) be an intermediate modular curve of level N, meaning that there exist (possibly trivial) morphisms X_1(N) → X_Δ(N) → X_0(N). For all such intermediate modular curves, we give an explicit description of all primes p such that X_Δ(N)_{𝔽̄_p} is either hyperelliptic or trigonal. Furthermore, we also determine all primes p such that X_Δ(N)_{𝔽_p} is trigonal. This is done by first using the Castelnuovo-Severi inequality to establish a bound N_0 such that if X_0(N) is hyperelliptic or trigonal in characteristic p, then N ≤ N_0. To deal with the remaining small values of N, we develop a method based on the careful study of the canonical ideal to determine, for a fixed curve X_Δ(N), all the primes p such that X_Δ(N) is trigonal or hyperelliptic in characteristic p. Furthermore, using similar methods, we show that X_Δ(N) is not a smooth plane quintic in characteristic p, for any N and any p.
Hyperelliptic and trigonal modular curves in characteristic p Maarten Derickx, Filip Najman August 12, 2023
=================================================================
§ INTRODUCTION Let k be a field and C a curve over k. Throughout the paper we will assume all curves are geometrically integral and proper over k. The gonality gon_k(C) of C over k is the least degree of a non-constant morphism f: C → ℙ^1_k. Equivalently, it is the least degree of a non-constant function f ∈ k(C). We will study the gonality of modular curves, and in particular of "intermediate" modular curves X_Δ(N). Let N be a positive integer and let Δ be a subgroup of (ℤ/Nℤ)^×/{±1}; we will say that Δ is of level N. Let X_Δ(N) be the modular curve, defined over ℚ, associated to the modular group Γ_Δ = Γ_Δ(N) defined by Γ_Δ = { [ a b; c d ] ∈ SL_2(ℤ) | c ≡ 0 mod N, (a mod N) ∈ Δ }. If Δ is the trivial subgroup of (ℤ/Nℤ)^×/{±1}, then X_Δ(N) = X_1(N), and for Δ = (ℤ/Nℤ)^×/{±1} we have X_Δ(N) = X_0(N). The morphisms X_1(N) → X_0(N) factor through X_Δ(N): X_1(N) → X_Δ(N) → X_0(N).
More recently, Najman and Orlić <cit.> determined the X_0(N) that are tetragonal and pentagonal over and determined the gonality of X_0(N) for all N<135. All the curves X_1(N) with _ X_1(N)=d were determined for d=2 by Ishii and Momose <cit.> (they even determined all hyperelliptic X_Δ(N)), for d=3 by Jeon, Kim and Schweizer <cit.> and for d=4 by Jeon, Kim and Park <cit.> and for 5 ≤ d ≤ 8 by Derickx and van Hoeij <cit.>. Derickx and van Hoeij <cit.> also determined _ X_1(N) for all N≤ 40 and gave upper bounds for N≤ 250. More generally, Abramovich <cit.> gave a lower bound for the gonality of any modular curve over (which is usually not sharp). From this result, it easily follows that there are only finitely many modular curves X with _ X ≤ d for some fixed positive integer d. In this paper we consider the gonality of the modular curves X_Δ(N) over _p and _p. Poonen <cit.> showed that if one fixes the prime p, then the set of Γ such that __pX_Γ≤ d is finite, and gave lower bounds on __p^2X_Γ and __pX_Γ, depending on p and the index of Γ in _2() <cit.>. In <Ref> and <Ref> we will explicitly find all the pairs (Γ_Δ,p), with p not dividing the level of Γ_Δ such that __pX_Γ=2 and __pX_Γ=3. It is easy to see that, for any curve X, __pX≤_X. It is natural to ask, in the case of (some sets of) modular curves, when does equality hold? A question in this direction, which was also one of the motivations for this paper, was asked by David Zureick-Brown on MathOverflow[<https://mathoverflow.net/questions/132618/hyperelliptic-modular-curves-in-characteristic-p>]. Are there any N such that X_0(N)_ is not hyperelliptic but for some p not dividing N, X_0(N)__p is hyperelliptic? We show in <Ref> below that the answer to this question is negative. We consider the following more general questions. A) Given some family of modular curves S and a positive integer d, can we determine all X∈ S and primes p of good reduction for X such that __pX = d? B) Given S and d, can we also determine the X and p as above such that __pX = d? For d=2 both versions of the question are equivalent if X() ≠∅ for all X ∈ S, since a curve with an _p point is hyperelliptic over _p if and only if it is hyperelliptic over _p. We answer these questions for d=2 and 3 where S is the set of all intermediate modular curves X_Δ(N). Let N be a positive integer, p a prime not dividing N, and X_Δ(N) an intermediate modular curve of level N. Then 2=__pX_Δ(N) < _X_Δ(N). if and only if N=37, Δ=⟨ 4 ⟩≤ (/ N)^× and p=2. Let N be a positive integer, p a prime not dividing N, and X_Δ(N) an intermediate modular curve of level N. Then 3=__pX_Δ(N) < _X_Δ(N). if and only if X_Δ(N)=X_0(73) and p=2. To answer <Ref> A) for d=3, in <Ref> we determine the fields of definition of all trigonal maps X_Δ(N) →^1 in characteristic p, for all p not dividing N. Anni, Assaf and Lorenzo García recently proved <cit.>, among other results, that there are no modular curves X_Δ(N) that have a smooth plane model of degree 5. We prove the same result for all X_Δ(N) over all _p. Let N be a positive integer, p a prime not dividing N, and X_Δ(N) an intermediate modular curve of level N. Then X_Δ(N)__p is not a smooth plane quintic. We prove our results in 2 steps. First, we show that the set of N we need to consider is such that the class number h(-4N) is not too large (see <Ref> and <Ref>). The (now proven) Gauss conjecture then reduces the problem to dealing with only finitely many N. 
This strategy could in principle also be used to classify all hyperelliptic and trigonal modular curves over ℚ and ℂ. Second, to deal with the remaining N, we develop explicit computational criteria (see <Ref>), based on Petri's theorem, to check whether, for a given N, there exists an X_Δ(N) of level N and a prime p not dividing N such that X_Δ(N)__p is hyperelliptic/trigonal/a smooth plane quintic. All the code used to obtain our results can be found at <cit.>. §.§ Acknowledgements We thank John Voight and David Zureick-Brown for their helpful discussion and pointers to relevant literature. § BACKGROUND AND NOTATION We now set up notation that will be used throughout the paper. By X_0(N) we denote the classical modular curve parametrizing isomorphism classes of pairs (E, C) of generalized elliptic curves together with a cyclic subgroup C of order N. We will call a divisor d>1 of N an Atkin-Lehner divisor if (d,N/d)=1. For any Atkin-Lehner divisor d of N the Atkin-Lehner involution w_d acts on (E,C) by sending it to (E/C,E[N]/C). The Atkin-Lehner involutions form a subgroup of (X_0(N)) isomorphic to (/2)^ω(N), where ω(N) is the number of prime divisors of N. The curve X_0(N) and all its Atkin-Lehner involutions are defined over . The quotient X_0(N)/w_N is denoted by X_0^+(N) and the quotient of X_0(N) by the whole group of Atkin-Lehner involutions is denoted by X_0^*(N). The number of the ramification points ν(d, N) of the map X_0(N)→ X_0(N)/w_d will be given in terms of class numbers of imaginary quadratic fields (see <Ref> and <Ref>). We give the relevant data, which is taken from <cit.>, about quadratic imaginary fields of class number up to 100 in <Ref> By X_1(N) we denote the modular curve whose k-rational points parameterize pairs (E,P) where E is a generalized elliptic curve and P∈ E(k) is a point N, up to k-isomorphism. For a d∈ (/N)^× /-1, the diamond operator ⟨ d⟩ acts on X_1(N) by sending (E,P) to (E,dP). For a subgroup Δ≤ (/N)^× /-1, X_Δ(N) is the quotient of X_1(N) by the automorphism group of diamond operators ⟨ d ⟩, for d∈Δ. We note that for Δ=(/N)^×, we have X_Δ(N) =X_0(N) and for Δ=-1, we have X_Δ(N)=X_1(N). For any Δ≤ (/N)^× /-1, the curve X_Δ(N) lies "in between" X_0(N) and X_1(N), in the sense that there are (Galois) morphisms X_1(N)→ X_Δ(N)→ X_0(N); hence the curves X_Δ(N) are called intermediate modular curves. Let X/k be a curve over a number field k of genus g≥ 2. The canonical ring (or as it is often called, the homogenous coordinate ring) of X is R(X):=⊕_d=0^∞H^0(X, Ω ^⊗ d_X/k). Let V:=H^0(X,Ω_X/k) and (V):=⊕_d=0^∞^d(V). The identity map ^1(V)→ R(X)_1 (which just sends V to V) induces a map of graded rings f_can:(V)→ R(X). Hence we obtain the canonical map X≃ (R(X)) → ( (V)) ≃^g-1_k. The ideal I_can:= f_can⊆ (V) is called the canonical ideal. Recall Petri's theorem (<cit.>, see also <cit.> for a historical overview): for a nonsingular projective curve X of genus ≥ 2 that is neither hyperelliptic, trigonal (possessing a map X→^1 of degree 3) nor a smooth plane curve of degree 5, the canonical ring R(X) is generated in degree 1 and I is generated in degree 2. More precisely, we have the following proposition. Let X/k be a curve of genus g ≥ 3. By V· (I_can)_2 we denote the image of V⊗ (I_can)_2 in (I_can)_3. a) X is hyperelliptic over k if and only if (I_can)_2 ⊆^2 (V) is of dimension g-12. b) Suppose X is not hyperelliptic and not a smooth a plane quintic. Then X is trigonal over k if and only if the dimension of (I_can)_3/(V· (I_can)_2) is g-3. 
c) Suppose X is a smooth plane quintic over k. Then g=6 and the dimension of (I_can)_3/(V· (I_can)_2) is g-3=3. This follows directly from <cit.>. § BOUNDS FOR HYPERELLIPTIC AND TRIGONAL CURVES We say that a curve is subhyperelliptic if it is of gonality ≤ 2. Let k be a prefect field, and let X, Y, Z be curves over k. Let non-constant morphisms π_Y:X→ Y and π_Z:X→ Z over k be given, and let their degrees be m and n, respectively. Assume that there is no morphism X→ X' of degree >1 through which both π_Y and π_X factor. Then the following inequality hold: g(X)≤ m · g(Y)+n· g(Z) +(m-1)(n-1). Let, as before, N be a integer, d an Atkin-Lehner divisor of N and let f_d:X_0(N)→ X_0(N)/w_d be the quotient map by the Atkin-Lehner w_d involution and denote by ν(d;N) the number of complex ramification points of f_d. Let N > 4 be an integer. The number of ramification points ν(N;N) of f_N satisfies ν(N;N)= h(-4N) +h(-N) if N ≡ 3 4, h(-4N) otherwise. In fact in <cit.> there is also a description of v(d;N) when d ≠ N. While we used that formula in some explicit calculations, it is not necessary for the main argument. Let X be a curve over a field k of genus g, let f:X→^1 be a morphism of prime degree p, and suppose G ≤_k̅ X is a Galois stable subgroup such that X/G is of genus 0. If g > (p-1)(#G-1) then f is cyclic and there is an automorphism σ of order p in G such that f is quotienting by σ. Since g > (p-1)(#G-1) the Castelnuovo-Severi inequality applied to g:X → X/G and f tells us that f and g have a common factor. But since f is of prime degree, it follows that g factors through f. Since, by Galois theory, all intermediate curves X→ X' → X/G are of the form X'=X/H for some subgroup H of G, it follows that there exists a subgroup G' of G of order p such that f is quotienting by G', proving the claim. Let p be a prime, N an integer, and k an algebraically closed field of characteristic coprime to N. Let Δ be a subgroup of (/N)^×/-1, such that w_d Δ w_d^-1 = Δ. Suppose there is a d | N with (d,N/d)=1 such that X_0(N)/w_d is of genus 0, and that genus(X_Δ(N)) > (p-1)(ϕ(N)/#Δ-1). Then any morphism f:X_Δ(N)→^1 of degree p is cyclic and there is an automorphism σ is in the group generated by w_d and the diamond operators, such that f is quotienting by σ. The condition w_d Δ w_d^-1 = Δ ensures that the map X_1(N)/Δ→ X_0(N)/w_d is Galois. The result now follows by applying <Ref> to the subgroup of X_Δ(N) generated by w_d and the diamond operators. Suppose k is a field such that p= k does not divide N. Let d>1 be an Atkin-Lehner divisors of N and f_d:X_0(N)→ X_0(N)/w_d the quotient map. The following hold: * if X_0(N)_k is hyperelliptic, then either (X_0(N)/w_d)_[1/N] is of genus 0 or ν(d;N) ≤ 4. * if f: X_0(N)_k →^1_k is a map of odd degree m then ν(d;N) ≤ 2m. * if f: X_0(N)_k →^1_k is a map of even degree m then either ν(d;N) ≤ 2m or f factors via f_d. We prove (1); part (2) and (3) are proven analogously, where in (2) we use that an odd degree map cannot factor via f_d. First observe that since X_0^+(N)_[1/N] is smooth, the genus is preserved under base change to k. It follows that X_0(N)/w_d has genus 0 over k if and only if it has genus 0 over [1/N]. Hence if its genus is nonzero, then f_d is not the hyperelliptic map. Note that since the degree of a map doesn't change under field extensions the proposition for k follows from the proposition for k̅, so we may assume k is algebraically closed and hence perfect. 
Applying the Castelnuovo-Severi inequality to the maps f_d and the hyperelliptic h:X_0(N)→^1, we get g(X_0(N))≤ 2g(X_0(N)/w_d)+1 . Applying the Riemann-Hurwitz formula to the map f_d we get 2g(X_0(N))=4 g(X_0(N)/w_d)-2+ν(d;N) . Combining (<ref>) and (<ref>) yields the claimed result. Suppose p and N are different primes and that N is such that X_0(N)_[1/N] is not hyperelliptic, while X_0(N)__p is. Then ν(N)≤ 4. Let N and p such that they satisfy our assumptions. First observe that since X_0^+(N)/[1/N] is smooth, and hence the genus is preserved under reduction modulo p, it follows that X_0^+(N) has genus 0 over _p if and only if it has genus 0 over [1/N]. Hence f_N is not the hyperelliptic map. Apllying the Castelnuovo-Severi inequality to the maps f_N and the hyperelliptic h:X_0(N)→^1, we get g(X_0(N))≤ 2g(X_0^+(N))+1 . Applying the Riemann-Hurwitz formula to the map f_N we get 2g(X_0(N))=4 g(X_0^+(N))+ν(N) . Combining (<ref>) and (<ref>) yields the claimed result. §.§ Hyperelliptic curves Let N be an integer, Δ≤ ( /N )^×/-1 and p a prime not dividing N. We will say that a pair (Δ,p) is an exceptional hyperelliptic pair if X:=X_Δ(N) is not hyperelliptic over [1/N], but X__p is hyperelliptic. We define S_0:={ 34,43,45,52,57,64,67,72,73,85,93,97,163,193}, H:={ 22, 23, 26, 28, 29, 30, 31, 33, 35, 37, 39, 40, 41, 46, 47, 48, 50, 59, 71 }, and SH:=H ∪{N≤ 32, 36, 49}. The set H is the set of N such the X_0(N) is hyperelliptic and SH is the set of N such that X_0(N) is subhyperelliptic. The set S_0 consists of the values of N such that X_0(N)_[1/N] is not subhyperelliptic, and v(d;N)≤ 4 for all Atkin-Lehner divisors d of N. Let Δ=( /N )^×/-1, i.e. X_Δ(N)=X_0(N). If (Δ,p) is an exceptional hyperelliptic pair, then N ∈ S_0. Suppose X_0(N)__p is hyperelliptic. By <Ref> (1), it follows that either X_0(N)_[1/N] is hyperelliptic (in which case (Δ,p) is not exceptional) or v(d;N)≤ 4 for every Atkin-Lehner divisor d of N. Using <Ref> we compute that v(d;N)≤ 4 for all Atkin-Lehner divisors d of N and X_0(N)_[1/N] is not subhyperelliptic only if N∈ S_0 ∪{88,148,232}. To rule out the values N∈{88,148,232} we note that for each of these N, there exists a divisor n|N such that X_0(n)_[1/n] is not subhyperelliptic and v(d;n)> 6 for some Atkin-Lehner divisor d of n. Hence for all primes p not dividing N (and hence not dividing N), it follows by the same argument as above that X_0(n)__p is not hyperelliptic. Now it follows by <cit.> that X_0(N)__p is not hyperelliptic. §.§ Trigonal curves Let N be an integer, Δ≤ ( /N )^×/-1 and p a prime not dividing N. We will say that a pair (Δ,p) is an exceptional trigonal pair if X_Δ(N) is not trigonal over [1/N], but is trigonal over _p. We will say (N,p) is an exceptional pair if X_0(N) is not trigonal over [1/N], but is trigonal over _p. Let S_1 be the following set S_1:={ 34, 37, 38, 40, 43, 44, 45, 48, 50, 52, 53, 54, 57, 58, 61, 64, 67, 72, 73, 76, 81, 85, 88, 93, 97, 106, 108, 109, 121, 157, 162, 163, 169, 193, 277, 397}. Let X=X_0(N). If (N,p) is an exceptional trigonal pair, then N ∈ S_1. Suppose X_0(N)__p is trigonal. By <Ref> (2), it follows that either X_0(N)_[1/N] is trigonal (in which case (N,p) is not exceptional) or v(d;N)≤ 6 for an Atkin-Lehner divisors d of N. Using <Ref> we compute that v(d;N)≤ 6 and X_0(N)_[1/N] is not trigonal only if N∈ S_1∪{148, 172, 232, 268, 652}. 
To rule out the values N∈{148, 172, 232, 268, 652} we note that for each of these N, there exists a divisor n|N such that X_0(n)_[1/n] is of gonality >3 and v(d;n)> 6 for some Atkin-Lehner divisor d of n. Hence for all primes p not dividing N (and hence not dividing n), it follows by the same argument as above that X_0(n)_𝔽_p is not trigonal. Now it follows by <cit.> that X_0(N)_𝔽_p is not trigonal. In <Ref> we will show that there is exactly one exceptional trigonal pair (N,p), namely (73,2). Using the same arguments as at the end of <Ref>, we see that the proof of <Ref> is now reduced to checking finitely many cases. If (N,p) is a hyperelliptic pair and q is a divisor of N, then q ≤ 253. Let p, q and N be such that they satisfy our assumptions and let d:X_0(N)→ X_0(q) be the degeneracy map. Since X_0(N)_𝔽_p is by assumption hyperelliptic, it follows by <cit.> that gon_𝔽_p(X_0(q))≤ 2. Hence either gon_ℚ(X_0(q))≤ 2 or X_0(q) satisfies the assumptions of <Ref>. The set S_2 of primes q such that gon_ℚ(X_0(q))≤ 2 is known by the work of Ogg <cit.>. We get that the set of possible prime divisors q of N such that X_0(N)_𝔽_p is hyperelliptic, while X_0(N)_[1/N] is not, is contained in S:=S_1∪ S_2. § CHECKING HYPERELLIPTICITY AND TRIGONALITY OF A GIVEN X_Δ(N) OVER 𝔽_p FOR ALL p In <Ref> we showed that to find all exceptional hyperelliptic pairs (Δ,p) we need to consider only finitely many subgroups Δ, i.e. only those for which either the level is in S_0 or X_0(N) is subhyperelliptic. Similarly, to determine all exceptional trigonal pairs (Δ,p) we need to consider only the finitely many subgroups Δ for which either the level is in S_1 or X_0(N) is of gonality ≤ 3 over ℚ. In this section we explain how to find, for a given Δ, all the p such that (Δ, p) is an exceptional hyperelliptic or trigonal pair. We will first need the following lemma. Let R be a discrete valuation ring with residue field k, fraction field K and uniformizer π. Let X be a nice (meaning smooth, projective, and geometrically integral) curve over R and suppose ℒ is a line bundle on X such that ℒ(X_k) and ℒ(X_K) have the same dimension. Then the map ℒ(X)⊗_R k →ℒ(X_k) is an isomorphism. This is done by taking global sections of the exact sequence 0 →ℒ→ℒ→ℒ/πℒ→ 0, where the first map is multiplication by π. Since ℒ/πℒ≅ℒ⊗ k, taking global sections of this sequence gives an injection ℒ(X)⊗_R k →ℒ(X_k). Comparing dimensions shows that it has to be an isomorphism. Let X be a nice curve over ℤ[1/N] of genus g>2. We use the notation set up in <Ref>; in particular, V:=H^0(X,Ω_X_[1/N]). We have that V, Sym^2(V) and R(X)_2=H^0(X, Ω ^⊗ 2_X_[1/N]) are free ℤ[1/N]-modules of rank g, g(g+1)/2 and 3g-3, respectively. From the previous lemma we get V⊗𝔽_p ≃ H^0(X_𝔽_p, Ω_X_𝔽_p), Sym^m(V) ⊗𝔽_p ≃ Sym^m(V⊗𝔽_p), R(X)_m ⊗𝔽_p ≃ R(X_𝔽_p)_m =H^0(X_𝔽_p, Ω ^⊗ m_X_𝔽_p). The degree m part (I_can, 𝔽_p)_m of the canonical ideal of X_𝔽_p can be represented as (I_can, 𝔽_p)_m=ker( f^m_can,𝔽_p: Sym^m(H^0(X_𝔽_p, Ω_X_𝔽_p))→ H^0(X_𝔽_p, Ω ^⊗ m_X_𝔽_p) ). §.§ Hyperelliptic curves It follows by <Ref> and <cit.> that X_𝔽_p is hyperelliptic if and only if dim (I_can, 𝔽_p)_2= (g-1)(g-2)/2. Now the map f^2_can,𝔽_p is the reduction mod p of the map f^2_can: Sym^2(V) → R(X)_2. By putting a matrix representing the map f^2_can into Smith normal form it is easy to see modulo which primes the dimension of the kernel is (g-1)(g-2)/2. So we have a criterion to detect the primes of hyperelliptic reduction.
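The criterion above is easy to automate. The following Python/SymPy sketch is our own illustration of the rank-drop computation and is not the authors' code referenced earlier; all function names are ours, and we assume the matrix of f^2_can has been scaled to have integer entries, so that primes dividing N may appear spuriously and must be discarded.

# Illustrative sketch: given an integer matrix M representing
# f^2_can : Sym^2(V) -> R(X)_2 in chosen bases (denominators cleared, so
# primes dividing N must be ignored), the Smith normal form
# diag(d_1, ..., d_r, 0, ...) determines the rank of M mod p for every p:
#   rank over F_p  =  #{ i : p does not divide d_i },
# hence dim ker(M mod p) = (number of columns) - that count, and X over F_p
# is hyperelliptic precisely when this kernel dimension equals (g-1)(g-2)/2.
from sympy import Matrix, ZZ, factorint
from sympy.matrices.normalforms import smith_normal_form

def elementary_divisors(M):
    D = smith_normal_form(Matrix(M), domain=ZZ)
    return [int(D[i, i]) for i in range(min(D.shape)) if D[i, i] != 0]

def kernel_dim_mod_p(M, p):
    M = Matrix(M)
    rank_mod_p = sum(1 for d in elementary_divisors(M) if d % p != 0)
    return M.cols - rank_mod_p

def candidate_hyperelliptic_primes(M, N, g):
    """Primes p not dividing N with dim ker(M mod p) = (g-1)(g-2)/2."""
    target = (g - 1) * (g - 2) // 2
    candidates = set()
    for d in elementary_divisors(M):
        candidates.update(factorint(abs(d)).keys())   # rank can only drop at these p
    return sorted(p for p in candidates
                  if N % p != 0 and kernel_dim_mod_p(M, p) == target)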
To explicitly determine H^0(X_Γ, Ω _X_Γ) for a modular curve X_Γ corresponding to the congruence group Γ⊃Γ(N), we use the isomorphism <cit.> H^0(X_Γ, [1/N], Ω _X_Γ,[1/N]) ≅ S_2(Γ,[1/N]), where for a ring R,we denote by S_2(Γ,R) is the space of cusp forms of weight 2 with coefficients in R. The map f^2_can,[1/N]: ^2(H^0(X, Ω_X_[1/N]))→ H^0(X_[1/N], Ω ^⊗ 2_[1/N]) can be computed on Sym^2 S_2(Γ,[1/N]) by multiplying q-expansions. On the other hand, (Ω^1_X/[1/N]⊗Ω^1_X/[1/N]) can be identified with the subspace of S_4(Γ,[1/N]) of cusp forms of weight 4 that have a double zero at all cusps. In particular, after obtaining a matrix representing the [1/N]-module homomorphism f^2_can,[1/N] as explained above, and putting it in Smith normal form, one can easily read out the exact primes p where the rank of this matrix will change upon reduction modulo p. So we have translated everything into ranks of matrices that can easily be computed in terms of cusp forms. §.§ Trigonality of nonhyperelliptic curves For the entirety of this section let X be nice curve over [1/N] of genus g>3 such that furthermore X__p is not hyperelliptic for any prime p coprime to N. Then we can also detect whether X__p is either trigonal or a smooth plane quintic. Namely by <Ref> this happens if and only if ((I_can,_p)_3/(V ⊗_p · (I_can,_p)_2)) = g-3. The assumption that X__p is not hyperelliptic for all primes p coprime to M implies that R(X__p)_m is generated in degree 1. In particular the map f^m_can,_p in (<ref>) is surjective and the ranks of the matrices associated to f_can,^m and f_can,_p^m are the same, from which it also follows that the dimension of (I_can, _p)_m is the rank of (I_can)_m as a [1/N]-module. As a consequence we have (I_can)_m ⊗_p ≅ (I_can, _p)_m The importance of the above isomorphism is that for all primes p one has that the linear map μ__p: (V ⊗_p) ⊗ (I_can,_p)_2 → (I_can,_p)_3 is just the reduction modulo p of the linear map: μ: V ⊗ (I_can)_2 → (I_can)_3. From the surjectivity of (<ref>) one can also compute (I_can,_p)_m. Indeed (I_can,_p)_m = ^m(H^0(X__p, Ω_X__p)) - H^0(X__p, Ω ^⊗ m_X__p), = g+m-1m - (2m-1)(g-1). Putting the above equalities together one has that the primes of trigonal or smooth plane quintic reduction are exactly the primes such that the matrix μ has rank g+3-13 - 5(g-1) - (g-3) modulo p. Again, these primes can easily be read of from the matrix μ after one puts μ into Smith normal form. However, the matrix μ has a domain and codomain whose dimension grow as a cubic polynomial in g so computing this matrix and putting it in Smith normal form might become computationally very expensive once g becomes large. And in fact it does become too expensive for some of the curves for which we wanted to compute the primes of trigonal reduction. However if X([1/N]) contain an integral point then the computation can be significantly sped up, as we will describe now. From the discussion in the first paragraph of <cit.> we directly get the following lemma. Let X_k be a non-hyperelliptic curve of genus ≥ 4 over an algebraically closed field k and X_k,2 be the variety cut out by (I_can,k)_2, i.e. the quadrics vanishing on X_k. Then (1) X_k is trigonal or a smooth plane quintic if and only if X_k,2 is a surface. (2) X_k is not trigonal or a smooth plane quintic if and only if X_k,2 =X_k. Now let P ∈ X([1/N]) be a point. From <Ref> it follows that the tangent space T_P X_k,2 is 1-dimensional if and only if X is not trigonal or a smooth plane quintic. 
Since X is not hyperelliptic modulo any prime, the canonical embedding allows one to see X as a subvariety of ℙ^g-1_[1/N]. Let x_0, x_1,…,x_g-1 be the coordinates on ℙ^g-1_[1/N] of some affine neighborhood of P and let f_i be generators of (I_can)_2 on this neighborhood. Because of (<ref>) the reductions of the f_i modulo p are also generators of (I_can,𝔽_p)_2; we compute T_P_𝔽_p X_𝔽_p,2 as the kernel of the Jacobian matrix J=(∂ f_i(P)/∂ x_j)_i,j modulo p. This is a matrix that can itself be written down over ℤ[1/N], and as before putting it in Smith normal form allows us to easily read out the possible trigonal or smooth plane quintic primes, by computing the primes such that this matrix has rank < g-2. §.§ Smooth plane quintics We note that the methods of our paper do not distinguish between smooth plane quintics and trigonal curves, hence the methods of <Ref> are also used to detect possible smooth plane quintics. However, since smooth plane quintics necessarily have genus 6, this greatly reduces the number of curves that need to be considered. It fortunately turns out that no intermediate modular curves of genus 6 satisfy the necessary and sufficient conditions for being either trigonal or a smooth plane quintic. § PROOF OF THEOREMS <REF>, <REF> AND <REF> We first prove that there are no exceptional pairs (Δ,p), where Δ is the trivial group; this is done as explained in <Ref>. By <Ref> the values that need to be checked are N∈ S_0. We obtain that for p∤ N, the curve X_0(N)_𝔽_p is hyperelliptic if and only if X_0(N)_[1/N] is. It remains to consider the nontrivial subgroups Δ. Suppose Δ is nontrivial and (Δ, p) is an exceptional hyperelliptic pair. If there exists a morphism of curves X_k→ Y_k defined over a field k and X_k is hyperelliptic, then it follows that Y_k has to be subhyperelliptic <cit.>. It follows that X_Δ(N) can be hyperelliptic over 𝔽_p only if X_0(N)_[1/N] is hyperelliptic. Since for any N there are finitely many Δ of level N, we are reduced to checking the hyperellipticity of finitely many X_Δ(N) to complete the proof of <Ref>.
All the computations to verify this take 116 seconds, and we find the unique exceptional pair Δ:=⟨ 4 ⟩≤ (/ 37)^× and p=2. We first determine the exceptional trigonal pairs for the trivial groups Δ, i.e. X_Δ(N)=X_0(N). By <Ref>, the values that need to be checked are N∈ S_1. We follow the procedure described in <Ref> and get the exceptional trigonal pair X_0(73) and p=2. Using the arguments as in the proof of of <Ref>, it follows that it remains to nontrivial subgroups Δ, and the ones that need to be considered are the ones such that X_0(N)_[1/N] is of gonality ≤ 3, and in addition the Δ of level 73 for p=2. We check this, and after 23 minutes of computation obtain that there are no additional exceptional pairs. Using the arguments as in the proof of <Ref> (and the fact that we need only consider curves of genus 6), it turns out there are no intermediate modular curves that are smooth plane quintics in any characteristic. § FIELDS OF DEFINITION OF TRIGONAL MAPS ON NON HYPERELLIPTIC CURVES While <Ref> completely solves <Ref> B) for intermediate modular curves and d=3, it remains to consider <Ref> A) for trigonal curves over finite fields _q. Let X/k be a curve of genus g with X(k)≠∅ that is trigonal over k and not subhyperelliptic. Then X is trigonal over k if g=3 (<cit.>) or g>5 <cit.>. Hence the only case of interest here is g=4. Let X be a curve of genus 4 over [1/N] of non-hyperelliptic reduction at all primes of good reduction with X([1/N])≠∅. Then the canonical model of X is a smooth complete intersection of a cubic and a quadric Q in ^3_[1/N]. Let D be the discriminant of an extension of this quadric Q to ; we assume that D is not a perfect square and let Ø(D) be the order of discriminant D. Then * X is trigonal over the quadratic field Ø(D) ⊗ℚ, but not over ℚ. * For all positive odd integers n and all rational primes p coprime to N at which the order Ø(D) is maximal, X is trigonal over _p^n if and only if p splits or ramifies in Ø(D). * For all positive even integers n and all rational primes p coprime to N, X is trigonal over _p^n. Let k be a field of characteristic coprime to N with X:=X_k smooth and non-hyperelliptic. Furthermore, let W_d^r(X_k) ⊂^d X_k denote the Brill-Noether variety corresponding to line bundles that have at least r+1 linearly independent global sections. Note that since X([1/N]) ≠∅ we also have X(k) ≠∅ and hence any point in ^d X (k) actually comes from a k-rational line bundle of degree d [on the other hand, if X(k) = ∅ and k is perfect, then a point in ^d X (k) only gives a line bundle over k that is isomorphic to all its Galois conjugates]. In particular, every k-rational point on the Brill-Noether variety W_3^1(X_k) comes from a line bundle defined over k of degree 3 with two linearly independent global sections, and hence gives rise to a non-constant k-rational function degree ≤ 3, which is actually of degree 3 by the non-hyperellipticity assumption. Conversely, every k-rational function of degree 3 gives a rational point on W_3^1(X_k). In conclusion, X_k is trigonal if and only if W_d^r(X_k)(k) ≠∅. By <cit.>, the Brill-Noether variety W_3^1(X_k) either has 1 or 2 points over k. The case W_3^1(X) being isomorphic to X mentioned there cannot happen since we assumed X to not be hyperelliptic. If #W_3^1(X_k)(k)=1 then #W_3^1(X_k)(k)=1 and X_k is trigonal by the above discussion. On the other hand if #W_3^1(X_k)(k)=2, then there are exactly two possibilities. 
Namely: (a) #W_3^1(X_k)(k)=2 and hence X_k is trigonal, (b) #W_3^1(X_k)(k)=0 and hence X_k is not trigonal but becomes trigonal over the quadratic extension of k over which the two points of #W_3^1(X) are defined. We let F_X_k denote the smallest field extension of k for which #W_3^1(X_k)(F_X_k) > 0. So either F_X_k = k or it is the quadratic field over which the two points of #W_3^1(X) are defined in case (b) above. Since X_k is not hyperelliptic, the canonical model of X is a smooth complete intersection of a cubic and a quadric Q_k in ^3_k. Each g_3^1 corresponding to one of the points in W_3^1(X)(k) is a family of lines in ^3_k intersecting X_k three times, counting multiplicities. Let L denote one of these lines. By Bezout's theorem this line L lies on the quadric Q_k. In particular, if Q_k is nonsingular then the lines of the g_3^1 actually form a ruling of Q (see <cit.> again). In the nonsingular case, the field F_X_k defined above is actually the field of definition of these rulings. If k>2, then this field is obtained by adjoining to k the square root of the discriminant of the polynomial defining Q (see e.g <cit.>). Part (1) immediately follows from this discussion. Indeed, F_X_ = (√(D)) = Ø(D) ⊗ℚ. So for a field extension K of we have # W_3^1(X_)(K) > 0 if and only if K contains (√(D)). Both of these equivalent conditions are equivalent to X_K being trigonal. Part (3) follows similarly since F_X__p is either _p or _p^2. So in particular, _p^n contains F_X__p if n is even. Now we prove part (2) in the case where p splits or ramifies in Ø(D). Let 𝔭 be a prime of Ø(D) lying over p. Then by the maximality assumption on Ø(D) at p, the ring Ø(D)_𝔭 is a discrete valuation ring and the residue field of Ø(D)_𝔭 is _p. By part (1) we have X_Ø(D) ⊗ = 3. Since the gonality of a curve can only decrease under specialization and field extensions we have X__p^n≤ X__p≤ X_Ø(D)_𝔭⊗≤ X_Ø(D) ⊗ =3. Hence X__p^n=3 by the non-hyperellipticity assumption. What remains is to show part (2) when p is inert. Assume for the moment p>2. Since p is inert in Ø(D) we know that F_X__p = _p(√(D)) is a quadratic extension of _p and hence isomorphic to _p^2. It follows that X is trigonal over _p^n if and only if _p^n contains _p^2, which happens exactly when n is even. So if n is odd, X__p^n is not trigonal. The case p=2 is slightly more subtle. While it might be possible to deal with this case abstractly, we found it clearer to take a more explicit approach. Let's write the quadric Q in ^3 as Q = ∑_i=0^3 ∑_j=0^i a_i,jx_ix_j. For this quadric we will also use the matrix form notation Q = [ a_0,0 a_0,1 a_0,2 a_0,3; a_1,1 a_1,2 a_1,3; * a_2,2 a_2,3; * * a_3,3 ]. Let P ∈ X([1/N]) ⊆^3([1/N]) be a point. By choosing a suitable set of coordinates on ^3 we may assume P = (0:0:0:1) and hence a_3,3=0. By a further change of coordinates we may assume a_1,3=a_2,3=0 as well, so that Q looks like: [ a_0,0 a_0,1 a_0,2 a_0,3; a_1,1 a_1,2 0; * a_2,2 0; * * 0 ]. With Q as above, we have D = a_0,3^2(a_1,2^2 - 4a_1,1a_2,2). The assumption for p to be inert in Ø(D) is equivalent to D ≡ 5 8. In particular Q__p is nonsingular and a_0,3≡ a_1,2≡ a_1,1≡ a_2,2≡ 1 2. Since Q__2 is nonsingular it has two rulings. These two rulings both contain a line passing through P. Additionally, starting from P one can let T ⊂ P^3__2 be the tangent space to Q__2 at P__2, and T ∩ Q__2 will be exactly the union of these same two lines. 
In particular if these two lines are interchanged by the action of Galois, then these two rulings will be interchanged as well. Now let's compute T on the affine chart where x_3 ≠ 0 and let X_0,X_1,X_2 be the affine coordinates on ^3__2. Then the tangent space T is given by X_0=0, so that the union of two lines on Q__2 passing through P__2 can be described by X_0=0 and a_1,1X_1^2 + a_1,2X_1X_2 +a_2,2X_2^2. Since a_1,1≡ a_2,2≡ a_1,2 = 1 2, the polynomial a_1,1X_1^2 + a_1,2X_1X_2 +a_2,2X_2^2, is irreducible and hence so is also the scheme T ∩ Q__2. This can only happen if the geometric lines generating T ∩ Q__2 are Galois conjugates. In particular, the two rulings of Q__2 are swapped by the action of Galois, W_3^1(X__2) consist of two points whose field of definition is _2^2 and hence X__2^n is not trigonal when n is odd. In case X_k is of genus 4 with X_k(k)≠∅ and is neither hyperelliptic or trigonal over k, then it is necessarily tetragonal over k, see e.g. <cit.>. The table below lists all genus 4 intermediate modular curves by specifying their level N, the group Δ(N), and the disciriminant D of the corresponding order Ø. Note that Δ(N)=(/N)^*/± 1 means X_Δ(N)=X_0(N) and D a perfect square means that the curve is trigonal over . We note that the cases X_Δ(N)=X_0(N) had already been previously solved in <cit.>. § §.§ Discriminants of class number at most 100 Let D<0 be a fundamental discriminant and f a positive integer. Then the following formula relates class numbers of (not necessarily fundamental) discriminants to those of fundamental discriminants (see <cit.>) h(Df^2) = h(D)f/u_D,f∏_p | f( 1- (D/p) 1/p), where w_D,f = 3 if D=-3 and f ≠ 1, w_D,f = 2 if D=-4 and f ≠ 1 and w_D,f=1 otherwise. Using this formula and the list of negative fundamental discriminants of class number ≤ 100 from <cit.>, it is straightforward to compile a list of all negative discriminants of class number ≤ 100. For each class number h ≤ 100 we record the number of discriminants Df^2 < 0 such that h(Df^2)=h in the second column, and the smallest Df^2 such that h(Df^2)=h in the third column. The full list of all discriminants of class number ≤ 100 can be found at <cit.> in the file. §.§ Ramification degrees of X0(N) -> X0(N)+ at most 100 Using <Ref> and <Ref> one can easily compile a list of all integers N such that the ramification degree of X_0(N) → X_0(N)^+ is at most 100. For each degree d ≤ 100 we record the number of integers N that the ramification degree of X_0(N) → X_0(N)^+ equals d, as well at the maximum of these N. Note that by the Riemann-Hurwitz formula the ramification degree is always even. The full list of all integers N of ramification degree ≤ 100 can be found at <cit.> in the file. siam
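To make the class-number recursion of the appendix concrete, here is a small self-contained Python sketch (our own illustration, not part of the paper's computations); the table KNOWN_H of class numbers of fundamental discriminants is an assumed input, standing in for the data referenced above, and all function names are ours.

# Our own illustration of h(D f^2) = h(D) f / w_{D,f} * prod_{p | f} (1 - (D/p)/p)
# for a fundamental discriminant D < 0 and conductor f, with the Kronecker
# symbol (D/p) implemented directly for p = 2.
from sympy import factorint, legendre_symbol

KNOWN_H = {-3: 1, -4: 1, -7: 1, -8: 1, -11: 1, -15: 2, -20: 2, -23: 3}

def kronecker(D, p):
    """Kronecker symbol (D/p) for a prime p."""
    if p == 2:
        if D % 2 == 0:
            return 0
        return 1 if D % 8 in (1, 7) else -1
    return 0 if D % p == 0 else legendre_symbol(D % p, p)

def h_nonmaximal(D, f):
    """Class number h(D*f^2), given the fundamental class number h(D)."""
    if f == 1:
        return KNOWN_H[D]
    w = 3 if D == -3 else 2 if D == -4 else 1
    value = KNOWN_H[D] * f / w
    for p in factorint(f):
        value *= 1 - kronecker(D, p) / p
    return int(round(value))

# sanity checks: h(-12) = 1, h(-16) = 1, h(-36) = 2
assert h_nonmaximal(-3, 2) == 1
assert h_nonmaximal(-4, 2) == 1
assert h_nonmaximal(-4, 3) == 2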
http://arxiv.org/abs/2307.04372v1
20230710070518
New results on the dynamics of critical collapse
[ "Jun-Qi Guo", "Yu Hu", "Pan-Pan Wang", "Cheng-Gang Shao" ]
gr-qc
[ "gr-qc" ]
[email protected] School of Physics and Technology, University of Jinan, Jinan 250022, Shandong, China [email protected] Key Laboratory of Fundamental Physical Quantities Measurement, Hubei Key Laboratory of Gravitation and Quantum Physics, PGMF, and School of Physics, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China [email protected] [email protected] We study the dynamics of critical collapse of a spherically symmetric scalar field. Approximate analytic expressions for the metric functions and matter field in the large-radius region are obtained. It is found that, because of the boundary conditions at the center, the equation of motion for the scalar field in the central region is reduced to the flat-spacetime form. On the other hand, due to the connection to its neighbouring region where gravity plays an important role, the scalar field in the central region feels the gravitational effects indirectly. New results on the dynamics of critical collapse Jun-Qi Guo, Yu Hu, Pan-Pan Wang, Cheng-Gang Shao August 12, 2023 ================================================ § INTRODUCTION The critical phenomena in gravitational collapse discovered by Choptuik demonstrate the rich dynamics of the Einstein equations <cit.>. Consider the gravitational collapse of generic families of a massless scalar field whose initial data are parameterized by p. The parameter p measures the strength of the gravitational interaction. Strong interactions (high p) result in black hole formation, while for weak interactions (low p) the matter field disperses to infinity and flat spacetime is left behind. By fine-tuning p to the threshold of black hole formation, p=p_*, critical collapse occurs. In supercritical collapse, a tiny black hole forms whose mass obeys the scaling relation m_BH∝|p-p_*|^γ, where γ≃ 0.37. The critical collapse solution exhibits universality: the spacetime produced by different families of critical initial data approaches the same solution after a finite time in a finite region. The solution also displays discrete self-similarity: it is invariant under rescaling the spacetime by a certain factor. After the discovery, similar results have been obtained in many other models (see Ref. <cit.> for a review). Recently, further results on simulations were reported in Refs. <cit.>. Analytic interpretations are important for understanding the dynamics of gravitational collapse. In Refs. <cit.>, critical collapse was treated as an eigenvalue problem. By imposing discrete self-similarity, the global structure of the critical collapse spacetime was constructed with the pseudo-Fourier method. The rescaling factor Δ becomes an eigenvalue and was solved for with high precision. The scaling law of the black hole mass in supercritical collapse was recovered analytically via a perturbation approach in Ref. <cit.>. Critical collapse was analyzed with a renormalization group method in Refs. <cit.>. In Ref. <cit.>, with an explicit approximate solution, a true solution was shown to exist. In Ref. <cit.>, using a typical log-periodic formula for systems with discrete scale invariance, the authors obtained an approximate analytic solution for the spacetime near the center. Approximate analytic expressions for the metric functions and matter field near the central singularity in black hole formation were obtained in Refs. <cit.>. In Ref. <cit.>, the equations for the matter field in critical collapse were analyzed with certain terms in the equations being dropped.
Approximate expressions for certain combinations of the metric functions and derivatives of the scalar field were obtained. In this paper, considering the significance of analytic results, with numerical data, we obtain approximate analytic expressions for the metric functions and matter field in the large-radius region. We also investigate the dynamics in the central region. We find that due to the boundary conditions at the center, the equation of motion for the scalar field in the central region is reduced to the flat-spacetime form. This paper is organized as follows. In Sec. <ref>, we describe the methodology for simulating critical collapse. In Secs. <ref> and <ref>, we study the dynamics in the large-radius and central regions, respectively. The results are summarized in Sec. <ref>. § METHODOLOGY We consider critical collapse of a spherically symmetric massless scalar field ϕ. Take the polar coordinates, ds^2=-A(r,t)e^-2δ(r,t)dt^2+1/A(r,t)dr^2+r^2dΩ^2. Then the equations can be written as A_,r=1-A/r-4π rA(P^2+Q^2), δ_,r=-4 π r(P^2+Q^2), Q_,t=(A e^-δ P)_,r, P_,t=1/r^2(r^2 A e^-δ Q)_,r, A_,t=-8π rA^2e^-δPQ, where Q(r,t)≡ϕ_,r, and P(r,t)≡ A^-1 e^δϕ_,t. The (_,r) and (_,t) denote partial derivatives with respect to the coordinates r and t, respectively. The Misner-Sharp mass is defined as <cit.> m≡r/2(1-g^μνr_,μr_,ν)=r/2(1-A). The initial conditions for ϕ are set up as ϕ|_t_i=aexp[-(r/σ)^2] and ϕ_,t|_t_i=0. The regularity of Eq. (<ref>) at the center requires that A(r=0,t)=1. We choose δ(r=0,t)=0, which implies that the coordinate time is equal to its proper time at the center. In the simulation, we integrate Eqs. (<ref>)-(<ref>) by the fourth-order Runge-Kutta method. Mesh refinement algorithm is implemented. For details on the numerics, see Ref. <cit.>. § RESULT I: DYNAMICS IN THE LARGE-RADIUS REGION Rewrite the metric (<ref>) as ds^2=-α^2(r,t)dt^2+β^2(r,t)dr^2+r^2dΩ^2. For convenience, we adjust the time coordinate, such that t=0 when the naked singularity forms. Define the variables, X(r,t)≡√(2π)(r/β)ϕ_,r, Y(r,t)≡√(2π)(r/α)ϕ_,t, ρ≡ln r, T≡ln(-t), and u≡ t/r. Then the equations for ϕ (<ref>) and (<ref>) can be respectively rewritten as (β X)_,u=-α Y + (α Y)_,ρ -u(α Y)_,u, (β Y)_,u=α X + (α X)_,ρ-u(α X)_,u. In critical collapse, the period in terms of the coordinate time t is exponentially decreasing. Consequently, the metric functions and matter field in the late stage of collapse and large-radius region for which |t/r|≪ 1, appear to be frozen, rather than propagating <cit.>. In Ref. <cit.>, the authors made one ansatz that in this region the last terms in Eqs. (<ref>) and (<ref>) are negligible in comparison with the first ones. Moreover, treating α and β as constants, the authors obtained the following solutions: X≈ Bsin[ω(ρ-α u)-γ], Y≈ Bsin[ω(ρ-α u)], where 1+ω^-2=β^2, sinγ=(ωβ)^-1, and cosγ=-β^-1. The expressions (<ref>) match well with the numerical results. However, some treatments in the above have not been fully justified. In addition, although the approximate expressions for X and Y were obtained, the results for the metric functions and scalar field remain absent. We address such issues below. In Ref. <cit.>, some terms in Eqs. (<ref>) and (<ref>), -u(α Y)_,u, -u(α X)_,u, β_,uX, α_,ρY, β_,uY and α_,ρX, were dropped. Actually, as shown in Figs. <ref> and <ref>, in the large-radius region (r>10^-3), the absolute values of the terms, -uα_,uY, -uα_,uX, α_,ρY and α_,ρX, can sometimes be greater than the absolute values of other terms. 
On the other hand, the terms dropped approximately cancel. Consequently, the equations constructed by the remaining terms roughly hold, β X_,u≈-α Y + α Y_,ρ, β Y_,u≈α X + α X_,ρ. So from this point of view, the treatments in Ref. <cit.> are effectively valid. Motivated by the expressions for X and Y (<ref>) and the numerical results for ϕ, we find that the field ϕ admits the following approximate expression: ϕ(r,t)≈ C_1(1+C_2[H(r,t)])cos(ωln r + C_3[H(r,t)] + φ_0 ). The quantity [H(r,t)] has the following features: * For [H(r,t)], there is [H(r,t)]=H(r,t)≡ωα t/r=ω A^1/2e^-δt/r. * For ϕ_,t, there is ϕ_,t≈ C_1√(C_2^2+C_3^2)[H]_,tcos(ωln r +C_3[H]+φ_0+φ_1), where tanφ_1≡ C_3/C_2. Regarding the quantity H_,t(=ωα/r+ωα_,tt/r), the numerical results show that |ωα_,tt/r| is sometimes greater than ωα/r. However, comparing the expression (<ref>) with the numerical results for ϕ_,t, we always obtain [H]_,t≈ωα/r=ω A^1/2e^-δ1/r. This implies that in [H]_,t the contribution from ωα_,tt/r is negligible. This should be related to the fact that the respective reductions of Eqs. (<ref>) and (<ref>) to (<ref>) and (<ref>) are equivalent to treating α and β as constants. * The numerical results in Fig. <ref>(a) show that in the large-radius region, the equation of motion for ϕ (<ref>) is reduced to A^-1e^δϕ_,tt≈-(A^-1e^δ)_,tϕ_,t. Using Eq. (<ref>) and the numerical results of |δ_,t|≫ |A_,t|, we have ϕ_,tt≈-δ_,tϕ_,t. Combination of Eqs. (<ref>), (<ref>), (<ref>) and the numerical results of |δ_,t|≫ H_,t generates [H]_,tt≈ωα_,t/r≈ -δ_,t[H]_,t. Namely, the dynamical feature of α begins to take effect since [H]_,tt. * At the late stage of critical collapse, in the large-radius region for which |t/r|≪ 1, there are |H|≪|ωln r| and |H_,r|≪ 1/r. Therefore, with Eq. (<ref>), [H] mainly contributes to the temporal derivatives of ϕ, rather than to the field ϕ and its spatial derivatives. The numerical results show that C_1≈0.058, C_2^2+C_3^2≈ 1, and φ_1≈ 1.08. As shown in Figs. <ref>(a) and <ref>(b), the expressions for ϕ (<ref>), ϕ_,t (<ref>) and ϕ_,tt (<ref>) agree well with the numerical results. With Eqs. (<ref>) and (<ref>), one can rewrite Eq. (<ref>) as 1/A∂ A/∂ t =-8π rϕ_,tϕ_,r ≈ C_4[H]_,t[sin(2ωln r + 2C_3[H] + 2φ_0 + φ_1 ) - sinφ_1], where C_4=4πωC_1^2√(C_2^2+C_3^2). Via integration, we have ln A≈ -C_4/2C_3cos(2ωln r + 2C_3[H] + 2φ_0 + φ_1 ) - C_4sinφ_1[H] + C_5. Then using Eq. (<ref>) and the fact that |H|≪ 1, we obtain m/r≈ C_6cos(2ωln r+2C_3[H]+2φ_0+φ_1)+C_7, where C_6≈ e^C_5C_4/(4C_3)≈ e^C_5πωC_1^2√(C_2^2+C_3^2)/C_3, and C_7=(1/2)(1-e^C_5). As shown in Fig. <ref>(c), the expression for m/r (<ref>) matches well with the numerical results. The fitting results are C_6=0.013360± 0.000009≈ 1/75, and C_7≈ 0.065480± 0.000007≈ 1/15. With Eq. (<ref>), one can rewrite Eqs. (<ref>) and (<ref>) as m_,r=2π r^2 A(P^2+Q^2), rδ_,r=∂δ/∂ln r=-2/1-2m/rm_,r. Then the solution for δ can be expressed as δ≈ C_8ln r +ln(1-2m/r) +C_9sin(2ωln r+2C_3[H]+2φ_0+φ_1)+δ_0(t), where C_8≈-2C_7/(1-2C_7)-2C_6^2, and C_9≈-(C_6+8C_6C_7)/ω. As shown in Fig. <ref>(d), the expression for δ (<ref>) match well with the numerical results. In Ref. <cit.>, the quantities α and β were treated as constants. The approximate expressions for X and Y obtained in this way agree well with the numerical results. Then it was stated that in this circumstance the spacetime is effectively flat. Actually, X and Y are combinations of the metric functions and derivatives of the scalar field, rather than the scalar field. 
In order to check whether the spacetime is effectively flat, it may be more appropriate to investigate directly the behavior of the equation of motion for the scalar field (<ref>). As shown in Fig. <ref>(a), in the large-radius region, Eq. (<ref>) is reduced to Eq. (<ref>), which is clearly different from the flat-spacetime form, ϕ_,tt=r^-2(r^2ϕ_,r)_,r. So the spacetime in this region is not effectively flat. § RESULT II: DYNAMICS IN THE CENTRAL REGION As shown in Fig. <ref>(a), in the central region, the absolute values of the terms of (A^-1e^δ)_,tϕ_,t and (A^-1e^δ)_,rϕ_,r in Eq. (<ref>) are much less than the absolute values of A^-1e^δϕ_,tt, Ae^-δϕ_,rr, and (2/r)Ae^-δϕ_,r. Moreover, in this region, A≈1, and δ≈ 0. Consequently, Eq. (<ref>) is reduced to the flat-spacetime form, ϕ_,tt≈1/r^2(r^2ϕ_,r)_,r. Regarding Eq. (<ref>), we make the following discussions: * Equation (<ref>) implies that in the central region, the scalar field ϕ evolves almost as in flat spacetime, not directly feeling the gravitational effects. On the other hand, as shown in Fig. <ref>, in the transition region locating between the central and large-radius ones, the gravitational effects are important on the dynamics of the scalar field. Therefore, due to the connection between the central and transition regions, in the central region, gravity affects the evolution of the scalar field indirectly. * Besides critical collapse, we also check the evolution for the scalar field in another two types of collapse (dispersion and black hole formation), and obtain similar results as (<ref>). * The result (<ref>) is closely related to the asymptotic behaviors of the metric functions and scalar field near the center. Under the smoothness requirement at the center, the metric functions and scalar field have the following power series expansions near the center <cit.>: A≈ 1+A_2(t)r^2, dddδ≈δ_2(t)r^2, dddϕ≈ϕ_0(t) + ϕ_2(t)r^2. With Eqs. (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we obtain the following asymptotic expressions: [ A_,t≈ -16πϕ_,tϕ_2r^2, δ_,t≈ -4πϕ_,ttϕ_,tr^2,; ; ϕ_,t≈ϕ_0'(t)+ϕ_2'(t)r^2, A_,r≈ -8π/3(ϕ_,t)^2 r,; ; δ_,r≈ -4π(ϕ_,t)^2 r, ϕ_,r≈ 2ϕ_2(t)r,; ] which are also shown in Fig. <ref>. With Eqs. (<ref>) and (<ref>), one can straightforwardly simplify Eq. (<ref>) to (<ref>). * It is known that in critical collapse, the Ricci curvature scalar R in the central region is very high and will diverge eventually. This fact is not in contradiction with the result (<ref>). For the metric (<ref>), the Ricci curvature scalar can be written as R= 4Aδ_,r/r - 4A_,r/r + 2Aδ_,rr + 2(1-A)/r^2 - A_,rr - A_,tte^2δ/A^2 + 3A_,rδ_,r - 2A(δ_,r)^2 + 2(A_,t)^2 e^2δ/A^3 - A_,tδ_,te^2δ/A^2. With Eqs. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we obtain the asymptotic expressions for all the terms on the right-hand side of Eq. (<ref>): 4Aδ_,r/r≈ -2D,- 4A_,r/r≈4/3D,2Aδ_,rr≈ -D, 2(1-A)/r^2≈- A_,rr≈D/3, D=8π(ϕ_,t)^2. -A_,tte^2δ/A^2≈ 16πϕ_,ttϕ_2r^2, 3A_,rδ_,r≈ 32π^2 (ϕ_,t)^4 r^2, -2A(δ_,r)^2≈ -32π^2 (ϕ_,t)^4 r^2, 2(A_,t)^2 e^2δ/A^3≈ 512π^2 (ϕ_,t)^2 (ϕ_2)^2 r^4, - A_,tδ_,te^2δ/A^2≈ -64π^2 ϕ_,tt(ϕ_,t)^2 ϕ_2 r^4. The first five terms are dominant and have the same order of magnitude as 8π(ϕ_,t)^2 which will diverge eventually; and the rest terms are proportional to r^2 or r^4 and are negligible. As shown in Fig. <ref>(b), the transition region between the central and large-radius regions can be expressed as r∈ [r_1, r_2]. At r=r_1, there is |C_3H|∼|ωln r|; and at r=r_2, there is |C_3H_,r|∼ω/r. 
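As a consistency check of the asymptotic expressions above, the leading central behaviour A_,r ≈ -(8π/3)(ϕ_,t)^2 r can also be verified symbolically from the power-series ansatz (<ref>). The following SymPy sketch is our own illustration and is not part of the paper's numerical code.

# Our own illustrative check: insert the central expansions
#   A = 1 + A2(t) r^2,  delta = d2(t) r^2,  phi = p0(t) + p2(t) r^2
# into the constraint A_{,r} = (1 - A)/r - 4 pi r A (P^2 + Q^2), with
# P = A^{-1} e^{delta} phi_{,t} and Q = phi_{,r}, and match the O(r) term.
# The result A2 = -(4 pi / 3) p0'(t)^2 reproduces
# A_{,r} ~ -(8 pi / 3) (phi_{,t})^2 r near the center.
import sympy as sp

r, t = sp.symbols('r t', positive=True)
A2, d2, p0, p2 = (sp.Function(name)(t) for name in ('A2', 'd2', 'p0', 'p2'))

A = 1 + A2 * r**2
delta = d2 * r**2
phi = p0 + p2 * r**2

P = sp.exp(delta) / A * sp.diff(phi, t)
Q = sp.diff(phi, r)

constraint = sp.diff(A, r) - ((1 - A) / r - 4 * sp.pi * r * A * (P**2 + Q**2))
leading = sp.expand(sp.series(constraint, r, 0, 2).removeO())
solution = sp.solve(sp.Eq(leading.coeff(r, 1), 0), A2)
print(solution)   # expect [-4*pi*Derivative(p0(t), t)**2/3]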
§ SUMMARY Analytic solutions are important for understanding the dynamics of gravitational collapse. Due to the complexity of the Einstein equations, seeking the analytic solutions to the equations has been a very difficult task. In the successful circumstances, the equations are usually reduced to ODEs. In critical collapse, the equations remain PDEs, while in the large-radius region and late stage of the evolution, the spatial and temporal contributions are separate to some extent. This enables us to obtain approximate analytic expressions for the metric functions and matter field. The boundary conditions at the center play a key role on the dynamics in the central region. In this region, due to the boundary conditions, in the equation of motion for the scalar field, the terms related to the gravitational effects are negligible, such that the equation is reduced to the flat-spacetime form. On the other hand, in the transition region, gravity is important for the evolution of the scalar field. Consequently, due to the connection between the central and transition regions, the scalar field in the central region feels the gravitational effects indirectly. § ACKNOWLEDGMENTS The authors are very thankful to Xiao-Kai He, Junbin Li and Cheng-Yong Zhang for the helpful discussions. JQG is supported by Shandong Province Natural Science Foundation under grant No. ZR2019MA068. YH and CGS are supported by the National Natural Science Foundation of China (Grant No. 11925503). 99Choptuik:1992jv M. W. Choptuik, Phys. Rev. Lett. 70, 9 (1993). Gundlach:2007gc C. Gundlach and J. M. Martin-Garcia, Living Rev. Rel. 10, 5 (2007). [[gr-qc]0711.4620] Bizon:2011gg P. Bizon and A. Rostworowski, Phys. Rev. Lett. 107, 031102 (2011). Deppe:2018uye N. Deppe, L. E. Kidder, M. A. Scheel and S. A. Teukolsky, Phys. Rev. D 99, 024018 (2019). [[gr-qc]1802.08682] Baumgarte:2019fai T. W. Baumgarte, C. Gundlach and D. Hilditch, Phys. Rev. Lett. 123, 171103 (2019). [[gr-qc]1909.00850] Kelson-Packer:2020hbb C. Kelson-Packer and J. Belz, Phys. Rev. D 102, 084050 (2020). [[gr-qc]2008.06774] Mendoza:2021nwq M. F. P. Mendoza and T. W. Baumgarte, Phys. Rev. D 103, 124048 (2021). [[gr-qc]2104.03980] Zhang:2021nnn C.-Y. Zhang, Q. Chen, Y. Liu, W.-K. Luo, Y. Tian and B. Wang, Phys. Rev. Lett. 128, 161105 (2022). [[gr-qc]2112.07455] Gundlach:1995kd C. Gundlach, Phys. Rev. Lett. 75, 3214 (1995). [gr-qc/9507054]. Gundlach:1996eg C. Gundlach, Phys. Rev. D 55, 695 (1997). [gr-qc/9604019] Martin-Garcia:2003xgm J. M. Martin-Garcia and C. Gundlach, Phys. Rev. D 68, 024011 (2003). [gr-qc/0304070] Koike:1995jm T. Koike, T. Hara and S. Adachi, Phys. Rev. Lett. 74, 5170 (1995). [gr-qc/9503007] Hara:1996mc T. Hara, T. Koike and S. Adachi, gr-qc/9607010Reiterer:2012hnr M. Reiterer and E. Trubowitz, Commun. Math. Phys. 368, 143 (2019). [[gr-qc]1203.3766] Guo:2018yyt J.-Q. Guo and H. Zhang, Eur. Phys. J. C 79, 625 (2019). [[gr-qc]1808.09826] Guo:2013dha J.-Q. Guo, D. Wang and A. V. Frolov, Phys. Rev. D 90, 024017 (2014). [[gr-qc]1312.4625] Guo:2020jfa J.-Q. Guo, J. Phys. Comm. 5, 075015 (2021). [[gr-qc]2011.14853] Price:1996sk R. H. Price and J. Pullin, Phys. Rev. D 54, 3792 (1996). [gr-qc/9601009] Misner_1964 C. W. Misner and D. H. Sharp, Phys. Rev. 136, B571 (1964). Zhang:2016kzg C.-Y. Zhang, Z.-Y. Tang and B. Wang, Phys. Rev. D 94, 104013 (2016). [[gr-qc]1608.04836] Choptuik_workshop_1993 M. W. 
Choptuik, Critical Behaviour in Scalar Field Collapse, in Proceedings of a NATO Advanced Research Workshop on Deterministic Chaos in General Relativity, Springer Science+Business Media, LLC. Editors: D. Hobill and A. Burd and A. Coley, 155-175, 1993. Choptuik:1997mq M. W. Choptuik, The (Unstable) threshold of black hole formation, 15th International Conference on General Relativity and Gravitation (GR15), 67–86, 1997.
http://arxiv.org/abs/2307.04387v1
20230710074833
Classification of metric fibrations
[ "Yasuhiko Asao" ]
math.AT
[ "math.AT", "math.CT", "math.MG" ]
In this paper, we study `a fibration of metric spaces' that was originally introduced by Leinster (<cit.>) in the study of magnitude and called a metric fibration. He showed that the magnitude of a metric fibration splits into the product of those of the fiber and the base, which is analogous to the behavior of the Euler characteristic for topological fiber bundles. His idea and our approach are based on Lawvere's suggestion of viewing a metric space as an enriched category (<cit.>). Actually, the metric fibration turns out to be the restriction of the enriched Grothendieck fibrations (<cit.>) to metric spaces (<cit.>). We give a complete classification of metric fibrations by several means, which is parallel to that of topological fiber bundles. That is, the classification of metric fibrations is reduced to that of `principal fibrations', which is done by the `1-Čech cohomology' in an appropriate sense. Here we introduce the notion of torsors in the category of metric spaces, and the discussions are analogous to sheaf theory. Further, we can define the `fundamental group π^m_1(X)' of a metric space X, which is a group object in metric spaces, such that the conjugation classes of homomorphisms π^m_1(X) → G, for a metric group G, correspond to the isomorphism classes of `principal G-fibrations' over X. Namely, they are classified like topological covering spaces. § INTRODUCTION The idea of metric fibration was first introduced by Leinster in the study of magnitude (<cit.>). The magnitude theory that he coined can be considered as a promotion of Lawvere's suggestion of viewing a metric space as a [0, ∞]-enriched category. The magnitude of a metric space is defined as the `Euler characteristic of enriched categories'. In fact, he showed that the magnitude of a metric fibration splits into the product of those of the fiber and the base (Theorem 2.3.11 of <cit.>), which is analogous to the case of topological fiber bundles. Later, the author (<cit.>) pointed out that it is actually a restriction of the enriched Grothendieck fibration (<cit.>) to metric spaces, by dealing with small categories and metric spaces from a unified viewpoint, namely as filtered set enriched categories. By this approach, we can expect to obtain novel ideas on one side from notions that are well studied on the other side. As an example, the following Figure 1 shows one of the simplest non-trivial metric fibrations. Note that we consider connected graphs as metric spaces by taking the shortest path metric (see also Proposition <ref>). Both graphs are metric fibrations over the complete graph K_3 with the fiber K_2, as shown in Example 5.29 of <cit.>. Further, they have the same magnitude, as pointed out in Example 3.7 of <cit.>. In Proposition 5.30 of <cit.>, it is shown that the right one is the only non-trivial metric fibration over K_3 with the fiber K_2. Here, `trivial' means that it is the cartesian product of graphs. On the other hand, any metric fibration over a four-cycle graph C_4, or more generally an even cycle graph, is shown to be trivial in the same proposition. In this paper, we give a complete classification of metric fibrations by several means, which is parallel to that of topological fiber bundles.
Namely, we define `principal fibrations', `fundamental groups' and `a 1-Čech cohomology' for metric spaces, and obtain the equivalence between categories of these objects. Roughly speaking, we obtain an analogy of the following correspondence in the case of topological fiber bundles with a discrete structure group. Fiber bundles over X with structure group G@<->[d] Principal G-bundles over X (G-torsors)@<->[d] [X, BG] ≅(π_1(X), G)/ conjugation@<->[d] H^1(X, G) We explain more in detail in the following. First recall that any usual Grothendieck fibration over a small category C can be obtained from a lax functor C, which is called the Grothendieck construction (<cit.>). In <cit.>, it is shown that any metric fibration over a metric space X can be obtained from a `lax functor' X that is called metric action (Definition <ref>). Here is the category of metric spaces and Lipschitz maps. We can consider the Grothendieck and the metric fibration as the definition of fibrations via `the lifting property', while the lax functor and the metric action is the one via `the transformation functions'. More precisely, we have the following. The Grothendieck construction gives a category equivalence _X ≃_X, where we denote the category of metric actions X by _X and the category of metric fibrations over X by _X (Definitions <ref>, <ref>). We can define a subcategory _X^ of _X that consists of `principal -fibrations' (Definition <ref>). We call it a category of -torsors. On the other hand, we can also define a subcategory _X^ of _X^ that is the counterpart of _X^ (Definition <ref>). The category _X^ consists of a metric action X that takes a group , not just a metric space, as the value. Then we have the following. The Grothendieck construction gives a category equivalence _X^≃_X^. Here, a group is not just a group but is a group object of , which we call a metric group (Definition <ref>). As an example of a metric group, we construct the fundamental group π_1^m(X) of a metric space X (Definition <ref>). We also define a category (π_1^m(X), ) of homomorphisms π_1^m(X), where a morphism between homomorphisms is defined as a conjugation relation (Definition <ref>). Then we have the following. We have a category equivalence (π^m_1(X, x_0), ) ≃^_X. As a corollary, we reprove Proposition 5.30 of <cit.> in the following form. We note that the notion of a metric group is equivalent to that of a `normed group' (Proposition <ref>). For a metric group , we denote the corresponding norm of an element g ∈ by |g| ∈_≥ 0. Let C_n be an undirected n-cycle graph. Then we have π^m_1(C_n) ≅ with |1| = 1 n : odd, 0 n : even. Hence we have that _C_n^≃ (, ) n : odd, 0 n : even, for any metric group , which implies that there is only a trivial metric fibration over C_2n and that there is at most one non-trivial metric fibration over C_2n+1. Now, similarly to the topological case, we can define an `associated bundle construction' from a torsor and a metric space Y (Corollary <ref>). This construction gives the following. Suppose that Y is a bounded metric space. Then we have a category equivalence _X^ Y≃ core_X^Y, where _X^Y is the full subcategory of _X that consists of metric fibrations with the fiber Y (Definition <ref>), and we denote the core of a category by core (Definition <ref> (4)). Here, we equip the group Y of isometries on Y with a metric group structure by d_ Y(f, g) = sup_y ∈ Yd_Y(fy, gy) (Example <ref>). However, we should suppose that Y is a bounded metric space so that d_ Y is indeed a distance function. 
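For a finite metric space Y, the isometry group and the sup-metric on it can be computed by brute force. The following Python sketch is our own illustration (all function names are ours) of the definition d(f, g) = sup_{y ∈ Y} d_Y(fy, gy); it also indicates why boundedness of Y is needed in general.

# Our own illustration: isometries of a finite metric space Y and the
# sup-metric d(f, g) = max_y d_Y(f(y), g(y)) on them.  For an unbounded Y the
# supremum may be infinite, which is why the text assumes Y bounded here and
# passes to extended metric groups (allowing the value infinity) below.
from itertools import permutations

def isometries(points, d):
    """All permutations of `points` preserving the distance function d."""
    pts = list(points)
    for perm in permutations(pts):
        f = dict(zip(pts, perm))
        if all(d[(x, y)] == d[(f[x], f[y])] for x in pts for y in pts):
            yield f

def sup_metric(f, g, d):
    return max(d[(f[y], g[y])] for y in f)

# Example: Y = vertices of the 4-cycle C_4 with the shortest-path metric.
Y = [0, 1, 2, 3]
d = {(x, y): min((x - y) % 4, (y - x) % 4) for x in Y for y in Y}

iso = list(isometries(Y, d))          # dihedral group of order 8
dist = [[sup_metric(f, g, d) for g in iso] for f in iso]
print(len(iso), dist[0])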
For the case of general metric fibrations, we should extend our arguments concerning extended metric group that allows ∞ as values of a distance function (Definition <ref>), and we obtain an essentially same but extended result (Proposition <ref>). Finally, we define a `1-Čech cohomology' ^1(X, ), which is a category, of a -torsor X (Definition <ref>). This is an analogy from the Čech cohomology constructed from the local sections of a principal bundle. Similarly to the topological case, we can construct a cocycle from a family of local sections (Proposition <ref>), and conversely we can construct a -torsor by pasting copies of 's along a cocycle (Proposition <ref>). Then we have the following from this correspondences. We have a category equivalence ^1(X; ) ≃^_X. §.§.§ Acknowledgements The author is grateful to Luigi Caputi for fruitful and helpful comments and feedbacks on the first draft of the paper. He also would like to thank Masahiko Yoshinaga for valuable discussions and comments. § CONVENTIONS In this section, we prepare terms for categories, graphs, weighted graphs and metric spaces that are well-known but may not be commonly used. §.§ Categories In this article, we suppose that categories are locally small. We denote the object class of a category C by C, and the set of all morphisms from a to b by C(a, b) for any objects a, b ∈ C. We also denote the class of all morphisms in C by C. Let C and D be categories, and F : C D be a functor. * We say that F is faithful if the map F : C(a, b) D(Fa, Fb) is injective for any objects a, b ∈ C. We say that F is full if the map F : C(a, b) D(Fa, Fb) is surjective for any objects a, b ∈ C. We also say that F is fully faithful if it is faithful and full. * We say that F is split essentially surjective if there is a family of isomorpshisms {Fc ≅ d | c ∈ C}_d ∈ D. * We say that F is a category equivalence if there exists a functor G : D C and natural isomorpshisms GF ≅ id_C and FG ≅ id_D. When there exists a category equivalence C D, we say that C and D are equivalent. * We define a groupoid C by C = C and C(a, b) = {f ∈ C(a, b) |f is an isomorphism} for any a, b ∈ C. The following are standard. If a functor F : C D is fully faithful and split essentially surjective, then it is a category equivalence. A category equivalence F : C D induces a category equivalence F : C D. For a classification of objects of a category, we often want to consider `isomorphism classes of objects' and compare it with another category. However, in general, we can't do that since the class of objects is not necessarily a set. Instead, we consider a category equivalence C D that implies a bijection between isomorphism classes of objects if they are small. §.§ Metric spaces * A quasi metric space (X, d) is a set X equipped with a function d : X _≥ 0 satisfying that * d(x, x) = 0, * d(x, x') = d(x', x), * d(x, x') + d(x', x”) ≥ d(x, x”), for any x, x', x”∈ X. * A Lipschitz map f : X Y between quasi metric spaces X and Y is a map satisfying that d_Y(fx, fx') ≤ d_X(x, x') for any x, x' ∈ X. We denote the category of quasi metric spaces and Lipschitz maps by . We call an isomorphism in an isometry. * A metric space (X, d) is a quasi metric space satisfying that * d(x, x') = 0 if and only if x = x'. We denote the full subcategory of that consists of metric spaces by . * A graph G is a pair of sets (V(G), E(G)) such that E(G) ⊂{e ∈ 2^V(G)|# e = 2}, where we denote the cardinality of a set by #. We call an element of V(G) a vertex, and an element of E(G) an edge. 
A graph homomorphism f : G H between graphs G and H is a map f : V(G) V(H) such that fe ∈ E(H) or # fe = 1 for any e ∈ E(G). We denote the category of graphs and graph homomorphisms by . * A path on a graph G is a tuple (x_0, …, x_n) ∈ V(G)^n+1 for some n≥ 0 such that {x_i, x_i+1}∈ E(G) for any 0≤ i ≤ n-1. A connected graph G is a graph such that there exists a path (x_0, …, x_n) with x_0 = x and x_n = x' for any x, x' ∈ V(G). We denote the full subcategory of that consists of connected graphs by _ conn. * A weighted graph (G, w_G) is a graph G equipped with a function w_G : E(G) _≥ 0. A weighted graph homomorphism f : G H between weighted graphs G and H is a graph homomorphism such that w_H(fe) ≤ w_G(e) for any e ∈ E(G), where we abuse that w_H(fe) = 0 if # fe = 1. We denote the category of weighted graphs and weighted graph homomorphisms by . We also denote the full subcategory of that consists of weighted graphs (G, w_G) such that the graph G is connected by _ conn. We define functors and _ conn_ conn by forgetting additional structures. We also define the functor _ conn that sends a quasi metric space (X, d) to a weighted graph (X, w_X) defined by V(X) = X, E(X) = {e ∈ 2^X|# e = 2} and w_X {x, x'} = d(x, x'). The above functors have left adjoints. We describe each functor F in the following, and they are the left adjoint functors of each functor G of the above since the unit and the counit give that FGF = F and GFG = G. * We define a functor _ conn_ conn by sending a connected graph to a weighted graph with w = 0. * We define a functor _ conn by sending a weighted graph (G, w_G) to a quasi metric space (V(G), d_G) defined by d_G(x, x') = inf∪_n≥ 0{∑_i=0^n-1w_G{x_i, x_i+1}| (x = x_0, …, x_n = x') is a path on G}. * We define a functor by sending a quasi metric space (X, d) to a metric space ( KQX, d) defined as follows. We define an equivalence relation ∼ on X by x ∼ x' if and only if d(x, x') = 0. We also define a function KQX := X/∼_≥ 0 by d([x], [x']) = d(x, x'). For a quasi metric space X, we call the metric space KQX the Kolmogorov quotient of X. * For quasi metric spaces (X, d_X) and (Y, d_Y), we define a metric space called the L^1-product (X× Y, d_X× Y) by d_X× Y((x, y), (x', y')) = d_X(x, x') + d_Y(y, y') for any x, x' ∈ X and y, y' ∈ Y. * For graphs G and H, we define a graph called the cartesian product G× H by V(G× H) = V(G)× V(H), and {(x, y), (x', y')}∈ E(G× H) if and only if one of the following holds : * x = x' and {y, y'}∈ E(H), * {x, x'}∈ E(G) and y = y', for any x, x' ∈ V(G) and y, y' ∈ V(H). * For weighted graphs (G, w_G) and (H, w_H), we define a weighted graph (G× H, w_G× H) by w_G× H{(x, y), (x', y')} = w_G{x, x'} + w_H{y, y'} for any {(x, y), (x', y')}∈ E(G× H), where G× H is the cartesian product of graphs and we abuse that w_G{x, x} = w_H{y, y} = 0. These products make each category a symmetric monoidal category. The functors _ conn_ conn and their left adjoints are strong monoidal except for the functor _ conn that is lax monoidal. For the functors and _ conn_ conn, it is obvious since they are inclusions. It is also obvious for the functor _ conn_ conn by the definition. For the functor , we define a map KQ(X× Y) KQX× KQY by [(x, y)] ↦ ([x], [y]). Then it is obviously natural and is an isometry since we have that [(x, y)]∼ [(x', y')] if and only if [x]∼ [x'] and [y]∼ [y']. 
For the functor F : _ conn, the identity on the set F(G× H) = F(G)× F(H) is an isometry since d_w_G× H((x, y), (x', y')) = inf∪_n≥ 0{∑_i=0^n-1w_G× H{(x_i, y_i), (x_i+1, y_i+1)}| ((x, y) = (x_0, y_0), …, (x_n, y_n) = (x', y')) is a path on G× H} = inf∪_n≥ 0{∑_i=0^n-1w_G{x_i, x_i+1} + w_H{y_i, y_i+1}| ((x, y) = (x_0, y_0), …, (x_n, y_n) = (x', y')) is a path on G× H} = inf∪_n≥ 0{∑_i=0^n-1w_G{x_i, x_i+1}| (x = x_0, …, x_n = x')} + inf∪_m≥ 0{∑_i=0^m-1 w_H{y_i, y_i+1}| (y = y_0, …, y_m = y')} = d_w_G(x, x') + d_w_H(y, y') = d_F(G)× F(H)((x, y), (x', y')), for any x, x' ∈ V(G) and y, y' ∈ V(H). It is obviously natural. Finally, for the functor G : _ conn, the identity on the set G(X)× G(Y) = G(X× Y) is a weighted graph homomorphism since it is an inclusion of graphs and preserves weightings. It is obviously natural. This completes the proof. * An extended quasi metric space is a set X equipped with a function d : X [0, ∞] that satisfies the same conditions for quasi metric spaces. Namely, it is a quasi metric space admitting ∞ as a value of distance. A Lipschitz map between extended quasi metric spaces is a distance non-increasing map. We denote the category of extended quasi metric spaces and Lipschitz maps by . We similarly define extended metric spaces and we denote the full subcategory of that consists of them by . * For extended quasi metric spaces X and Y, we define the L^1-product of them similarly to that of quasi metric spaces. It makes the category a symmetric monoidal category. * We define functors and by forgetting additional structures. We also define the functor similarly to the functor _ conn except that {x, x'} does not span an edge for x, x' ∈ X with d(x, x') = ∞. The following is immediate. * The functors have left adjonts. Further, all of these functors are commutative with the inclusions , , _ conn and _ conn. * The functors of (1) are strong monoidal except for the functor that is lax monoidal. § _X ≃_X In this section, we introduce two notions, the metric action and the metric fibration, and show the equivalence between them. The notion of metric fibation is originally introduced by Leinster (<cit.>) in the study of magnitude. The other was introduced by the author in <cit.>, which is the counterpart of lax functors in category theory, while the metric fibration is a generalization of the Grothendieck fibration. As written in the introduction, we can consider the Grothendieck (or metric) fibration as the definition of fibrations via `the lifting property', while the lax functor is the one via `the transformation functions'. Let X be a metric space. * A metric action F : X consists of metric spaces Fx ∈ for any x ∈ X and isometries F_xx' : Fx Fx' for any x, x' ∈ X satisfying the following for any x, x', x”∈ X : * F_xx = id_Fx and F_x'x = F_xx'^-1, * d_Fx”(F_x'x”F_xx'a, F_xx”a) ≤ d_X(x, x') + d_X(x', x”) - d_X(x, x”) for any a ∈ Fx. * A metric transformation θ : F ⟹ G consists of Lipschitz maps θ_x : Fx Gx for any x ∈ X satisfying that G_xx'θ_x = θ_x'F_xx' for any x, x' ∈ X. We can define the composition of metric transformations θ and θ' by (θ'θ)_x = θ'_xθ_x. We denote the category of metric actions X and metric transformations by _X. * Let π : E X be a Lipschitz map between metric spaces. We say that π is a metric fibration over X if it satisfies the following : For any ∈ E and x ∈ X, there uniquely exists _x ∈π^-1x such that * d_E(, _x) = d_X(π, x), * d_E(, ') = d_E(, _x) + d_E(_x, ') for any ' ∈π^-1x. We call the point _x the lift of x along . 
* For metric fibrations π : E X and π' : E' X, a morphism φ : ππ' is a Lipschitz map φ : E E' such that π'φ = π. We denote the category of metric fibrations over X and morphisms by _X. For a product of metric spaces E = X× Y, the projection X× Y X is a metric fibration. We call it a trivial metric fibration. Let π : E X be a metric fibration, and x, x' ∈ X. Then the correspondence π^-1x ∋ a ↦ a_x'∈π^-1x' is an isometry, where we equip the sets π^-1x and π^-1x' with the induced metric from E. Note that the statement is obviously true if E = ∅. We suppose that E ≠∅ in the following, and then any fiber π^-1x is non-empty. For a ∈π^-1x, we have d_E(a_x', a) = d_E(a_x', (a_x')_x) + d_E((a_x')_x, a) = d_X(x', x) + d_E((a_x')_x, a). We also have d_E(a, a_x') = d_X(x, x'). Hence we obtain that d_E((a_x')_x, a) = 0, hence (a_x')_x = a for any x, x' ∈ X. This implies that the correspondence is a bijection. Further, we have d_E(a, b_x') = d_E(a, a_x') + d_E(a_x', b_x') = d_X(x, x') + d_E(a_x', b_x') and d_E(b_x', a) = d_E(b_x', b) + d_E(b, a) = d_X(x', x) + d_E(b, a) for any a, b ∈π^-1x. Hence we obtain that d_E(a, b) = d_E(a_x', b_x') for any x, x' ∈ X and a, b ∈π^-1x, which implies that the correspondence is an isometry. This completes the proof. Let φ : ππ' be a morphism of metric fibrations. For any x, x' ∈ X and a ∈π^-1x, we have (φ a)_x' = φ a_x'. We have d_E'((φ a)_x', φ a_x') = d_E'(φ a, φ a_x') - d_X(x, x') ≤ d_E(a, a_x') - d_X(x, x') = 0, hence we obtain that (φ a)_x' = φ a_x'. This completes the proof. Let F : X be a metric action. We define a metric fibration π_F : E(F) X as follows : * E(F) = {(x, a) | a ∈ Fx, x ∈ X}, * d_E(F)((x, a), (x', b)) = d_X(x, x') + d_Fx'(F_xx'a, b), * π_F(x, a) = x. We call the above construction the Grothendieck construction. The Grothendieck construction gives a functor E : _X _X. Let θ : F ⟹ G be a metric transformation. Then we construct Lipschitz maps φ_θ : E(F) E(G) by φ_θ (x, a) = (x, θ_x a) for any x ∈ X and a ∈ Fx. It is checked that φ_θ is a Lipschitz map as follows : d_E(G)(φ_θ (x, a), φ_θ (x', b)) = d_E(G)((x, θ_x a), (x', θ_x' b)) = d_X(x, x') + d_Gx'(G_xx'θ_x a, θ_x'b) = d_X(x, x') + d_Gx'(θ_x' F_xx' a, θ_x'b) ≤ d_X(x, x') + d_Fx'( F_xx' a, b) = d_E(F)((x, a), (x', b)). Next we show that the correspondence θ↦φ_θ is functorial, that is, we have φ_ id_F = id_E(F) and φ_θ'θ = φ_θ'φ_θ for any metric transformations θ : F ⟹ G and θ' : G ⟹ H. The former is obvious and the latter is checked as follows : φ_θ'θ(x, a) = (x, (θ'θ)_x a) = (x, θ'_xθ_x a) = φ_θ'φ_θ(x, a). Finally, φ_θ is obviously a morphism of the metric fibration. This completes the proof. We have a functor F : _X _X. Let π : E X be a metric fibration. We define a metric action F_π : X by F_π x = π^-1x and (F_π)_xx'a = a_x' for any x, x' ∈ X and a∈π^-1x, where we equip the set π^-1x with the induced metric from E. It follows that (F_π)_xx = id_F_π x by the uniqueness of the lifts, and that (F_π)_xx' defines an isometry F_π x F_π x' with (F_π)_xx'^-1 = (F_π)_x'x by Lemma <ref>. Further, we have that d_F_π x”((F_π)_x'x”(F_π)_xx'a, (F_π)_xx”a) = d_F_π x”((a_x')_x”, a_x”) = d_E(a, (a_x')_x”) - d_X(x, x”) ≤ d_E(a, a_x') + d_E(a_x', (a_x')_x”) - d_X(x, x”) = d_X(x, x') + d_X(x', x”) - d_X(x, x”), for any x, x', x”∈ X and a ∈ F_π x. Hence F_π certainly defines a metric action X. Next, let φ : ππ' be a morphism of metric fibrations. We define a metric transformation θ_φ : F_π⟹ F_π' by (θ_φ)_x a = φ a for any x ∈ X and a ∈ F_πx. 
Then it satisfies that (F_π')_xx'(θ_φ)_x a = (F_π')_xx'φ a = (φ a)_x' = φ a_x' = (θ_φ)_x'(F_π)_xx', where the third line follows from Lemma <ref>, hence θ_φ certainly defines a metric transformation F_π⟹ F_π'. Note that we have θ_ id_π = id_F_π and (θ_ψφ)_xa = ψφ a = (θ_ψ)_x(θ_φ)_xa for morphisms φ and ψ, which implies the functoriality of F. This completes the proof. The following is the counterpart of the correspondence between lax functors and the Grothendieck fibrations (B1 <cit.>), and enhances Corollary 5.26 of <cit.>. The Grothendieck construction functor E : _X _X is a category equivalence. We show that FE ≅ id__X and EF ≅ id__X. It is immediate to verify FE ≅ id__X by the definition. We show that EF_π≅π for a metric fibration π : E X. Note that EF_π is a metric space consists of points (x, a) with x ∈ X and a ∈π^-1x, and we have d_EF_π((x, a), (x', a')) = d_X(x, x') + d_π^-1x'(a_x', a'). We define a map f : EF_π E by f(x, a) = a for any x ∈ X and a ∈π^-1x. Then it is obviously an isometry and preserves fibers, hence an isomorphism of metric fibrations. The naturality of this isomorphism is obvious. This completes the proof. Note that the trivial metric fibration corresponds to the constant metric action, that is F_xx'= id for any x, x' ∈ X. § THE FUNDAMENTAL METRIC GROUP OF A METRIC SPACE In this section, we give a concise introduction to metric groups. We also give a definition of metric fundamental group, which plays a role of π_1 for metric space in the classification of metric fibrations. §.§ Metric groups * A metric group is a group object in . That is, a metric space equipped with Lipschitz maps · : ×, (-)^-1 : and a point e ∈ satisfying the suitable conditions of groups. * For metric groups 𝒢 and ℋ, a homomorphism from to $̋ is a Lipschitz map$̋ that commutes with the group structure. * We denote the category of metric groups and homomorphisms by . Let (, d) be a metric group. Then * we have d(kg, kh) = d(g, h) = d(gk, hk) for any g, h, k ∈. * we have d(g, h) = d(g^-1, h^-1) for any g, h ∈. * Since the map : g kg is a Lipschitz map for any k ∈, we have d(kg, kh) ≤ d(g, h) and d(k^-1(kg), k^-1(kh)) ≤ d(kg, kh). Hence we obtain that d(kg, kh) = d(g, h). The other can be proved similarly. * By (1), we have d(g^-1, h^-1) = d(e, gh^-1) = d(h, g) = d(g, h). This completes the proof. Let (X, d) be a metric space, and let ^u X be the set of isometries f on X such that sup_x∈ Xd_X(x, fx)< ∞. We equip ^u X with a group structure by compositions. We also define a distance function on ^u X by d_^u X(f, g) = sup_x∈ X d_X(fx, gx). Then it is immediate to verify the conditions that (^u X, d_^u X) is a metric group. Note that, if the metric space X is bounded, namely we have sup_x,x'∈ X d_X(x, x')< ∞, then the group ^u X consists of all isometries on X, by which we denote X. * A normed group is a group G equipped with a map |-| : G _≥ 0 satisfying that * |g| = 0 if and only if g = e, * |gh| ≤ |g| + |h| for any g, h ∈ G. Here we denote the unit of G by e. * A normed group G is called conjugation invariant if it satisfies that |h^-1gh| = |g| for any g, h ∈ G. * A normed group G is called inverse invariant if it satisfies that |g^-1| = |g| for any g ∈ G. * For normed groups G and H, a normed homomorphism from G to H is a group homomorphism φ : G H satisfying that |φ g|≤ |g|. * We denote the category of conjugation and inverse invariant normed groups and normed homomorphisms by _ conj^-1. The categories and _ conj^-1 are equivalent. 
For a metric group , we define a conjugation and inverse invariant normed group N by * N = as a group, * |g| = d_(e, g) for any g ∈ N. Note that this construction is functorial. Conversely, we define a metric group MG from a conjugation and inverse invariant normed group G by * MG = G as a group, * d_ MG(g, h) = |h^-1g|. This construction is also functorial. It is straightforward to verify that the compositions of these functors are naturally isomorphic to the identities. This completes the proof. §.§ The fundamental metric group Let X be a metric space and x ∈ X. * For each n ≥ 0, we define a set P_n(X, x) by P_n(X, x) := {(x, x_1, …, x_n, x) ∈ X^n+2}. We also define that P(X, x) := ⋃_nP_n(X, x). * We define a connected graph G(X, x) with the vertex set P(X, x) as follows. For u, v ∈ P(X, x), an unordered pair {u, v} spans an edge if and only if it satisfies both of the following : * There is an n ≥ 0 such that u ∈ P_n(X, x) and v ∈ P_n+1(X, x). * There is a 0 ≤ j ≤ n such that u_i = v_i for 1 ≤ i ≤ j and u_i = v_i+1 for j+1 ≤ i ≤ n, where we have u = (x, u_1, …, u_n, x) and v = (x, v_1, …, v_n+1, x). * We equip the graph G(X, x) with a weighted graph structure by defining a function w_G(X, x) on edges by w_G(X, x){u, v} = d_X(v_j, v_j+1) + d_X(v_j+1, v_j+2) - d_X(v_j, v_j+2) v_j≠ v_j+2, 0 v_j = v_j+2, where we use the notations in (2). * We denote the quasi-metric space obtained from the weighted graph G(X, x) by Q(X, x). We also denote the Kolmogorov quotient of Q(X, x) by π_1^m(X, x). Let X be a metric space and x ∈ X. * The metric space π^m_1(X, x) has a metric group structure given by the concatenation defined as [(x, u_1, …, u_n, x)]∙ [(x, v_1, …, v_k, x)] = [(x, u_1, …, u_n, v_1, …, v_k, x)]. The unit is given by [(x, x)] ∈π^m_1(X, x). * For any x' ∈ X, we have an isomorphism π^m_1(X, x) ≅π^m_1(X, x') given by [(x, u_1, …, u_n, x)] ↦ [(x', x, u_1, …, u_n, x, x')]. * We first show that the weighted graph G(X, x) is a monoid object in _ conn by the concatenation. Let (u, v), (u', v') ∈ G(X, x)× G(X, x), and suppose that {(u, v), (u', v')} spans an edge. Then we have that u = u' and v ∈ P_n(X, x), v' ∈ P_n+1(X, x), or v = v' and u ∈ P_n(X, x), u' ∈ P_n+1(X, x) for some n. We also have that w_G(X, x)× G(X, x){(u, v), (u', v')} = w_G(X, x){u, u'} + w_G(X, x){v, v'}. Note that {u∙ v, u'∙ v'} spans an edge in G(X, x). Further, we have w_G(X, x){u∙ v, u'∙ v'} = w_G(X, x){u, u'} + w_G(X, x){v, v'}. Hence the concatenation map ∙ : G(X, x)× G(X, x) G(X, x) is a weighted graph homomorphism. It is immediate to verify that the identity is the element (x, x) and that the product is associative. Thus the weighted graph G(X, x) is a monoid object in _ conn, and by Proposition <ref>, π^m_1(X, x) is a monoid object in . Now we show that it is a group object, namely, any element [(x, x_0, …, x_n, x)] has the inverse [(x, x_n, …, x_0, x)]. It reduces to show that d_Q(X, x)((x, x_n, …, x_0, x_0, …, x_n x), (x, x)) = 0. However, it is obvious that the elements (x, x_n, …, x_0, x_0, …, x_n x) and (x, x) can be connected by a path that consists of edges with weight 0 in G(X, x), that implies the desired equality. This completes the proof. * It is straightforward. Let X be a metric space and x ∈ X. We call the metric group π_1^m(X, x) the fundamental metric group of X with the base point x. We sometimes omit the base point and denote it by π_1^m(X). 
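Before moving on, here is a small worked instance of this weighting (only a sketch, anticipating the cycle-graph computation carried out later). Take X = K_3 with vertex set {x, v_2, v_3}, so that all pairwise distances are 1, and base point x. Then
\[
  u = (x, v_3, x) \in P_1(X, x), \qquad v = (x, v_2, v_3, x) \in P_2(X, x)
\]
span an edge of G(X, x) with
\[
  w_{G(X,x)}\{u, v\} = d_X(x, v_2) + d_X(v_2, v_3) - d_X(x, v_3) = 1 + 1 - 1 = 1,
\]
while the edge between (x, x) and (x, v_3, x) has weight 0, since there the two neighbors of the collapsed vertex coincide. Hence the class [(x, v_2, v_3, x)] lies at distance at most 1 from the unit [(x, x)] in π_1^m(X, x); the computation for odd cycles below shows that this bound is attained.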
As just a group, π_1^m(X) is obtained as the fundamental group of a simplicial complex S_X whose n-simplices are subsets {x_0, …, x_n}⊂ X such that any distinct 3 points x_i, x_j, x_k satisfy that |Δ(x_i, x_j, x_k)| = 0 (see Definition <ref>). Note that our fundamental group π_1^m(X) is not functorial with respect to Lipschitz maps. However, it is functorial with respect to Lipschitz maps that preserve coline'ness ( |Δ(x_i, x_j, x_k)| = 0 ), in particular embedding of metric spaces. § _X^≃_X^≃ (Π^M_1(X, X_0), ) In this section, we introduce the notion of `principal -bundles' for metric spaces. We define it from two different view points, namely as a metric action and as a metric fibration, which turn out to be equivalent. As a metric action, we call it a -metric action, and as a metric fibration, we call it a -torsor. Then we show that they are classified by the conjugation classes of homomorphisms π^m_1(X, x_0). §.§ _X^≃_X^ Let X be a metric space and be a metric group. * A -metric action F : X is a metric action satisfying the following : * F_x = for any x ∈ X. * F_xx' is a left multiplication by some f_xx'∈ for any x, x' ∈ X. * Let F, G : X be -metric actions. A -metric transformation θ : F ⟹ G is a metric transformation such that each component θ_x : Fx Gx is a left multiplication by an element θ_x ∈. We denote the category of -metric actions X and -metric transformations by _X^. Apparently, _X^ is a subcategory of _X and is also a groupoid. Let G be a group and X be a metric space. We say that X is a right G-torsor if G acts on X from the right and satisfies the following : * It is free and transitive. * g : X X is an isometry for any g ∈ G. * we have d_X(x, xg) = d_X(x', x'g) for any x, x' ∈ X and g ∈ G. Let (X, d_X) be a metric space and G be a group. Suppose that X is a right G-torsor. Then there exist a distance function d_G on G and a metric group structure ·_x on X for each x ∈ X such that the map G X ; g ↦ xg gives an isomorphism of metric groups (G, d_G) ≅ (X, ·_x). Furthermore, the unit of the metric group (X, ·_x) is x. Fix a point x ∈ X. We define a map d_G : G × G _≥ 0 by d_G(f, g) = d_X(xf, xg), which is independent from the choice of x ∈ X. It is immediate to check that (G, d_G) is a metric space. Further, we have d_G(ff', gg') = d_X(xff', xgg') ≤ d_X(xff', xgf') + d_X(xgf', xgg') ≤ d_X(xf, xg) + d_X(xf', xg') = d_G(f, g) + d_G(f', g'), and d_G(f^-1, g^-1) = d_X(xf^-1, xg^-1) = d_X(x, xg^-1f) = d_X(xg, (xg)g^-1f) = d_X(xg, xf) = d_X(xf, xg) = d_G(f, g), for any f, f', g, g' ∈ G. Hence (G, d_G) is a metric group. Now we define a map G X by g ↦ xg. Then this map is an isometry by the definition. Hence we can transfer the metric group structure on G to X by this map. With respect to this group structure ·_x on X, we have x·_x x' = eg' = x' and x'·_x x = g'e = x', where we put x' = xg'. Hence x ∈ X is the unit of the group (X, ·_x). This completes the proof. Let G be a group. A metric fibration π : E X is a G-torsor over X if it satisfies the following : * G acts isometrically on E from the right, and preserves each fiber of π. * each fiber of π is a right G-torsor with respect to the above action. Let π : E X be a G-torsor, and x, x' ∈ X. Then the metric group structures on G induced from the fibers π^-1x and π^-1x' are identical. Note that, for any ∈π^-1x and f ∈, we have d_E(( f)_x', _x'f) = d_E( f, _x'f) - d_E( f, ( f)_x') = d_E(, _x') - d_E( f, ( f)_x') = d_X(x, x') - d_X(x, x') = 0, hence we obtain that ( f)_x' = _x'f. 
Let d_x and d_x' be the distance function on G induced from the fibers π^-1x and π^-1x' respectively. Namely, for ∈π^-1x and f, g ∈ G, we have d_x(f, g) = d_E( f, g) and d_x'(f, g) = d_E(_x'f, _x'g). Therefore we obtain that d_x'(f, g) = d_E(_x'f, _x'g) = d_E(( f)_x', ( g)_x') = d_E( f, g) = d_x(f, g) by Lemma <ref>. This completes the proof. For a G-torsor π : E X, we can consider the group G as a metric group that is isometric to a fiber of π by Lemma <ref>. Further, such a metric structure is independent from the choice of the fiber by Lemma <ref>. Hence, in the following, we write `G-torsors' by `-torsors', where is the metric group that is the group G equipped with the above metric structure. Let π : E X and π' : E' X be -torsors. A -morphism φ : ππ' is a G-equivariant map E E' that is also a morphism of metric fibrations. We denote the category of -torsors over X and -morphisms by _X^. Note that the category _X^ is a subcategory of _X. Further, we can show that any -morphism is an isomorphism as follows : Note that for any ∈ E, x ∈ X and g ∈, we have d_E(, _xg) = d_X(π, x) + |g| by the definitions. Then the -equivariance of φ and Lemma <ref> implies that d_E'(φ, φ (_xg)) = d_E'(φ, (φ)_xg) = d_X(π' φ, x) + |g| = d_X(π, x) + |g| = d_E(, _xg), which implies that φ preserves distances. The invertibility of φ is immediate from the G-equivariance. Now we show the equivalence of -metric actions and -torsors in the following. The Grothendieck construction functor E : _X _X of Proposition <ref> restricts to a functor _X^_X^. Let F : X be a -metric action. Let E(F) be the metric fibration given by the Grothendieck construction. Note that we have d_E(F)((x, g), (x', g')) = d_X(x, x') + d_(g_xx'g, g'). We define a action on E(F) by (x, g)h = (x, gh) for any g, h ∈ and x ∈ X. Then it is obviously compatible with the projection, and also free and transitive on each fiber. We also have that d_E(F)((x, g)h, (x', g')h) = d_E(F)((x, gh), (x', g'h)) = d_X(x, x') + d_(g_xx'gh, g'h) = d_X(x, x') + d_(g_xx'g, g') = d_E(F)((x, g), (x', g')), hence it acts isometrically. Further, we have that d_E(F)((x, g), (x, g)h) = d_E(F)((x, g), (x, gh)) = d_(g, gh) = d_(e, h), hence each fiber is a right -torsor. Therefore, we obtain that E(F) is a -torsor. Let θ : F ⟹ F' be a -metric transformation. The Grothendieck construction gives a map φ_θ : E(F) E(F') by φ_θ (x, g) = (x, θ_x g), which is a morphism of metric fibrations. It is checked that φ_θ is -equivariant as follows : (φ_θ (x, g))h = (x, θ_x gh) = φ_θ (x, gh). Hence it is a -morphism. This completes the proof. The functor F : _X _X of Proposition <ref> restricts to a functor _X^_X^. Let π : E X be a -torsor. We fix points x_0 ∈ X and ∈π^-1x_0. For any x ∈ X, we equip each set π^-1x with a metric group structure isomorphic to with the unit _x by Lemma <ref>. Hence we can identify each fiber with by the map g ↦_xg for any x ∈ X. Now we put (_x)_x' = _x'g_xx'∈π^-1x' for x, x' ∈ X and g_xx'∈. Then, for any h ∈, we have d_X(x, x') = d_E(_xh, (_xh)_x') = d_E(_x, (_xh)_x'h^-1) = d_E(_x, _x'g_xx') + d_E(_x'g_xx', (_xh)_x'h^-1) = d_X(x, x') + d_E(_x'g_xx', (_xh)_x'h^-1), hence we obtain that (_xh)_x' = _x'g_xx'h. This implies that the map π^-1x π^-1x' given by lifts _xh ↦ (_xh)_x' is the left multiplication by g_xx' when we identify each fiber with as above. Hence the functor F gives a -metric action. Next, let φ : ππ' be a -morphism between -torsors π : E X and π' : E' X. It induces a Lipschitz map φ_x : π^-1x π'^-1x. 
Since fibers π^-1x and π'^-1x are idetified with and φ_x is -equivariant, we can identify φ_x with the left multiplication by φ_x_x. This implies that the functor F sends the -morphism φ to a -metric transformation between F_π and F_π'. This completes the proof. The Grothendieck construction functor _X^_X^ is a category equivalence. By Proposition <ref>, we have natural isomorphisms EF ≅ id__X and FE ≅ id__X. We should show that these isomorphisms are obtained in _X^ and _X^ when restricted to them, which is immediate. This completes the proof. §.§ _X^≃ (π^m_1(X, x_0), ) First we define the category of homomorphisms of metric groups '. Let and ' be metric groups, and let (, ') be the set of all homomorphisms '. We equip (, ') with a groupoid structure by defining (, ')(φ, ψ) = {h ∈' |φ = h^-1ψ h} for any homomorphisms φ, ψ : '. The identity on φ∈ (, ') is the unit e ∈', and the composition of morphisms h ∈ (, ')(φ, ψ) and h' ∈ (, ')(ψ, ξ) is defined by h'∘ h = h'h. Let X be a metric space and be a metric group. For each x_0 ∈ X, we have a functor A : (π^m_1(X, x_0), ) ^_X. Let φ : π^m_1(X, x_0) be a homomorphism. We define a -metric action F_φ : X by F_φ x = and (F_φ)_xx' = φ[(x_0, x', x, x_0)]· :, where we denote the left multiplication by (-)·. It is verified that this certainly defines a -metric action as follows. For any x, x' ∈ X, we have (F_φ)_xx = φ[(x_0, x, x, x_0)]· = e· = id_, and (F_φ)_x'x = φ[(x_0, x, x', x_0)]· = (φ[(x_0, x', x, x_0)])^-1· = (F_φ)_xx'^-1. Further, we have d_((F_φ)_x'x”(F_φ)_xx'g, (F_φ)_xx”g) = d_(φ[(x_0, x”, x', x_0)]φ[(x_0, x', x, x_0)], φ[(x_0, x”, x, x_0)]) = d_(φ[(x_0, x”, x', x, x_0)], φ[(x_0, x”, x, x_0)]) = d_((φ[(x_0, x”, x, x_0)])^-1φ[(x_0, x”, x', x, x_0)], e) = d_(φ[(x_0, x, x”, x', x, x_0)], e) ≤ d_π^m_1(X, x_0)([(x_0, x, x”, x', x, x_0)], [x_0, x_0]) ≤ d_X(x, x') + d_X(x', x”) - d_X(x, x”), for any x, x', x”∈ X and g∈. Let h : φψ be a morphism in (π^m_1(X, x_0), ), namely we have φ = h^-1ψ h with h ∈. Then we can construct a -metric transformation θ : F_φ⟹ F_ψ by θ_x = h· :. It satisfies that (F_ψ)_xx'θ_x = θ_x'(F_φ)_xx' since we have ψ[(x_0, x', x, x_0)]h = hφ[(x_0, x', x, x_0)]. This completes the proof. Let X be a metric space and be a metric group. For each x_0 ∈ X, we have a functor B : ^_X (π^m_1(X, x_0), ). Let F : X be a -metric action. Then we can define a homomorphism φ_F : π^m_1(X, x_0) by φ_F [(x_0, x_1, …, x_n, x_0)] = F_x_1x_0F_x_2x_1… F_x_nx_n-1F_x_0x_n, for any [(x_0, x_1, …, x_n, x_0)] ∈π^m_1(X, x_0). It is immediate to check the well-defined'ness. Let F, F' : X be -metric actions and θ : F ⟹ F' be a -metric transformation. Then we have θ_x_0^-1φ_F'[(x_0, x_1, …, x_n, x_0)]θ_x_0 = θ_x_0^-1F'_x_1x_0F'_x_2x_1… F'_x_nx_n-1F'_x_0x_nθ_x_0 = F_x_1x_0F_x_2x_1… F_x_nx_n-1F_x_0x_n = φ_F[(x_0, x_1, …, x_n, x_0)], for any [(x_0, x_1, …, x_n, x_0)] ∈π^m_1(X, x_0). Hence θ_x_0∈ gives a morphism θ_x_0 : φ_F φ_F'. This correspondence is obviously functorial. This completes the proof. The functor A : (π^m_1(X, x_0), ) ^_X of Lemma <ref> is a category equivalence. We show the natural isomorphisms BA ≅ id_ (π^m_1(X, x_0), ) and AB ≅ id_^_X. For a homomorphism φ : π^m_1(X, x_0), we have φ_F_φ[(x_0, x_1, …, x_n, x_0)] = (F_φ)_x_1x_0(F_φ)_x_2x_1… (F_φ)_x_0x_n = φ[(x_0, x_0, x_1, x_0)]φ[(x_0, x_1, x_2, x_0)]…φ[(x_0, x_n, x_0, x_0)] = φ[(x_0, x_0, x_1, x_1, … x_n, x_n, x_0, x_0)] = φ[(x_0, x_1, … x_n, x_0)], for any [(x_0, x_1, …, x_n, x_0)] ∈π^m_1(X, x_0). Hence we obtain an isomorphism BA φ = φ that is obviously natural. 
Conversely, let F : X be a -metric action. Then we have (F_φ_F)_x = and (F_φ_F)_xx' = φ_F[(x_0, x', x, x_0)] = F_x'x_0F_xx'F_x_0x. Now we define a -metric transformation θ : F_φ_F⟹ F by θ_x = F_x_0x. It is obvious that we have F_xx'θ_x=θ_x'(F_φ_F)_xx', hence it is well-defined and obviously an isomorphism. For a -metric transformation τ : F ⟹ F', we have (ABτ)_x = τ_x_0· : (F_φ_F)_x (F'_φ_F')_x by the construction. Hence the condition τ_xF_x_0x = F'_x_0xτ_x_0 of the -metric transformation implies the naturality of this isomorphism. This completes the proof. §.§ Example We give the following example of fundamental metric group. Let C_n be an undirected n-cycle graph. Then we have π^m_1(C_n) ≅ with |1| = 1 n : odd, 0 n : even. Hence we have that _C_n^≃ (, ) n : odd, 0 n : even, for any metric group , which implies that there is only a trivial metric fibration over C_2n and that there is at most one non-trivial metric fibration over C_2n+1. Let V(C_n) = {v_1, …, v_n} be the vertex set whose numbering is anti-clockwise. For C_2n, it reduces to show that [(v_1, v_2, …, v_2n, v_1)] = [(v_1, v_1)]. Since we have d_C_2n(v_i, v_j) = d_C_2n(v_i, v_k) + d_C_2n(v_k, v_j) for any i≤ k ≤ j with j-i≤ n, we obtain that [(v_1, v_2, …, v_2n, v_1)] = [(v_1, …, v_n+1, …, v_2n, v_1)] = [(v_1, v_n+1, v_1)] = [(v_1, v_1)]. For C_2n+1, the possible non-trivial element of π^m_1(C_2n+1) is a concatenation or its inverse of the element [(v_1, …, v_2n+1, v_1)]. Now we have [(v_1, …, v_2n+1, v_1)] = [(v_1, v_n+1, v_n+2, v_1)], by the same argument as above, and d_Q(C_2n+1, v_1)((v_1, v_n+1, v_n+2, v_1), (v_1, v_n+1, v_1)) = d_C_2n+1(v_n+1, v_n+2) + d_C_2n+1(v_n+2, v_1) - d_C_2n+1(v_n+1, v_1) = d_C_2n+1(v_n+1, v_n+2) = 1. Hence we obtain that |[(v_1, …, v_2n+1, v_1)]| = 1. This completes the proof. Note that the cycle graph C_n is a metric group /n with |1| = 1. Hence the examples in Figure 1 are /2-torsors, which are classified by (, /2) ≅/2. § CLASSIFICATION OF METRIC FIBRATIONS In this section, we classify general metric fibrations by fixing the base and the fiber. It is analogous to that of topological fiber bundles, namely it reduces to classifying principal bundles whose fiber is the structure group of the concerned fibration. We divide it into two cases, whether the fiber is bounded or not, since we need to consider expanded metric spaces for the unbounded case, which are essentially same although. §.§ The functor (-)^x_0 Before we show the classification, we introduce a technical functor that will be used later. For any metric action F : X and a point x_0 ∈ X, we define a metric action ^x_0 : X as follows. We define that ^x_0 x = Fx_0 and ^x_0_xx' = F_x'x_0F_xx'F_x_0x : Fx_0 Fx_0 for any x, x' ∈ X. Then it is verified that this defines a metric action as follows : We have ^x_0_xx = F_xx_0F_xxF_x_0x = id_Fx_0 = id_^x_0 x. We also have (^x_0_x'x)^-1 = (F_xx_0F_x'xF_x_0x')^-1 = F_x'x_0F_xx'F_x_0x = ^x_0_xx' and d_^x_0 x”(^x_0_x'x”^x_0_xx'a, ^x_0_xx”a) = d_Fx_0(F_x”x_0F_x'x”F_x_0x'F_x'x_0F_xx'F_x_0xa, F_x”x_0F_xx”F_x_0xa) = d_Fx”(F_x'x”F_xx'F_x_0xa, F_xx”F_x_0xa) ≤ d_X(x, x') + d_X(x', x”) - d_X(x, x”), for any x, x', x”∈ X and a ∈^x_0 x. The correspondence F ↦^x_0 defines a fully faithful functor (-)^x_0: _X _X. Further, it is restricted to a fully faithful functor _X^_X^ for any metric group . Let θ : F ⟹ G be a metric transformation. We define a metric transformation θ^x_0 : ^x_0⟹G^x_0 by θ^x_0_x = θ_x_0 : ^x_0x G^x_0x ; a ↦θ_x_0a. 
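As a quick sanity check of this construction (a trivial example, recorded only for orientation): let F be the constant metric action, that is, Fx = Y and F_{xx'} = id_Y for all x, x' ∈ X, which corresponds to the trivial metric fibration as noted above. Then
\[
  F^{x_0}x = Fx_0 = Y, \qquad F^{x_0}_{xx'} = F_{x'x_0}\, F_{xx'}\, F_{x_0x} = \mathrm{id}_Y,
\]
so F^{x_0} = F. In general, (-)^{x_0} replaces every fiber by the single metric space Fx_0 and conjugates the transition isometries through the base point x_0.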
Then we have G^x_0_xx'θ^x_0_x = G_x'x_0G_xx'G_x_0xθ_x_0 = G_x'x_0G_xx'θ_xF_x_0x = G_x'x_0θ_x'F_xx'F_x_0x = θ_x_0F_x'x_0F_xx'F_x_0x = θ^x_0_xF^x_0_xx', hence this certainly defines a metric transformation. It is obvious that id_F^x_0 = id_^x_0 and (θ' θ)^x_0 = θ'^x_0θ^x_0. It is a faithful functor because G_xx_0θ_x = θ_x_0F_xx_0 implies that θ_x = θ'_x for any x ∈ X if two metric transformation θ, θ' satisfies θ_x_0 = θ'_x_0. By the definition, it is restricted to a faithful functor _X^_X^ for any metric group . Next we show the fullness. Let η : ^x_0⟹G^x_0 be a metric transformation. Then we have G^x_0_x_0xη_x_0 = η_xF^x_0_x_0x and F^x_0_x_0x = id_F_x_0, G^x_0_x_0x = id_G_x_0. Hence we obtain that η_x_0 = η_x for any x ∈ X. Now we define a metric transformation η : F ⟹ G by η_x = G_x_0xη_x_0F_xx_0 : Fx Gx. Then we have G_xx'η_x = G_xx'G_x_0xη_x_0F_xx_0 = G_x_0x'G^x_0_xx'η_xF_xx_0 = G_x_0x'η_x'F^x_0_xx'F_xx_0 = G_x_0x'η_x'F_x'x_0F_xx'F_x_0xF_xx_0 = G_x_0x'η_x_0F_x'x_0F_xx' = η_x'F_xx', hence this certainly defines a metric transformation. We obviously have (η)^x_0 = η, which implies that the functor (-)^x_0 is full. The restriction to _X^_X^ is immediate. This completes the proof. The functor (-)^x_0: _X _X is split essentially surjective. Its restriction _X^_X^ is also split essentially surjective for any metric group . Let F : X be a metric action. We define a metric transformation θ : ^x_0⟹ F by θ_x = F_x_0x : F^x_0x Fx ; a ↦ F_x_0xa. It certainly satisfies that F_xx'θ_x = F_xx'F_x_0x = F_x_0x'F_x'x_0F_xx'F_x_0x = θ_x'F^x_0_xx'. Further, we define a metric transformation θ^-1 : F ⟹^x_0 by θ^-1_x = F_xx_0 : Fx ^x_0x for any x ∈ X. Then we have _xx'^x_0θ^-1_x = θ^-1_xF_xx' similarly to the above, hence it certainly defines a metric transformation. It is obviously an isomorphism. The restriction to _X^_X^ is immediate. This completes the proof. The functor (-)^x_0: _X _X and its restriction _X^_X^ for any metric group are category equivalences. * We denote the image of the functor (-)^x_0: _X _X by _X^x_0. * We denote the full subcategory of _X that consists of metric actions F : X such that Fx ≅ Y for any x ∈ X and a metric space Y by _X^Y. * We denote the image of (-)^x_0 restricted to _X^Y and _X^ by _X^Y, x_0 and _X^, x_0 respectively. * We denote the full subcategory of _X that consists of metric fibrations π : E X such that π^-1x ≅ Y for any x ∈ X and a metric space Y by _X^Y. * We have category equivalences _X^Y _X^Y, x_0 and _X^_X^, x_0. * The Grothendieck construction functor E : _X _X is restricted to the category equivalence _X^Y _X^Y. (1) follows from Corollary <ref>, and (2) follows from the proof of Proposition <ref>. §.§ Classification for the case of bounded fibers In this subsection, we suppose that X and Y are metric spaces and Y is bounded. Note that we have a metric group Y (Example <ref>). We have a faithful functor -↷ Y : _X^ Y_X^ Y. Let F ∈_X^ Y. We define a metric action F ↷ Y : X by (F ↷ Y)x = Y and (F ↷ Y)_xx' = F_xx' : Y Y. It is immediate to verify that this certainly defines a metric action. For an Y-metric transformation θ : F ⟹ G, we define a metric transformation θ↷ Y : F↷ Y ⟹ G↷ Y by (θ↷ Y)_x = θ_x : Y Y ; y ↦θ_xy. Then it is also immediate to verify that it is a metric transformation. Further, this obviously defines a faithful functor. This completes the proof. The functor -↷ Y : _X^ Y_X^Y is split essentially surjective. Let F ∈_X^Y and fix isometries φ_x : Y Fx by the axiom of choice. 
We define an Y-metric action F by ( F)x = Y and ( F)_xx' = φ_x'^-1F_xx'φ_x· that is a left multiplication. Then we can verify that it is an Y-metric action as follows. Note that we have ( F)_xx = φ_x^-1F_xxφ_x· = id_ Y and ( F)_xx'^-1 = φ_x^-1F_x'xφ_x'· = ( F)_x'x. We also have that d_ Y(( F)_x'x”( F)_xx', ( F)_xx”) = d_ Y(φ_x”^-1F_x'x”φ_x'φ_x'^-1F_xx'φ_x, φ_x”^-1F_xx”φ_x) = d_ Y(φ_x”^-1F_x'x”F_xx'φ_x, φ_x”^-1F_xx”φ_x) = sup_a ∈ Yd_Y(φ_x”^-1F_x'x”F_xx'φ_xa, φ_x”^-1F_xx”φ_xa) = sup_a ∈ Fxd_Fx”(F_x'x”F_xx'a, F_xx”a) ≤ d_X(x, x') + d_X(x', x”) - d_X(x, x”). Now we define a metric transformation φ : F↷ Y ⟹ F by φ_x : ( F↷ Y)x = Y Fx. Then it certainly satisfies that F_xx'φ_x = φ_x'( F↷ Y)_xx' and is an isomorphism by the definition. This completes the proof. Since the category _X^ Y is a groupoid, the image of the functor -↷ Y is in core_X^Y, where we denote the subcategory that consists of all isomorphisms by core (Definition <ref> (4)). The functor -↷ Y^x_0 : _X^ Y core_X^Y, x_0 is full. Note that we have -↷ Y^x_0 = (-)^x_0↷ Y by the definitions. Since the functor (-)^x_0 : _X^ Y_X^ Y is full by Lemma <ref>, we show that the restriction -↷ Y : _X^ Y, x_0 core_X^Y, x_0 is full. Let θ : F^x_0↷ Y ⟹G^x_0↷ Y be an isomorphism in _X^Y, x_0, where F, G ∈_X^ Y. Then we have an isometry θ_x : Y Y such that G_x'x_0G_xx'G_x_0xθ_x = θ_x'F_x'x_0F_xx'F_x_0x for any x, x' ∈ X. Since we have θ_x ∈ Y, we obtain a morphism θ' : F^x_0⟹G^x_0∈_X^ Y, x_0 defined by θ'_x = θ_x. It is obvious that we have θ' ↷ Y = θ. This completes the proof. The functor -↷ Y^x_0 : _X^ Y core_X^Y, x_0 is a category equivalence. The categories _X^ Y and core_X^Y are equivalent. It follows from Corollary <ref> with core_X^Y ≃ core_X^Y ≃ core_X^Y, x_0 by Lemma <ref>. §.§ Classification for the case of unbounded fibers To classify general metric fibrations, we generalize the discussions so far to extended metric groups. * An extended metric group is a group object in . * For extended metric groups 𝒢 and ℋ, a homomorphism from to $̋ is a Lipschitz map$̋ that commutes with the group structure. * We denote the category of extended metric groups and homomorphisms by . Note that the category is a full subcategory of . Let (X, d) be a metric space, and let X be the group of isometries on X. We define a distance function on X by d_ X(f, g) = sup_x∈ X d_X(fx, gx). Then it is immediate to verify the conditions that ( X, d_ X) is an extended metric group. We note that the `unit component' of X, that is a set of isometries f such that d_ X( id_X, f)< ∞, is exactly ^u X (Example <ref>). Note that, if the metric space X has finite diameter, then we have X = ^u X that is a metric group. Let and ' be extended metric groups, and let (, ') be the set of homomorphisms. We equip (, ') with a groupoid structure similarly to the metric group case by defining (, ')(φ, ψ) = {h ∈' |φ = h^-1ψ h} for any homomorphisms φ, ψ : '. We note that the same statement as Lemma <ref> holds for extended metric groups. Further, the relationship between extended metric spaces and normed groups similar to Proposition <ref> holds if we replace the codomain of norms by [0, ∞]. Let be an extended metric group and X be a metric space. An extended -metric action F is a correspondence X ∋ x ↦ Fx = and F_xx'∈ such that * F_xx = e, F_xx' = F_x'x^-1, * d_(F_x'x”F_xx', F_xx”) ≤ d_X(x, x')+d_X(x', x”) - d_X(x, x”). For extended -metric actions F and G, an extended -metric transformation θ : F⟹ G is a family of elements {θ_x ∈}_x∈ X such that G_xx'θ_x = θ_x'F_xx'. 
We denote the category of extended -metric actions and extended -metric transformations by _X^. The following is obtained from the same arguments in subsection <ref> by replacing the `metric group' by `extended metric group'. For an extended metric group and a metric space X, the categories ^_X and (π^m_1(X, x_0), ) are equivalent. Further, the arguments in subsection <ref> can be applied for extended case, and we obtain the following. For any metric spaces X and Y, the categories _X^ Y and core_X^Y are equivalent. Hence metric fibrations with fiber Y are classified by (π^m_1(X, x_0), ). § COHOMOLOGICAL INTERPRETATION In this section, we give a cohomological classification of -torsors. It is an analogy of the 1-Čech cohomology. Before giving the definition, we introduce the following technical term. Let X be a metric space, and x_1, x_2, x_3 ∈ X. We denote the subset {x_1, x_2, x_3}⊂ X by Δ(x_1, x_2, x_3) and call it a triangle. We define the degeneracy degree of the triangle Δ(x_1, x_2, x_3) by |Δ(x_1, x_2, x_3)| := min{d_X(x_i, x_j) + d_X(x_j, x_k) - d_X(x_i, x_k) |{i, j, k} = {1, 2, 3}}. Note that it is enough to consider i, j, k's running in the cyclic order to obtain the above minimum. The following is the definition of our `1-Čech chomology'. Let X be a metric space and suppose that points of X are indexed as X = {x_i}_i ∈ I. For a metric group , we define the 1-cohomology of X with the coefficient in as the category ^1(X; ) by ^1(X; ) = {(a_ijk) ∈^I^3| a_ijka_kjℓ = a_ijℓ, |a_ijka_jkia_kij| ≤ |Δ(x_i, x_j, x_k)|}, and ^1(X; )((a_ijk), (b_ijk)) = {(f_ij) ∈^I^2| a_ijkf_jk = f_ijb_ijk}, where we denote the conjugation invariant norm on by |-|. We call an object of ^1(X; ) a cocycle. Apparently, the above constructions are independent from the choice of the index I. Note that, for a cocycle (a_ijk) ∈^1(X; ), the condition a_ijka_kjℓ = a_ijℓ implies that a_iji = e and a_ijk = a_kji^-1 for any i, j, k ∈ I. Further, for a morphism (f_ij), we have f_ij = f_ji from the condition a_ijkf_jk = f_ijb_ijk and a_iji = b_iji = e. The 1-cohomology of X with the coefficient in is well-defined, that is, ^1(X; ) is indeed a category, in particular a groupoid. Let (a_ijk), (b_ijk), (c_ijk) ∈^1(X; ), and (f_ij) : (a_ijk) (b_ijk) and (f'_ij) : (b_ijk) (c_ijk) be morphisms. Then (f'∘ f)_ij := f_ijf'_ij defines a morphism ((f'∘ f)_ij) : (a_ijk) (c_ijk) since we have a_ijkf_jkf'_jk = f_ijb_ijkf'_jk = f_ijf'_ijc_ijk. It obviously satisfies the associativity. The identity on a_ijk is apparently defined by e_ij = e, where e denotes the unit of . Further, (f^-1_ij) defines a morphism (f^-1_ij) : b_ijk a_ijk that is the inverse of (f_ij). This completes the proof. We have a faithful functor β : ^1(X; ) ^_X. For (a_ijk) ∈^1(X; ), we define a -torsor β (a_ijk) as follows. Let 𝒰 = ∐_(i, j) ∈ I^2_ij, where _ij = ^ij_i ∐^ij_j = ∐. We write an element of ^ij_∙ as g^ij_∙ and we denote the identification = ^ij_∙ by the map ^ij_∙ ; g ↦ g^ij_∙, where ∙∈{i, j} for any i ≠ j ∈ I. We define an equivalence relation ∼ on 𝒰 generated by g^ij_j ∼ (ga_ijk)^jk_j. Note that we have g^ij_j∼ g^ji_j for any i, j ∈ I. We denote the quotient set 𝒰/∼ by β (a_ijk) in the following. Then we have a surjective map π : β (a_ijk) X defined by π [g^ij_j] = x_j. For this map π, we have the following. For any i, j ∈ I, the map π^-1x_j ; g ↦ [g^ij_j] is a bijection. The surjectivity is clear. We show the injectivity. Suppose that we have [g^ij_j] = [h^ij_j] for g, h ∈. 
That is, we have elements a_k_0jk_1, a_k_1jk_2, …, a_k_N-1jk_N∈ such that ga_k_0jk_1… a_k_N-1jk_N = h and k_0 = k_N = i. Then the condition a_ijka_kjℓ = a_ijℓ implies that ga_iji=h, hence g = h. This completes the proof. Note that Lemma <ref> implies that [g^ij_j] = [h^jk_j] implies that h = ga_ijk. Now we can define a distance function d_β (a_ijk) on β (a_ijk) as follows. Let ε_i ∈π^-1x_i and ε_j ∈π^-1x_j. Then there uniquely exist g, h ∈ such that [g^ij_i] = ε_i and [h^ij_j] = ε_j by Lemma <ref>. Then we define that d_β (a_ijk)(ε_i, _j) = d_X(x_i, x_j) + d_(g, h). The non-degeneracy is clear. The symmetry follows from that [g^ij_i] = [g^ji_i]. The triangle inequality is verified as follows. Let ε_i ∈π^-1x_i, ε_j ∈π^-1x_j and ε_k ∈π^-1x_k. Suppose that we have [g^ij_i] = ε_i = [g'^ik_i], [h^ij_j] = ε_j = [h'^jk_j], and [m^jk_k] = ε_k = [m'^ik_k]. Then we have g = g'a_kij, h' = ha_ijk and m = m'a_ikj, hence we obtain that d_β (a_ijk)(ε_i, ε_j) + d_β (a_ijk)(ε_j, ε_k) = d_X(x_i, x_j) + d_(g, h) + d_X(x_j, x_k) + d_(h', m) = d_X(x_i, x_j) + d_X(x_j, x_k) + d_(g'a_kij, h) + d_(ha_ijk, m'a_ikj) = d_X(x_i, x_j) + d_X(x_j, x_k) + d_(g'a_kij, h) + d_(ha_ijka_jkia_kij, m'a_kij) + d_(h, ha_ijka_jkia_kij) - d_(h, ha_ijka_jkia_kij) ≥ d_X(x_i, x_j) + d_X(x_j, x_k) + d_(g'a_kij, m'a_kij) - |a_ijka_jkia_kij| ≥ d_X(x_i, x_j) + d_X(x_j, x_k) + d_(g', m') - |Δ(x_i, x_j, x_k)| ≥ d_X(x_i, x_k) + d_(g', m') = d_β (a_ijk)(ε_i, ε_k). Now a map π : β (a_ijk) X is obviously a 1-Lipschitz map. Further, we verify that it is a metric fibration as follows. Let x_i, x_j ∈ X and ε_i ∈π^-1x_i. Suppose that we have ε_i = [g^ij_i] for g ∈. Then ε_j := [g^ij_j] ∈π^-1x_j is the unique element in π^-1x_j such that d_β (a_ijk)(ε_i, ε_j) = d_X(x_i, x_j). Also, for ε'_j := [h^ij_j] ∈π^-1x_j, we have d_β (a_ijk)(ε_i, ε'_j) = d_X(x_i, x_j) + d_(g, h) = d_β (a_ijk)(ε_i, ε_j) + d_β (a_ijk)(ε_j, ε'_j). Finally, we equip the metric fibration π : β (a_ijk) X with a right action by as [g^ij_∙]h = [(h^-1g)^ij_∙] for any i, j ∈ I and ∙∈{i, j}. This is well-defined since we have that [(ga_ijk)^jk_j]h = [(h^-1ga_ijk)^jk_j] = [(h^-1g)^ij_j] = [g^ij_j]h. It is straightforward to verify that this is a -torsor. Next we show the functoriality. Let (f_ij) : (a_ijk) (b_ijk) ∈^1(X; ). We construct a map f_∗ : β(a_ijk) β(b_ijk) by [g^ij_∙] ↦ [(gf_ij)^ij_∙] for any i, j ∈ I and ∙∈{i, j}. It is well-defined since we have that [(ga_ijk)^jk_j] ↦ [(ga_ijkf_jk)^jk_j]= [(gf_ijb_ijk)^jk_j] = [(gf_ij)^ij_j]. The map f_∗ obviously preserves fibers, and is an isometry since we have that d_β(b_ijk)(f_∗ [g^ij_i], f_∗ [h^ij_j]) = d_β(b_ijk)( [(gf_ij)^ij_i], [(hf_ij)^ij_j]) = d_X(x_i, x_j) + d_(gf_ij, hf_ij) = d_X(x_i, x_j) + d_(g, h) = d_β(a_ijk)([g^ij_i], [h^ij_j]). Further, it is -equivariant since we have that (f_∗[g^ij_j])m = [(gf_ij)^ij_j]m = [(m^-1gf_ij)^ij_j] = f_∗([g^ij_j]m). The faithfullness is obvious from the construction. This completes the proof. The functor β : ^1(X; ) ^_X is full. Let (a_ijk), (b_ijk) ∈^1(X; ) be cocycles, and suppose that we have a morphism φ : β(a_ijk) β(b_ijk) in ^_X. We denote the projections β(a_ijk) X and β(b_ijk) X by π_a and π_b respectively in the following. For any i, j ∈ I, we have bijections A_ij : π_a^-1x_j and B_ij : π_b^-1x_j given by g ↦ [g^ij_j] by Lemma <ref>. Then we define a map φ_ij = B_ij^-1φ A_ij :, namely we have φ[g^ij_j] = [(φ_ijg)^ij_j]. 
Now the -equivariance of φ implies that φ[g^ij_j] = φ[(ge)^ij_j] = (φ[e^ij_j])g^-1 = [(φ_ije)^ij_j]g^-1 = [(gφ_ije)^ij_j], which implies that φ_ijg = gφ_ije by Lemma <ref>. From this, we obtain that φ[(ga_ijk)^jk_j] = φ[(ga_ijk)^kj_j] = [(φ_kj(ga_ijk))^kj_j] = [(ga_ijkφ_kje)^kj_j]. Since we have [g^ij_j] = [(ga_ijk)^jk_j], we obtain that a_ijkφ_kje = (φ_ije)b_ijk by Lemma <ref>. Further, since the lift of x_j along [g^ij_i] is [g^ij_j] and φ preserves the lift, the conditions φ[g^ij_j] = [(φ_ijg)^ij_j] and φ[g^ji_i] = [(φ_jig)^ji_i] implies that φ_ij = φ_ji. Hence we obtain a morphism (φ_ije) : (a_ijk) (b_ijk) in ^1(X; ), which satisfies that β (φ_ije) = φ by the construction. This completes the proof. Let π : E X be a -torsor. For x_i, x_j ∈ X, we define a local section of π over a pair (x_i, x_j) as a pair of points (ε_i, ε_j) ∈ E^2 such that π_i = x_i, π_j = x_j and ε_j is the lift of x_j along ε_i. We say that ((ε^ij_i,ε^ij_j))_(i, j)∈ I^2 is a local section of π if each (ε^ij_i, ε^ij_j) is a local section of π over a pair (x_i, x_j) and satisfies that ε^ij_i = ε^ji_i. Let π : E X be a -torsor. For a local section s =((ε^ij_i,ε^ij_j))_(i, j)∈ I^2 of π, we can construct a cocycle α_s π∈^1(X;). Further, for any two local sections s, s' of π, the corresponding cocycles α_s π and α_s'π are isomorphic. We define a_ijk∈ as the unique element such that ε^ij_ja_ijk = ε^jk_j. Then (a_ijk) satisfies that a_ijka_kjℓ = a_ijℓ since we have ε^ij_ja_ijka_kjℓ = ε^jk_ja_kjℓ = ε^kj_ja_kjℓ = ε^jℓ_j. Now note that we have ε_xg = (ε g)_x for any ε∈ E, x ∈ X and g ∈. Hence we have that ε^ij_ja_ijka_jkia_kij = ε^jk_ja_jkia_kij = (ε^jk_k)_x_ja_jkia_kij = (ε^jk_ka_jki)_x_ja_kij = (ε^ki_k)_x_ja_kij = ((ε^ki_i)_x_ka_kij)_x_j = ((ε^ki_ia_kij)_x_k)_x_j = ((ε^ij_i)_x_k)_x_j. Hence we obtain that |a_ijka_jkia_kij| = d_E(ε^ij_j, ε^ij_ja_ijka_jkia_kij) = d_E(ε^ij_j, ((ε^ij_i)_x_k)_x_j) = -d_E(ε^ij_j, ε^ij_i) + d_E(ε^ij_i, ((ε^ij_i)_x_k)_x_j) ≤ -d_E(ε^ij_j, ε^ij_i) + d_E(ε^ij_i, (ε^ij_i)_x_k) + d_E((ε^ij_i)_x_k, ((ε^ij_i)_x_k)_x_j) = -d_X(x_j, x_i) + d_X(x_i, x_k) + d_X(x_k, x_j). Since the norm |-| on is conjugation invariant, the value |a_ijka_jkia_kij| is invariant under the cyclic permutation on {i, j, k}, hence we obtain that |a_ijka_jkia_kij| ≤ |Δ(x_i, x_j, x_k)|. Thus we obtain a cocycle α_s π := (a_ijk) ∈^1(X ; ). Suppose that we have local sections s = ((ε^ij_i,ε^ij_j))_(i, j)∈ I^2 and s' = ((μ^ij_i,μ^ij_j))_(i, j)∈ I^2. Then there exists an element (f_ij) ∈^I^2 such that (ε^ij_if_ij,ε^ij_jf_ij) = (μ^ij_i,μ^ij_j). Let α_s π = (a_ijk) and α_s'π = (b_ijk). Then we obtain that ε^ij_ja_ijkf_jkb^-1_ijk = ε^jk_jf_jkb^-1_ijk = μ^jk_jb^-1_ijk = μ^ij_j, which implies that f_ij = a_ijkf_jkb^-1_ijk. Hence (f_ij) defines a morphism (f_ij) : (a_ijk) (b_ijk) in ^1(X; ). Since ^1(X; ) is a groupoid, this is an isomorphism. This completes the proof. The functor β : ^1(X; ) ^_X is split essentially surjective. Let π : E X be a -torsor. Fix a local section s = ((ε^ij_i,ε^ij_j))_(i, j)∈ I^2 of π. Let α_sπ = (a_ijk) be the cocycle constructed in Proposition <ref>. We show that the -torsors β(a_ijk) and π are isomorphic. We define a map φ : β(a_ijk) E by [g^ij_∙] ↦ε^ij_∙ g^-1. It is well-defined since we have that [(ga_ijk)^jk_j] ↦ε^jk_ja^-1_ijkg^-1 = ε^ij_jg^-1. It obviously preserves fibers and is a bijection. 
Also, it is an isometry since we have that d_E(φ[g^ij_i], φ[h^ij_j]) = d_E(ε^ij_ig^-1, ε^ij_jh^-1) = d_E(ε^ij_i, ε^ij_jh^-1g) = d_E(ε^ij_i, ε^ij_j) + d_E(ε^ij_j, ε^ij_jh^-1g) = d_X(x_i, x_j) + d_(g^-1, h^-1) = d_β(a_ijk)([g^ij_i], [h^ij_j]). Further, it is immediately verified that φ is -equivariant. Hence the map φ gives an isomorphism in ^_X. This completes the proof. The functor β : ^1(X; ) ^_X is a category equivalence.

[A0] Y. Asao, Magnitude and magnitude homology of filtered set enriched categories, preprint (2023), arXiv:2303.05677.
[Gr] A. Grothendieck, Technique de descente et théorèmes d’existence en géométrie algébrique. I. Généralités. Descente par morphismes fidèlement plats, Séminaire N. Bourbaki, exp. no. 190 (1960), 299–327.
[Gr2] A. Grothendieck, Revêtements Étales et Groupe Fondamental - Séminaire de Géométrie Algébrique du Bois Marie 1960/61, LNM 224, Springer (1971).
[JPT] P. T. Johnstone, Sketches of an Elephant: A Topos Theory Compendium, Oxford University Press, Oxford (2002).
[La] F. W. Lawvere, Metric spaces, generalized logic and closed categories, Rendiconti del Seminario Matematico e Fisico di Milano XLIII (1973), 135–166. Reprinted as Reprints in Theory and Applications of Categories 1 (2002), 1–37.
[L3] T. Leinster, The magnitude of metric spaces, Documenta Mathematica 18 (2013), 857–905.
[L1] T. Leinster, The magnitude of a graph, Mathematical Proceedings of the Cambridge Philosophical Society 166 (2019), 247–264.
[Mc] S. Mac Lane, Categories for the Working Mathematician, Graduate Texts in Mathematics 5, Springer, Berlin (1971).
[Roff] E. Roff, The size and shape of things: magnitude, diversity, homology, PhD thesis, University of Edinburgh (2022).
http://arxiv.org/abs/2307.04441v1
20230710094718
Randomized Communication and Implicit Representations for Matrices and Graphs of Small Sign-Rank
[ "Nathaniel Harms", "Viktor Zamaraev" ]
cs.CC
[ "cs.CC", "cs.DM", "cs.DS" ]
Randomized Communication and Implicit Representations for Matrices and Graphs of Small Sign-Rank Nathaniel Harms, Viktor Zamaraev ========================================================================== We prove a characterization of the structural conditions on matrices of sign-rank 3 and unit disk graphs (UDGs) which permit constant-cost public-coin randomized communication protocols. Therefore, under these conditions, these graphs also admit implicit representations. The sign-rank of a matrix M ∈{±1}^N × N is the smallest rank of a matrix R such that M_i,j = sign(R_i,j) for all i,j ∈ [N]; equivalently, it is the smallest dimension d in which M can be represented as a point-halfspace incidence matrix with halfspaces through the origin, and it is essentially equivalent to the unbounded-error communication complexity. Matrices of sign-rank 3 can achieve the maximum possible bounded-error randomized communication complexity Θ(log N), and meanwhile the existence of implicit representations for graphs of bounded sign-rank (including UDGs, which have sign-rank 4) has been open since at least 2003. We prove that matrices of sign-rank 3, and UDGs, have constant randomized communication complexity if and only if they do not encode arbitrarily large instances of the Greater-Than communication problem, or, equivalently, if they do not contain large half-graphs as semi-induced subgraphs. This also establishes the existence of implicit representations for these graphs under the same conditions. § INTRODUCTION Consider a sign matrix M ∈{±1}^N × N. In communication complexity, learning theory, and graph theory, it is often useful to represent M as a point-halfspace incidence matrix of the following form. To each row x ∈ [N], assign a point p_x ∈ ℝ^d ∖{0}, and to each column y ∈ [N] assign a unit vector h_y ∈ ℝ^d, such that M(x,y) = sign(⟨p_x, h_y⟩). In other words, M(x,y) = 1 if and only if the point p_x belongs to the halfspace H_y := { p ∈ ℝ^d | ⟨p, h_y⟩ ≥ 0} whose boundary hyperplane goes through the origin. It is always possible to find such a representation, but, naturally, we wish to accomplish it in the simplest way. Here are two common ways to measure the complexity of this representation: Sign-rank. We might want to minimize the dimension d of the representation. The minimum possible d where M admits such a representation is called the sign-rank of M and denoted _±(M). It is equivalent to the smallest rank d of a matrix R such that M(x,y) = sign(R(x,y)) for all x,y ∈ [N]. Thinking of the rows of M as a fixed domain, and the columns as a hypothesis class (subsets of the domain), a standard technique in learning theory is to transform the domain into points in ℝ^d, and the hypothesis class into halfspaces; _±(M) is the smallest dimension such that this transformation is possible. Since halfspaces through the origin in ℝ^d have VC dimension d, sign-rank is lower bounded by the VC dimension of the hypothesis class. In communication complexity, sign-rank is essentially equivalent to the unbounded-error communication complexity of M <cit.>, where the two players have access to private randomness and wish to succeed with probability strictly better than 1/2. A set of matrices has bounded sign-rank if there exists a constant d such that all matrices M in the set have sign-rank at most d. This is equivalent to having constant unbounded-error communication cost. In graph theory, finding implicit representations (defined below) for graphs whose adjacency matrices have bounded sign-rank is an open problem since at least 2003 <cit.>. Margin.
We might want to maximize the margin of the representation. For a fixed representation {p_x}_x ∈ [N] and {h_y}_y ∈ [N], we define the margin as min_x,y|p_x,h_y|/p_x·h_y. Write (M) for the maximum m such that there is a representation with margin m; the dimension of this representation is irrelevant. The complexity of various learning algorithms like SVM or perceptron can be bounded in terms of the margin. It is also known that (M) is functionally equivalent to the two-way, public-coin randomized communication complexity (<ref>). A set of matrices has bounded margin if there is some constant m such that all M ∈ have (M) ≥ m, and having bounded margin is equivalent to having constant public-coin randomized communication cost. Therefore, graphs whose adjacency matrices have bounded margin admit implicit representations, due to the observation of <cit.>. One of the main goals in communication complexity is to understand the power of randomness, and both of the above measures of complexity capture a type of randomized communication. A rapidly-growing body of work on constant-cost communication <cit.> studies the properties of matrices with bounded margin or bounded sign-rank, but the relationship between these two measures is not well understood. In one direction, it is believed that there exist sets of matrices with bounded margin but unbounded sign-rank, but all known lower bounds fail to prove this <cit.> (although it was proven for partial matrices <cit.>). In this paper, we are interested in the other direction: For matrices of bounded sign-rank, under what conditions does also have bounded margin?[Note that a matrix having bounded sign-rank and bounded margin does not mean that sign-rank and margin are bounded simultaneously by the same point-halfspace representation.] It is known that some conditions are required. Write (M) for the two-way, public-coin randomized communication cost of a matrix M ∈^N × N (which we will refer to simply as communication cost) and () for the communication cost of matrices M ∈ as a function of their size N (see <ref> for formal definitions). The Greater-Than communication problem, defined by the matrices ∈^N × N where _i,j = 1 if and only if i > j, has sign-rank 2 but communication cost[Standard notation in the literature uses n as the number of bits in the input; we use N for the domain size, so Θ(loglog N) corresponds to the more commonly-stated bound Θ(log n).] () = Θ(loglog N) and therefore unbounded margin. When sign-rank increases to 3, matrices can achieve the maximum possible communication cost () = Θ(log N) <cit.>, far exceeding the complexity of Greater-Than. However, one of our main results is that, for sign-rank 3, Greater-Than is the only barrier to constant-cost communication: A set of matrices with sign-rank 3 has () = O(1) (and therefore constant margin) if and only if it does not contain arbitrarily large instances of Greater-Than. We prove a similar theorem for the adjacency matrices of unit-disk graphs (UDGs), which have sign-rank 4, and these results establish the existence of implicit representations when the condition on the Greater-Than instances is satisfied. We also exhibit a fundamental gap between sign-rank 4 and 5 which shows that the “type” of randomness used in our communication protocols cannot succeed in sign-rank 5 and above. <ref> is a consequence of more general results whose motivation and applications we elaborate upon below. 
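To make the discussion of Greater-Than concrete, the following small sketch (ours, not taken from the paper; it assumes Python with NumPy) verifies the fact quoted above that the Greater-Than matrix has sign-rank at most 2, by exhibiting an explicit rank-2 matrix with the required sign pattern:

# ---- illustration (Python, assumes NumPy) ----
import numpy as np

# GT(i, j) = +1 iff i > j is the sign of the rank-2 matrix R(i, j) = i - j - 1/2,
# so sign-rank(GT) <= 2.
N = 8
R = np.fromfunction(lambda i, j: i - j - 0.5, (N, N))   # R = a*1^T + 1*b^T, rank 2
GT = np.sign(R).astype(int)

assert np.linalg.matrix_rank(R) == 2
assert all((GT[i, j] == 1) == (i > j) for i in range(N) for j in range(N))
# ---- end illustration ----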
§.§ Constant-Cost Communication and Implicit Graph Representations The study of constant-cost randomized communication was initiated independently in <cit.>. One motivation of <cit.> was that constant-cost communication is a special case of a well-studied open problem in structural graph theory and distributed computing, which asks to characterize the hereditary graph classes that admit implicit representations (see <cit.>). *Implicit representations. A class of graphs is a set of (labeled) graphs that is closed under isomorphism. It is hereditary if it is closed under taking induced subgraphs. A hereditary class admits an implicit representation if there exists a decoder D : ^* ×^* → such that, for every N-vertex graph G ∈, each vertex v of G can be assigned an encoding (v) of O(log N) bits, where D((u), (v)) outputs the adjacency of vertices u,v; the decoder D depends on the class but not the specific graph G. Implicit representations were introduced in <cit.>, who observed that they are equivalent to a graph U of size (N), called a universal graph, that contains every N-vertex graph G ∈ as an induced subgraph. Since a graph of size (N) has at most 2^O(N log N) N-vertex induced subgraphs, a necessary condition for the existence of implicit representations is that contains at most 2^O(N log N) N-vertex graphs, in which case is said to have factorial speed. The communication problem defined by any matrix M ∈^N × N is equivalent to the problem of deciding adjacency in the (bipartite) graph whose adjacency matrix is M, where each player is given a vertex. Building on <cit.>, <cit.> observed that constant-cost communication problems are equivalent to hereditary graph classes that admit an adjacency sketch, which is a randomized version of an implicit representation, where the encodings (v) are assigned by a randomized algorithm and have constant size (independent of the number of vertices), in such a way that ∀ u,v : D((u), (v)) correctly outputs adjacency of u,v ≥ 2/3 . Adjacency sketches for trees also appeared earlier in <cit.>. As noted in <cit.>, adjacency sketches can be derandomized (see <ref>) to obtain implicit representations, making constant-cost randomized communication protocols a stronger type of implicit representation. *Unit disk graphs. This again motivates our focus on sign-rank. Graphs whose adjacency matrices have bounded sign-rank are among the most important types of graphs for which implicit representations are not known to exist in general: to obtain implicit representations for geometric intersection graphs (more precisely, semi-algebraic graphs), it suffices to study graphs of bounded sign-rank (see <cit.>). Any class of bounded sign-rank satisfies the necessary condition of factorial speed <cit.>, which was conjectured to be sufficient in <cit.>. Until this conjecture was refuted in <cit.> by a non-constructive argument, classes of bounded sign-rank were considered promising candidates for a counterexample <cit.>. The best known implicit representations for classes of bounded sign-rank in general use O(N^1-ϵ) bits per vertex where ϵ > 0 is a constant <cit.>. A canonical example is the unit disk graphs (UDGs). UDGs admit an “implicit representation” in the sense that each vertex may be encoded with the coordinates of its disk in ^2. 
However, this encoding requires exponentially-many bits <cit.>, and it is a central open problem whether this difficulty can be sidestepped to obtain encodings of size O(log N); our understanding is that this is not widely believed to be possible. In this paper, we resolve the randomized version of the question by giving a complete characterization of the UDGs which admit constant-size adjacency sketches. To state this result, we require the notion of stability (see <cit.>). *Stability. The chain-index (G) of a graph G is the largest k such that there exist disjoint sets of vertices {a_1, …, a_k} and {b_1, …, b_k} where, for any i < j, a_i, b_j are adjacent but b_i, a_j are not. In the terminology of <cit.>, a graph class is graph-theoretically stable if there is a constant k such that (G) ≤ k for all G ∈; we will say simply stable[We use stable in this paper but we note that the disambiguation graph-theoretically stable in <cit.> is necessary to avoid confusion with stability in the literature on model theory.]. The chain-index is essentially[Not exactly: we have no restriction on the adjacency between a_i, b_i, which helps the analysis but is not qualitatively important.] the largest instance of the Greater-Than communication problem that appears in G, and therefore a class that is not stable must have non-constant communication cost (see <cit.> for more on the stability condition in communication). For a graph class , write () for the function N ↦max_G (_G) where G ranges over the N-vertex graphs in and _G is the adjacency matrix of G (if is a class of bipartite graphs, we take the bipartite adjacency matrix). Stability is necessary for () = O(1); for UDGs and graphs of sign-rank 3, we show it is also sufficient: Let be either a subclass of UDGs, or a class of sign-rank at most 3. Then () = O(1) if and only if is stable. As a consequence, stable subclasses of UDGs and graphs of sign-rank 3 admit implicit representations. §.§ Results and Techniques <ref> follows from a more general result that has other implications for implicit graph representations and which unifies and generalizes a number of previous results. We also complement it with an impossibility result that rules out using the type of randomized techniques in this paper to prove similar results in sign-rank 5 and above. Let us now explain these results in more detail and give a brief summary of the techniques. *Constant-cost reductions. We require the notion of constant-cost reductions and the Equality oracle. The Equality communication problem is the standard example of the power of (public-coin) randomized communication. Two players are given inputs x, y ∈ [N], respectively, and they must decide if x = y. By random hashing, this can be done with success probability 3/4 using only 2 bits of communication. The success probability can be improved to any arbitrary constant by increasing the number of bits by a constant factor. One way to design a constant-cost communication protocol is to design a deterministic communication protocol with constant cost, which has access to an oracle that computes Equality. This means that the two players can, at any time, supply the oracle with arbitrary values a,b and receive, at unit cost, the answer to the query “a = b?” The power of the Equality oracle has been studied in several works <cit.>. One may think of these protocols as the ones that can be implemented using standard practical hash functions like SHA256. 
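As a toy illustration of the hashing idea behind the Equality oracle (our sketch, not code from any cited work; the function and parameter names are ours), the following protocol uses shared randomness to decide whether x = y, with Alice sending one bit per trial; two trials give success probability at least 3/4, matching the 2-bit bound quoted above:

# ---- illustration (Python) ----
import random

def equality_protocol(x: int, y: int, domain_size: int, trials: int = 2) -> bool:
    # Shared randomness: in each trial, a uniformly random function h : [N] -> {0, 1}.
    # Alice sends h(x); Bob accepts iff h(x) == h(y) in every trial.
    # If x == y the protocol always accepts; if x != y, each trial rejects with
    # probability 1/2, so `trials` rounds give error probability 2^(-trials).
    for _ in range(trials):
        h = {z: random.randint(0, 1) for z in range(domain_size)}
        if h[x] != h[y]:
            return False
    return True

# Example: equality_protocol(5, 7, domain_size=100) returns False with probability >= 3/4.
# ---- end illustration ----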
Constant-cost protocols of this form are examples of constant-cost reductions, a type of reduction that is natural for both constant-cost communication complexity and implicit graph representations; we formally define constant-cost reductions in general in <ref>. Along with the algorithmic definition of reductions to Equality, there is an equivalent structural definition (see <cit.>): if a graph class admits a constant-cost protocol for computing adjacency in graphs G ∈, using Equality oracles, then there exists a constant t such that the adjacency matrix _G of every graph G ∈ (or bipartite adjacency matrix, if is a class of bipartite graphs) can be written as ∀ x,y : _G(x,y) = f(Q_1(x,y), Q_2(x,y), …, Q_t(x,y)) , where f : ^t → and each Q_i is the bipartite adjacency matrix of a bipartite equivalence graph (disjoint union of bicliques). We write ( M ) for the minimum cost of a 2-way deterministic protocol with Equality oracles. For computing adjacency in monotone graph classes (closed under edge & vertex deletions), all constant-cost randomized protocols can be put in this form <cit.>, but in general they cannot <cit.>. <cit.> showed that () = O(1) implies that has bounded sign-rank; our results explore the converse. *Forbidden cycles and subdivided stars. Our <ref> is a consequence of a more general result, <ref> below, which also makes some progress towards characterizing the finitely-defined bipartite graph classes for which constant-cost communication and implicit representations are possible. For any set of bipartite graphs, a class of bipartite graphs is -free if no graph G ∈ contains any H ∈ as an induced subgraph. Every hereditary class of bipartite graphs is -free for some unique but possibly infinite set . For fixed , write _ for the -free bipartite graphs. For a bipartite graph G=(U,W,E) with a fixed bipartition, we write G for the bipartite complement of G, i.e. G=(U,W,(U × W) ∖ E). The condition that is stable is equivalent to the condition that it is H_k-free for some constant k, where H_k denotes the half-graph (see <cit.>), so (_) = O(1) requires that contain some half-graph H_k. When is finite, it is also necessary that contain both a tree and the bipartite complement of a tree, otherwise the number of graphs in _ is too large <cit.>. In the case || = 2, it is therefore necessary for (_) = O(1) that = {H_k, T} where T and its bipartite complement T are both trees; it was proved in <cit.> that this is also sufficient. We believe these conditions remain sufficient for larger (but still finite) , (_) = O(1) whenever = { H_k, T_1, T_2} for some trees T_1 and T_2. When T_1 and T_2 are subdivided stars, our result confirms this. For s,t ∈, we write S_s,t for the subdivided star, which is obtained by taking the star graph with s leaves and subdividing each edge t-1 times. As usual, we denote by C_t the cycle on t vertices. Our main technical result is: theoremthmintromain Let be a stable class of bipartite graphs that satisfies either of these conditions: * There exist constants s, t such that is (S_s,t, S_s, t)-free; or * There exists a constant t such that is { C_t', C_t' |  t' ≥ t and t' is even}-free. Then () = O(1). We use <ref> to prove <ref> by decomposing UDGs or graphs of sign-rank 3 into bipartite graphs that are both (S_3, 3, S_3, 3)-free and { C_t, C_t |  t ≥ 10 and t is even}-free (which, to clarify, is stronger than necessary to apply the theorem). 
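For readers who want to experiment with the patterns appearing in the theorem, the short helpers below (ours; plain Python with adjacency stored as a dict of neighbour sets) construct the subdivided star S_s,t and the half-graph H_k. The convention used for H_k (a_i adjacent to b_j iff i ≤ j) is one common choice and may differ from the cited definition in inessential ways:

# ---- illustration (Python) ----
def subdivided_star(s: int, t: int):
    # S_{s,t}: a star with s leaves in which every edge is subdivided t-1 times,
    # i.e. s paths of t edges each, glued at the centre (vertex 0).
    adj, nxt = {0: set()}, 1
    for _ in range(s):
        prev = 0
        for _ in range(t):
            adj.setdefault(nxt, set())
            adj[prev].add(nxt); adj[nxt].add(prev)
            prev, nxt = nxt, nxt + 1
    return adj

def half_graph(k: int):
    # Bipartite half-graph H_k on parts {a_1..a_k} and {b_1..b_k}: a_i ~ b_j iff i <= j.
    return {(i, j): i <= j for i in range(1, k + 1) for j in range(1, k + 1)}

# S_{3,3} has 1 + 3*3 = 10 vertices:
assert len(subdivided_star(3, 3)) == 10
# ---- end illustration ----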
We remark that the implicit representation implied by <ref> can be efficiently computed, meaning that the labels can be constructed in time (N) and decoded in time log N. This efficiency is inherited by the implicit representations of UDGs and graphs of sign-rank 3, provided that the encoder is given the geometric representation of the input graph. <ref> is much more general, and also allows us to recover several prior results. Analogs of <ref> for the classes of permutation graphs, interval graphs, and P_7-free and S_1,2,3-free bipartite graphs were proved in <cit.>. All of these results, which in <cit.> each required different proof strategies, follow as corollaries of <ref>. Likewise, <cit.> showed the existence of implicit representations for stable, chordal bipartite graphs, which is also implied by <ref>. *Higher sign-ranks and weakly-sparse graphs. To advance beyond sign-rank 3, it is helpful to compare the stability condition with the stronger weakly-sparse condition. A class of graphs is weakly-sparse if there is a constant t such that no graph G ∈ contains K_t,t as a subgraph. Any weakly-sparse class is also stable. It is known and not difficult to prove that any weakly-sparse subclass of UDGs has bounded degeneracy, and therefore the analog of <ref> for weakly-sparse UDGs is trivial (because () = O(1) for any of bounded degeneracy). For weakly-sparse graph classes, we present a proof in <ref> that reductions to Equality are equivalent to bounded degeneracy: theoremthmeqlowerbound Let be a hereditary class of bipartite graphs that is weakly-sparse. Then () = O(1) if and only if has bounded degeneracy. In <cit.>, it is conjectured that the point-line incidence graphs 𝒫ℒ satisfy (𝒫ℒ) = ω(1). <ref> shows the weaker result () = ω(1), because point-line incidences are K_2,2-free and have unbounded degeneracy. They also have sign-rank at most 6, which means that the Equality oracle does not suffice to extend <ref> to sign-rank 6 and above, even if the stability condition is replaced with the much stronger weakly-sparse condition. Combining known results in the literature, we also give in <ref> an example (K_2,2-free point-box incidence graphs) with sign-rank 5 that is K_2,2-free but has unbounded degeneracy, showing in fact that the Equality oracle does not suffice to extend <ref> to sign-rank 5. It may be the case that reductions to Equality are the only type of constant-cost communication possible for matrices of bounded sign-rank, see <ref>. We summarize the known results for low sign-ranks in <ref>. *Proof overview. We briefly summarize the proofs of <ref>. Although UDGs and graphs of sign-rank 3 do not satisfy the conditions of <ref>, we prove that two parties with access to an Equality oracle can agree on a graph decomposition into pieces that avoid edge-asteroid triple structures (used in <cit.>), which guarantees that these pieces satisfy the conditions of <ref>. Our main tool to prove <ref> is the decomposition, which we take from <cit.>. The decomposition partitions a bipartite graph into bags of vertices with a tree-like structure on the bags that controls the edges between the bags. In particular, every root-to-leaf path on the bags induces a path in the original graph. For this reason, the method has previously been used (as in <cit.>) to analyze P_t-free graphs, graphs which forbid long induced paths, where the depth of the decomposition is constant. However, in our case, the depth of the decomposition is unbounded. 
Instead, we show that, under the conditions of <ref>, each bag has edges to only a bounded number of its ancestors. Using this guarantee, we show that a communication protocol on input vertices x,y may use the Equality oracle to either determine the adjacency, or agree on a subset of bags that contains x and y. The protocol may then recurse on these bags, sometimes switching to the bipartite complement of the graph when it does so (this is why we require both S_s, t and S_s, t to be forbidden). Due to arguments of <cit.>, this recursion will reduce the chain-index of the graph and is therefore guaranteed to terminate after a constant number of iterations. §.§ Discussion and Open Problems *Communication complexity. An intriguing possibility arises from this work, in conjunction with other recent work on bounded sign-rank. Adapting (or abusing) some notation of <cit.>, write 𝖴𝖯𝖯[1] for the set of communication problems with bounded sign-rank (constant unbounded-error communication cost <cit.>), write 𝖡𝖯𝖯[1] for the set of communication problems with constant public-coin randomized communication cost, and write [1] for the set of communication problems with a constant-cost reduction to Equality. With these definitions of communication complexity classes, we can ask: Is it the case that [1] = 𝖴𝖯𝖯[1] ∩𝖡𝖯𝖯[1]? A positive answer to this question would “explain” all of the known results and conjectures relating these classes. It is proved in <cit.> that [1] ⊆𝖴𝖯𝖯[1] ∩𝖡𝖯𝖯[1]. In the other direction, there are communication problems in 𝖡𝖯𝖯[1] that do not belong to [1], which was proved independently in <cit.> and <cit.>, but the example in both cases, the 1-Hamming Distance problem (adjacency in the hypercube), is believed not to belong to 𝖴𝖯𝖯[1] <cit.>, which is implied by a positive answer to <ref>. In <ref>, we give two explicit examples (K_2,2-free point-box incidences, and point-line incidences) in 𝖴𝖯𝖯[1] that do not belong to [1], which could possibly provide a negative answer to <ref> if they belong to 𝖡𝖯𝖯[1], but point-line incidences are conjectured not to belong to 𝖡𝖯𝖯[1] in <cit.>. On the other hand, a negative answer to <ref> seems to require a substantially different type of randomized protocol than the ones which have so far been discovered[By this we mean that it seems unlikely to us that a negative answer to the question would be achieved by a reduction to any currently-known constant-cost problem, most of which can be found in <cit.>.], and would therefore be very interesting. *Implicit representations. An obvious question is whether the stability condition in our positive result for implicit representations can be dropped. This cannot be accomplished by reductions to Equality, for which stability is necessary. We have shown that the Greater-Than problem is the only barrier to constant-cost communication, so one idea for generalizing our result is to allow the more powerful Greater-Than oracles in the communication protocol. Constant-cost reductions to Greater-Than are equally good for the purpose of finding implicit representations (we may think of some standard implicit representations, like for interval graphs <cit.> and point-box incidences <cit.>, as protocols of this form). But this cannot succeed: a constant-cost reduction to Greater-Than for graphs of sign-rank 3 would imply () = Θ(loglog N) which contradicts the known bound of Θ(log N) <cit.>. 
This answers an open question asked in independent and concurrent work <cit.> whether (in our terminology) reductions to Greater-Than suffice to obtain implicit representations for geometric intersection graphs with small sign-rank realized by integer coordinates[The bounds in <cit.> hold for constructions with integer coordinates.]. This at least demonstrates that communication complexity lower bounds can be used against certain natural types of implicit representation, although it remains open how to prove any explicit, non-trivial lower bounds for implicit representations. § PRELIMINARIES Let us define some notation and formalize the notions we have discussed in the introduction. We intend this paper to be accessible to readers in graph theory or communication complexity who may not have a background in both, so we make an attempt to make the terminology explicit. We will also define a general notion of constant-cost reductions which has not yet appeared explicitly in the literature. §.§ Notation For a matrix M ∈^X × Y, row x ∈ X, and column y ∈ Y, we will write either M_x,y or M(x,y) for the entry at x and y. For a graph G, we write G for the complement of G. For a bipartite graph G = (X,Y,E) with a fixed bipartition, write G for the bipartite complement, which has edge xy if and only if xy is not an edge of G. The adjacency matrix of a graph G = (V,E) is the matrix _G ∈^V × V with _G(x,y) = 1 if and only if xy ∈ E. For a bipartite graph G = (X,Y,E) with a fixed bipartition, the bipartite adjacency matrix is the matrix _G ∈^X × Y with _G(x,y) = 1 iff xy ∈ E, where we note that the rows are indexed by X instead of the full set of vertices X ∪ Y (and similar for the columns). For a graph G and disjoint sets X,Y ⊆ V(G), we will write G[X,Y] for the semi-induced bipartite subgraph, which is the bipartite graph G[X,Y] = (X,Y,E) defined by putting an edge between x ∈ X and y ∈ Y if and only if xy are adjacent in G. (In particular, any edges within X or Y in G are not present in G[X,Y].) §.§ Sign-Rank For a matrix M ∈^N × N, the sign-rank of M is denoted _±(M) and it is the minimum d ∈ such that there exists a matrix R ∈^N × N of rank d with M = (R), where (R) ∈^N × N is the matrix with entries ∀ i,j ∈ [N] : (R)_i,j = (R_i,j) . Equivalently, _±(M) is the minimum d such that each row i ∈ [N] may be associated with a unit vector p_i ∈^d (which we think of as a point) and each column j ∈ [N] may be associated with a unit vector h_j ∈^d (which we think of as the normal vector for a halfspace), such that M_i,j = (p_i, h_j). In this way, the sign-rank of M is equivalent to the minimum dimension d such that M is the incidence matrix between a set of points X and a set of halfspaces Y, where the hyperplane boundaries of the halfspaces contain the origin. We require a notion of sign-rank for graphs, which we will define separately for bipartite graphs with a fixed bipartition, and for general graphs. For a bipartite graph G = (X,Y,E) with a fixed bipartition, its sign-rank _±(G) is defined as the sign-rank _±(_G) of its bipartite adjacency matrix _G ∈^X × Y. For a general graph G = (V,E), we define its partial adjacency matrix A ∈{± 1, ⋆}^V × V to be _G^*(x,y) ⋆ if x = y 1 if xy ∈ E -1 otherwise. We then define the sign-rank _±(G) as the minimum rank of a matrix R such that ∀ i ≠ j : (R_i,j) = _G^*(i,j) . Specifically, we do not make any requirement on the diagonal entries. 
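To illustrate the definition just given, the following helper (ours, assuming NumPy; not part of the paper) checks whether a candidate matrix R realises the partial adjacency matrix _G^*: off the diagonal, the sign of R must be +1 on edges and -1 on non-edges, while the diagonal entries are unconstrained. The rank of any R passing this check certifies an upper bound on the sign-rank of G:

# ---- illustration (Python, assumes NumPy) ----
import numpy as np

def sign_represents_graph(adj, R):
    # `adj` is a 0/1 adjacency matrix; `R` is a candidate real matrix.
    # Returns True iff sign(R[i, j]) = +1 exactly when ij is an edge, for all i != j
    # (off-diagonal entries of R must be nonzero); diagonal entries are ignored.
    adj = np.asarray(adj)
    R = np.asarray(R, dtype=float)
    off = ~np.eye(adj.shape[0], dtype=bool)
    want = np.where(adj == 1, 1.0, -1.0)
    return bool(np.all(np.sign(R[off]) == want[off]))
# ---- end illustration ----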
§.§ Communication Complexity and Margin For a matrix M ∈^N × N, we will write (M) for the public-coin randomized communication complexity of M, with success probability 2/3. In this model, Alice receives a row x ∈ [N] and Bob receives a column y ∈ [N] and they must output M(x,y). They are given shared access to a string of random bits, and they take turns sending messages that depend on their respective inputs and the random string. They must output the correct answer with probability at least 2/3 over the random string, and the complexity of a protocol is the total number of bits communicated between the players on the worst-case inputs x,y. (M) is the minimum complexity of any such protocol computing M. See <cit.>. The standard notion of a (total, Boolean-valued) communication problem is a sequence = (P_N)_N ∈ of matrices, where P_N ∈^N × N, and the complexity of the problem, denoted (), is the function N ↦(P_N). However, we are interested in the complexity of classes of matrices (specifically adjacency matrices of graphs belonging to some graph class), not merely sequences of matrices, where there is a variety of N × N matrices instead of just one. So we define communication problems more generally, as in <cit.>. A communication problem is a set = ⋃_N ∈_N of Boolean matrices, where _N is a finite set of matrices in ^N × N. We then define the communication complexity () as the function N ↦max_P ∈_N(P) . For a class of graphs, we write _ for the communication problem that is the set of adjacency matrices of graphs in . If is a class of bipartite graphs, we take the bipartite adjacency matrices. We abuse notation and write () = (_), so that () is the function N ↦max{ R(_G) | G ∈ has N vertices } . Communication complexity is always upper bounded by the number of bits n in the input, or in our notation, by ⌈log N ⌉. We are interested in determining which communication problems have constant cost, which means that there exists a constant c such that (M) ≤ c for all M ∈. One way to rule out a constant-cost protocol for a problem is if the Greater-Than communication problem appears as a subproblem of . Formally, this is captured by the stability condition (see <cit.>): Let be any graph class which is not stable. Then () = ω(1). As mentioned in the introduction, having constant communication cost is equivalent to having constant margin, due to the following inequality, which follows from results of <cit.>: Let M ∈^N × N. Then Ω(log1/(M)) ≤(M) ≤ O(1/(M)^2) . §.§ Constant-Cost Communication Reductions and Equality One way to obtain constant-cost protocols is by reduction to the Equality problem, for which we require the definitions of the Equality problem and a notion of reduction. The Equality communication problem is the set { I_N × N : N ∈} where I_N × N denotes the N × N identity matrix. In other words, for input size N, Alice and Bob receive elements x,y ∈ [N] and wish to decide whether x = y. It is well-known that () = 2. Constant-cost communication reductions, specifically to the Equality problem, have been used implicitly in several prior works. Here we choose to explicitly define constant-cost reductions in general[This general definition of constant-cost reductions has arisen out discussions with several other researchers.]. For this, we require the notion of a query set. A query set is a set of matrices that is closed under the following operations: * For every Q ∈ and any Q' obtained by row and column permutations of Q, Q' ∈. * For every Q ∈, if Q' is any submatrix of Q then Q' ∈. 
* For every Q ∈, if Q' is obtained by duplicating a row or a column of Q, then Q' ∈. For a set of matrices, we define () to be the closure of under these operations. In the communication complexity literature, () was recently named the set of blocky matrices <cit.>. In graph theory, () are the adjacency matrices of disjoint unions of bicliques, also called bipartite equivalence graphs. It is easily verified that for any constant c, if () ≤ c then (()) ≤ c. However, we caution that (()) ≤() does not hold for non-constant complexities, because () includes all submatrices of and (·) takes the maximum complexity over all size-N matrices (see <cit.> for examples). We now give two equivalent definitions for reductions between problems; one algorithmic and one structural. Let be a communication problem and let P ∈^N × N. A deterministic protocol computing P with oracles is a rooted binary tree T where each leaf ℓ is assigned a value b(ℓ) ∈ and inner node v is assigned an N × N matrix Q_v ∈(), with the following conditions. On each pair of inputs x,y ∈ [N] the protocol begins at the root node v of T. At each node v, if Q_v(x,y) = -1 then the protocol proceeds by advancing the current node v to its left child, and if Q_v(x,y) = 1 then the protocol proceeds by advancing the current node v to its right child, until v becomes a leaf, at which point the protocol outputs b(v). It is required that b(v) = P_x,y for all inputs x,y. The cost of the protocol is the depth of the tree. We write ^(P) for the minimum cost of a protocol which computes P with oracles. For a communication problem , we write ^() for the function N ↦max_P ∈_N^(P). In other words, a communication protocol with oracles is a deterministic protocol where in each round, Alice and Bob transform their inputs x,y into inputs to a problem in and receive the answer from an oracle computing at unit cost. Observe that, as long as is non-trivial (does not contain only all-1 and all-(-1) matrices), the definition of () allows any single round of deterministic communication to be simulated by an oracle, so without loss of generality we may assume that every inner node of the protocol is an oracle call. If there is a constant c such that ^() ≤ c, then we say that constant-cost reduces (or just reduces) to . The following proposition is easily obtained by standard error-boosting techniques: Suppose () = O(1) and reduces to . Then () = O(1). In particular, if reduces to then () = O(1). The second, structural definition of reduction is as follows. We say reduces to if there exists a constant t such that, for every A ∈, there exists: * a function f : ^t →; and * matrices Q_1, …, Q_t ∈(), such that A = f(Q_1, …, Q_t), meaning that A(i,j) = f(Q_1(i,j), Q_2(i,j), …, Q_t(i,j)) for all i,j ∈ [N]. In the special case when is the set of identity matrices, this definition appeared independently in <cit.> and subsequently in <cit.>, and the minimum t such that the above conditions hold is a “functional” analog of rank, recently called the functional blocky-rank in <cit.>. It is not difficult to show that this structural definition of constant-cost reductions is equivalent to the algorithmic one. One may easily derive a constant-cost protocol with oracles Q_i from the structural definition, and in the other direction one may simply let the set of matrices Q_i be the inner nodes of the communication protocol and define f as the function that simulates the protocol on these queries. 
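The structural definition can be made quite literal in code. The sketch below (ours; the example query functions are hypothetical) represents each Q_i by a pair of colourings a_i, b_i, so that Q_i(x, y) = 1 iff a_i(x) = b_i(y) (a disjoint union of bicliques), and evaluates adjacency as f applied to the t Equality answers:

# ---- illustration (Python) ----
def blocky_query(a, b):
    # Q(x, y) = 1 iff a(x) == b(y): the bipartite adjacency matrix of a
    # disjoint union of bicliques, i.e. one Equality query.
    return lambda x, y: a(x) == b(y)

def reduce_to_equality(f, queries):
    # Structural form of a constant-cost reduction to Equality:
    # adjacency(x, y) = f(Q_1(x, y), ..., Q_t(x, y)).
    return lambda x, y: f(*(q(x, y) for q in queries))

# Hypothetical toy example: x, y (pairs of integers) are "adjacent" iff they agree
# in the first coordinate but disagree in the second.
q1 = blocky_query(lambda x: x[0], lambda y: y[0])
q2 = blocky_query(lambda x: x[1], lambda y: y[1])
adjacent = reduce_to_equality(lambda e1, e2: e1 and not e2, [q1, q2])
assert adjacent((3, 5), (3, 7)) and not adjacent((3, 5), (4, 7))
# ---- end illustration ----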
In the structural definition it is not hard to see an analog of <ref> for implicit representations. A similar[There are some technicalities involved in translating between the two.] notion of reductions for implicit representations appeared independently and concurrently in <cit.>, which included reductions to Equality and Greater-Than as parts of a complexity hierarchy of implicit representations. Suppose is the set of adjacency matrices for a hereditary graph class that admits an implicit representation, and suppose is the set of adjacency matrices for a hereditary graph class . If reduces to then admits an implicit representation. §.§ From Communication Protocols to Implicit Representations An observation of <cit.> is that any hereditary graph class for which () = O(1) must also have an implicit representation (and any constant-cost communication problem may be transformed into a hereditary graph class). Therefore, as argued in <cit.>, constant-cost communication is essentially the probabilistic version of implicit representations. We will present our proofs as upper bounds on communication complexity, which imply implicit representations. The general correspondence between constant-cost communication and implicit representations is non-constructive (by the probabilistic method), but for the sake of clarity and completeness, we briefly describe how to directly translate a communication protocol that uses Equality oracles (as ours will do) into an implicit representation. Recall that, for a graph G = (V,E), if (G) ≤ c then there exists a binary communication tree of depth c with each inner node v assigned to a matrix Q_v ∈(), which means that Q_v is the adjacency matrix of a bipartite equivalence graph. In other words, there are functions a_v, b_v : V → [N] such that Q_v(x,y) = 1 if a_v(x) = b_v(y) 0 otherwise. To obtain an implicit representation, we need to define a decoder D and encodings (·) for each graph G ∈. We define (x) for each x ∈ V by writing down the values a_v(x), b_v(y) for each inner node v of the tree, together with the output values at the leaves of the tree. Each value a_v(x) and b_v(x) requires at most ⌈log N ⌉ bits, and there are at most 2^c nodes in the tree, which is constant, so the size of the encoding is O(log N). The decoder D, on inputs (x) and (y) for x,y ∈ V, may use the values of a_v(x) and b_v(y) for each node v, together with the outputs on the leaves, to simulate the communication protocol. § COMMUNICATION BOUNDS FOR EXCLUDED CYCLES AND SUBDIVIDED STARS Our results for unit disk graphs and matrices of sign-rank 3 will follow from a more general result on bipartite graphs excluding either long cycles or subdivided stars, which we prove in this section. Recall the definition of the subdivided star S_s, t, <ref>. * Our main tool will be the decomposition, which we borrow from <cit.>, defined below. §.§ Decomposition: Definition, Existence, and Properties The following definition of the decomposition is taken from <cit.>. We will only apply the decomposition to bipartite graphs in this paper, so we state the special case of the decomposition for bipartite graphs. See <ref> for an illustration. A decomposition of a connected bipartite graph G is a rooted tree Y satisfying the following properties: * Each node of Y is a subset of V(G), called a bag, and the nodes of Y form a partition of V(G). For each vertex v ∈ V(G), write _Y(v) for the unique bag in Y that contains v. We will drop the subscript Y when the decomposition is clear from context. 
* The root bag of Y is a singleton containing the root vertex. * If u,v ∈ V(G) are adjacent then (u) is an ancestor of (v) or vice-versa. * For every bag B of Y, the subgraph of G induced by B together with all of its descendents is connected. * For every non-root bag B of Y, there exists a vertex h(B), called the hook of B, which belongs to the parent bag of B and has the property that h(B) is adjacent to all vertices of B and non-adjacent to all vertices in the strict descendents of B. For each bag B, we write (B) for the length of the path from the root bag to B in Y (where the depth of the root bag is 0). For each ℓ∈, we say that level ℓ of Y is the set of all bags B with (B) = ℓ. A decomposition for a disconnected bipartite graph G is the union of decompositions for its connected components. There is a simple algorithmic proof that such decompositions always exist <cit.>. For every connected bipartite graph G and vertex r ∈ V(G), there exists a decomposition of G with root vertex r. Given G and r, this decomposition can be computed in polynomial time. A path P = (v_0, v_1, v_2, …, v_k) in G is a hook path (with respect to Y) if v_i is the hook of (v_i-1) for every i ∈ [k]. Observe that any hook path with respect to a decomposition is an induced path. decompositions are typically used in the case where some induced path P_t is forbidden, in which case the depth of the decomposition is bounded. In our case, we will not necessarily have a forbidden P_t or bounded depth of the decomposition, but we will see that the decomposition has a different structure that will permit efficient communication protocols. For this we define the notion of back degree. Given a decomposition Y of G. We say that a bag B of Y has an edge to another bag B' in Y if there exist a vertex in B and a vertex in B' that are adjacent. The back-degree of a bag B in Y is the number of ancestor bags of B to which B has an edge. The maximum back-degree of Y is the maximum back-degree of any of its bags. Note that decomposition of a P_t-free graph has depth at most t, and therefore the maximum back-degree of the decomposition is also bounded by t. In the next two sections we show that if a graph has bounded chain-index and either * does not contain long induced cycles (<ref>), or * does not contain a fixed subdivision of a star (<ref>), then its decompositions have bounded maximum back-degree. In <ref>, we give a general communication protocol for decompositions with bounded maximum back-degree. Before proceeding <ref> and <ref>, we introduce some notation and properties of the interactions between bags in decompositions that are used in both sections. Let Y be a decomposition of a bipartite graph G = (X,Y,E), and let B be a bag of Y with (B) > 0. Write h for the hook of B. Let A_1, A_2, …, A_r be some ancestors of B, excluding the immediate parent of B, to which B has an edge. Then the following properties are easy to verify: Let s ∈ and suppose that (A_1) < (A_2) < … < (A_r) and (A_i+1) - (A_i) ≥ s for all i ∈ [r-1]. For i ∈ [r], we define h_i,1 to be the hook of A_i, and for z ∈ [s-1], inductively define h_i,z as the hook of (h_i,z-1). For each i ∈ [r], let a_i ∈ A_i be a neighbour of some b_i ∈ B. Then the following properties hold: * The hook h of B is adjacent to each b_i. For each i ∈ [r] and z ≥ 1, a_i is not adjacent to h, because they are on the same side of the bipartition of G, and h_i,z is not adjacent to h, because h_i,z is a hook in an ancestor bag of (h) that is not the parent of (h). 
* For each i,j ∈ [r], a_i is not adjacent to a_j, because they are on the same side of the bipartition of G. * For each 1 ≤ i < j ≤ r and each z ≥ 1, h_i,z is not adjacent to a_j, because h_i,z is a hook that is not in the parent bag of A_j. * For each i ∈ [r] and each z ≥ 2, we have h_i,z not adjacent to a_i because h_i,z is a hook that is not in the parent bag of (a_i). * For each i,j ∈ [r], and z ≥ 1, h_i,z is not adjacent to b_j because h_i,z is a hook that is not in the parent bag of B. §.§ Excluding Long Cycles For any t, k ∈, there exists a constant ℓ such that the following holds. Let G = (X,Y,E) be any (C_t, C_t+1, C_t+2, …)-free bipartite graph with (G) < k. Let Y be a decomposition of G. Then Y has maximum back-degree at most ℓ. Without loss of generality we assume that t ≥ 4. Let R be the Ramsey number that guarantees that a complete graph on R vertices with edges colored by 2^t-4 colors has a monochromatic clique of size r max{ 2, k }. Let ℓ = (t-3) · R and let B be a bag of Y. If B has depth at most ℓ in Y the result holds trivially, so we will assume that B has depth greater than ℓ. Let A”_1, A”_2, …, A”_m be the ancestors of B, excluding the immediate parent of B, to which B has an edge. For each i ∈ [m], let a”_i ∈ A”_i be a neighbour of some b”_i ∈ B. Assume for the sake of contradiction that B has edges to more than ℓ ancestors, so m ≥ℓ. Then there is a subsequence of ancestor bags A'_1, …, A'_R such that (A'_i+1) - (A'_i) ≥ t-3, for each i ∈ [R-1], so that there are at least t-4 levels of the decomposition separating each bag in this subsequence. We will write a'_i a”_i^* and b'_i b”_i^*, where i^* is the index of the bag satisfying A”_i^* = A'_i. For each A'_i, let h'_i,1 be the hook of A'_i, and for 1 < z ≤ t-3, inductively define h'_i,z as the hook of (h'_i,z-1). For each pair { A'_i, A'_j } with i < j we assign a color 𝖼𝗈𝗅{A'_i, A'_j}∈^t-4 as follows: the z^th bit is 1 if and only if a_i' is adjacent to h_j,z', for z ∈ [t-4]. By Ramsey's theorem, we may now choose a subsequence A_1, …, A_r of ancestor bags, where for each i ∈ [r] there is a corresponding i^* ∈ [R] such that A_i = A'_i^*, and each pair {A_i, A_j} with 1 ≤ i < j ≤ r has the same color. We will now obtain a contradiction for each possibility of this color. We will write a_i a'_i^*, b_i b'_i^*, and h_i,z h'_i^*, z, for each i ∈ [r] and z ∈ [t-3], and use the notation and the properties from <ref>. Case 1: There is z ∈ [t-4] such that the z^th bit of the color is 1. Consider the subgraph H induced by the vertices {a_1, a_2, …, a_r}∪{ h_1,z, h_2,z, …, h_r,z}. For 1 ≤ i < j ≤ r, we have a_i adjacent to h_j,z due to the color, and h_i,z is not adjacent to a_j due to Property <ref>. Thus, by definition, (G) ≥(H) = r ≥ k, a contradiction. Case 2: All bits of the color are 0. Consider the hook path P from a_2 to h_1,1. Let v be the first (i.e. closest to a_2) vertex on P that is adjacent to a_1. Such a vertex exists because a_1 is adjacent to the last vertex h_1,1 of the path. Let P' be the subpath of P from a_2 to v. By Property <ref> and the color assumption, P' contains the first t-2 vertices of P: a_2, h_2,1, h_2,2, …, h_2,t-4, h_2,t-3. Now, if b_2 is adjacent to a_1, then b_2,P',a_1,b_2 is an induced cycle of length at least t. Similarly, if b_1 is adjacent to a_2, then b_1, P', a_1, b_1 is such a cycle. Finally, if neither b_2 is adjacent to a_1, nor b_1 is adjacent to a_2, then b_1 ≠ b_2 and, by Properties <ref> and <ref>, h,b_2,P',a_1,b_1,h is a forbidden induced cycle. 
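Before turning to subdivided stars, we record for concreteness how a decomposition with the properties of <ref> can be computed. The sketch below (ours; plain Python, adjacency as a dict of neighbour sets) follows the standard argument behind <ref> and may differ in inessential details from the construction in the cited works: the root bag is {r}, and the children of a bag B covering a region are obtained by picking, for each connected component C of the remaining region, a vertex h of B with a neighbour in C and taking N(h) ∩ C as the new bag with hook h.

# ---- illustration (Python) ----
from collections import deque

def gyarfas_decomposition(adj, root):
    # Decomposes the component of `root`. Returns a list of bags, each bag being
    # a triple (vertices, parent_index, hook); the root bag has no parent/hook.
    bags = [({root}, None, None)]
    work = deque([(reachable(adj, root) - {root}, 0)])   # (region below bag, bag index)
    while work:
        region, parent = work.popleft()
        for comp in components(adj, region):
            # hook: a vertex of the parent bag with a neighbour in this component
            hook = next(v for v in bags[parent][0] if adj[v] & comp)
            bag = adj[hook] & comp                        # all of `bag` is adjacent to hook
            bags.append((bag, parent, hook))
            work.append((comp - bag, len(bags) - 1))      # hook has no neighbour below
    return bags

def reachable(adj, s):
    seen, stack = {s}, [s]
    while stack:
        v = stack.pop()
        for u in adj[v] - seen:
            seen.add(u); stack.append(u)
    return seen

def components(adj, vertices):
    todo = set(vertices)
    while todo:
        s = todo.pop()
        comp, stack = {s}, [s]
        while stack:
            v = stack.pop()
            for u in (adj[v] & vertices) - comp:
                comp.add(u); stack.append(u)
        todo -= comp
        yield comp
# ---- end illustration ----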
§.§ Excluding Subdivisions of Stars For any s, t, k ∈, there exists a constant ℓ such that the following holds. Let G = (X,Y,E) be any bipartite graph with (G) < k that does not contain S_s, t as an induced subgraph. Let Y be a decomposition of G. Then Y has maximum back-degree at most ℓ. Let R be the Ramsey number that guarantees that a complete graph on R vertices with edges colored by 2^3+(t-1) colors has a monochromatic clique of size r max{ s, k }. Let ℓ = t · R + 1 and let B be a bag of Y. If B has depth at most ℓ in Y the result holds trivially, so we will assume that B has depth greater than ℓ. Let A”_1, A”_2, …, A”_m be the ancestors of B, excluding the immediate parent of B, to which B has an edge, meaning that for each i ∈ [m], there exists a vertex b”_i ∈ B with an edge to a vertex a”_i ∈ A”_i. Assume for the sake of contradiction that B has edges to more than ℓ ancestors, so m ≥ℓ. Then there is a subsequence of ancestor bags A'_1, …, A'_R such that (A'_1) ≥ t and (A'_i+1) - (A'_i) ≥ t, for each i ∈ [R-1]; in particular there are at least t-1 levels of the Gyárfás decomposition separating each bag in this subsequence. We will write a'_i a”_i^* and b'_i b”_i^*, where i^* is the index of the bag satisfying A”_i^* = A'_i. For each A'_i, let h'_i,1 be the hook of A'_i, and for 1 < z ≤ t-1, inductively define h'_i,z as the hook of (h'_i,z-1). For each pair { A'_i, A'_j } with i < j we assign a color 𝖼𝗈𝗅{A'_i, A'_j}∈^3+(t-1) as follows: * The first bit indicates whether b'_i = b'_j (i.e. set the bit to 1 if b'_i = b'_j and 0 otherwise). * The second bit indicates whether b'_i is adjacent to a'_j. * The third bit indicates whether b'_j is adjacent to a'_i. * The remaining t bits indicates whether a'_i is adjacent to h'_j,z, for z ∈ [t-1]. By Ramsey's theorem, we may now choose a subsequence A_1, …, A_r of ancestor bags, where for each i ∈ [r] there is a corresponding i^* ∈ [R] such that A_i = A'_i^*, and each pair {A_i, A_j} with i < j has the same color. We will now obtain a contradiction for each possibility of this color. We will write a_i a'_i^*, b_i b'_i^*, and h_i,z h'_i^*, z, for each i ∈ [r] and z ∈ [t-1], and use the notation and the properties from <ref>. Case 1: There is z ∈ [t-1] such that the (3+z)^th bit of the color is 1. The argument is exactly as in Case 1 of <ref>. Consider the subgraph H induced by the vertices {a_1, a_2, …, a_r}∪{ h_1,z, h_2,z, …, h_r,z}. For 1 ≤ i < j ≤ r, we have a_i adjacent to h_j,z due to the color, and h_i,z is not adjacent to a_j due to Property <ref>. Thus, by definition, (G) ≥(H) = r ≥ k, a contradiction. Case 2: The first bit or second bit of the color is 1, and the (3+z)^th color is 0 for all z ∈ [t-1]. Consider the subgraph induced by the vertices {b_1}∪⋃_i=1^s{a_i, h_i,1, h_i,2, …, h_i,t-1} . Since each A_i is separated by at least t-1 levels of Y, each of the above named vertices are distinct. If the first bit of the color is 1, then we have b_1 adjacent to each a_i by definition, since b_1 = b_2 = … = b_s. If the first bit of the color is 0 but the second bit of the color is 1, then we have b_1 adjacent to each a_i because of the color. For each 1 ≤ i < j ≤ s and z ∈ [t-1], we have a_i not adjacent to h_j,z because the associated bit of the color is set to 0, and we have a_j not adjacent to h_i,z by Property <ref>. For z ≥ 2, we have a_i not adjacent to h_i,z by Property <ref>. We have a_i not adjacent to a_j by Property <ref>. And we have b_1 not adjacent to h_i,z by Property <ref>. 
But we have b_1 adjacent to each a_i, as well as edges a_ih_i,1 and h_i,1h_i,2, h_i,2h_i,3, …, h_i,z-1h_z by definition. So the subgraph induced by the considered vertices is S_s, t, which is a contradiction. Case 3: The first two bits of the color are 0, the third bit is 1, and the (3+z)^th bit is 0 for each z ∈ [t-1]. Consider the subgraph H induced by the vertices {b_1, …, b_k}∪{ a_1, …, a_k }. For each i < j, a_i is adjacent to b_j due to the third bit of the color, but b_i not adjacent to a_j due to the second bit of the color. Then we have (G) ≥(H) = r ≥ k, a contradiction. Case 4: All bits of the color are 0. Consider the subgraph induced by the vertices { h }∪⋃_i=1^s { b_i, a_i, h_i,1, …, h_i,t-2} . Since each bag A_i in the sequence is separated by at least t-1 levels of Y and the first bit of the color is 0, each of the named vertices above is distinct. By Property <ref>, h is adjacent to none of the vertices a_i, h_i,1, …, h_i,t-2 for every i ∈ [s]. For each i < j, we have b_i not adjacent to a_j and a_i not adjacent to b_j due to the color. For each z ∈ [t-1], we have h_i,z not adjacent to b_i or b_j due to Property <ref>; and we have h_i,z not adjacent to a_j due to Property <ref> and a_i not adjacent to h_j,z due to the color. For z ≥ 2 we have h_i,z not adjacent to a_i due to Property <ref>. On the other hand, we have edges hb_i for each i ∈ [s] by definition, along with edges b_ia_i, a_ih_i,1, and h_i,1h_i,2, …, h_i,t-3h_i,t-2. Therefore the induced subgraph is S_s, t, which is a contradiction. §.§ A Communication Protocol for the Gyárfás Decomposition Let G be a connected bipartite graph and let Y be a decomposition of G. For any bag B of Y with depth d = (B), let G_B denote the subgraph of G induced by B together with all of the descendent bags of B in Y with depth d' ≢d 2. We require the next two lemmas of <cit.>. Let B be a bag of Y with (B) = d for any d ≥ 2. Then (G_B) < (G). Let B be a bag of Y with (B) = 1. Let C be a connected component of G_B and Y_C be a decomposition of C rooted at a vertex r_C ∈ V(C) ∩ B. Let B' be a bag of Y_C with (B') = d ≥ 1. Then (C_B') < (G). We will also require the following easy fact. Let be the class of bipartite graphs G with (G) = 1. Then () ≤ 2. Note that the chain-index of P_4, the 4-vertex path, is 2. Thus each G ∈ is P_4-free and therefore is a disjoint union of bicliques, an equivalence graph. Therefore Alice and Bob may compute adjacency in G by using 1 bit of communication to ensure that their input vertices x and y are on opposite sides of the bipartition, and using 1 call to the oracle to check if x,y are in the same biclique. Our first main result, <ref>, follows from the next lemma, applied together with <ref>. Let be a hereditary class of bipartite graphs that is closed under bipartite complementation, and which satisfies the following conditions: * There exists a constant k such that () ≤ k. * There exists a constant ℓ such that for any G = (X,Y,E) ∈, any decomposition of G has back-degree bounded by ℓ. Then there exists a constant c such that () ≤ c. We prove the theorem by induction on k. The base case k = 1 is established in <ref>. Let x,y ∈ V(G) be Alice's and Bob's inputs, respectively. We may assume without loss of generality that G is connected and that x and y are in opposite parts of the bipartition of G, since Alice and Bob may use one oracle call to check whether their inputs x and y are in the same connected component, and use 1 bit of communication to determine whether x and y are in opposite parts. 
Let Y be a decomposition of G. The communication protocol proceeds as follows. We will assume that the root vertex of Y is on the left side of the bipartition of G, and that Alice's input x is on the left side and Bob's input y is on the right side of the bipartition. * Using 1 bit of communication, Alice tells Bob whether x is the root vertex of Y. If so, Bob outputs 1 if y has depth 1 in Y and the protocol terminates. The protocol is correct in this case, since by <ref>, all vertices at depth 1 are adjacent to the root vertex. * Using 1 bit of communication, Bob tells Alice whether y has depth 1 in Y. If so, they perform the following: * Using 1 call to the oracle, Alice and Bob decide if (x) is a descendent of (y). This is possible because Alice and Bob each know the set of level 1 bags of Y. If (x) is not a descendent of (y), they output 0 and the protocol terminates. The protocol is correct in this case, since by <ref>, if x and y are adjacent then (x) must be the descendent of (y) or vice versa. * Alice and Bob now agree on B = (y), so they each compute the connected components C_1, …, C_m of G_B and agree on decompositions Y_1, …, Y_m of these components, respectively, where the root vertex of each decomposition is on the right side of the bipartition. Using 1 call to the oracle, they decide if x,y belong to the same connected component of G_B. If not, they output 1 and terminate the protocol. The protocol is correct in this case by definition. * Let i be the index of the component C_i of G_B containing both x and y. Using 1 bit of communication, Bob tells Alice whether y is the root vertex of Y_i. If so, Alice outputs 0 if x has depth 1 in Y_i and the protocol terminates. The protocol is correct in this case, since by <ref> all vertices of depth 1 in Y_i are adjacent to the root vertex y in G_B (and therefore non-adjacent in G). * By the assumption of bounded back-degree, _Y_i(x) has edges to at most ℓ of its ancestors in Y_i. Call these ancestors A_1, …, A_ℓ' where ℓ' ≤ℓ. Using ℓ calls to the oracle, Alice and Bob determine whether A_j = B' for some j ≤ℓ', where B' _Y_i(y). * If A_j = B' then Alice and Bob inductively compute adjacency in the graph (C_i)_B', which is the bipartite complement of a graph in and therefore is contained in , and which by <ref> satisfies ((C_i)_B') < (G). They then output the opposite value and terminate the protocol. The protocol is correct in this case since, by induction, they will compute adjacency of x,y in (C_i)_B', which is an induced subgraph of G, so x,y have the opposite adjacency as in G. * If A_j ≠ B' for all j, the protocol proceeds as below. * Similar to step <ref>, _Y_i(y) has edges to at most ℓ of its ancestors in Y_i. Call these ancestors A_1, …, A_ℓ' where ℓ' ≤ℓ. Using ℓ calls to the oracle, Alice and Bob determine whether A_j = B' for some j ≤ℓ', where B' _Y_i(x). * If A_j = B' then Alice and Bob inductively compute adjacency in the graph (C_i)_B', which again is contained in and by <ref> satisfies ((C_i)_B') < (G). They then output the opposite value and terminate the protocol. The protocol is correct in this case since, by induction, they will compute adjacency of x,y in (C_i)_B', which is an induced subgraph of G, so x,y have the opposite adjacency as in G. * If A_j ≠ B' for all j, then Alice and Bob output 1 and the protocol terminates. The protocol is correct in this case because x,y are adjacent in G if and only if they are non-adjacent in C_i. 
By <ref>, if they are adjacent in C_i then either _Y_i(x) is an ancestor of _Y_i(y) or vice versa. From step <ref>, we know that if _Y_i(y) is an ancestor of _Y_i(x), then _Y_i(x) has no edges to _Y_i(y), so x,y are non-adjacent in C_i and therefore adjacent in G. From the current step, we know similarly that if _Y_i(x) is an ancestor of _Y_i(y) then x,y are again non-adjacent in C_i and therefore adjacent in G. * Now guaranteed that x and y are each in bags at depth 2 or higher, Alice and Bob proceed similarly as in steps <ref> and <ref>, with the following differences. Here, Y is used instead of Y_i. In step <ref>, the protocol outputs 0 instead of 1, because they are operating on the graph G itself instead of an induced subgraph of the bipartite complement G. When applying the inductive hypothesis, we use <ref> instead of <ref>, and the players do not flip the output of the protocol applied to G_B'. Correctness again follows by induction. This concludes the proof. § APPLICATION TO SIGN-RANK 3 AND UNIT DISK GRAPHS We now prove our results <ref> for graphs of sign-rank 3 and unit disk graphs. This will require the notion of edge-asteroid triples (see e.g. <cit.>). A set of three edges in a graph is called an edge-asteroid triple if for each pair of the edges, there is a path containing both of the edges that avoids the neighbourhoods of the end-vertices of the third edge (see <ref> for an illustration). We say that a graph class is edge-asteroid-triple-free if no G ∈ contains an edge-asteroid triple. Since S_3, 3 and C_t for t ≥ 10 contain edge-asteroid triples, we make the following simple observation: Let G be any bipartite graph that is edge-asteroid-triple-free. Then G is both S_3, 3-free, and C_t-free for all t ≥ 10. This observation will allow us to apply our <ref>, but it requires that our graphs G and their complements both be edge-asteroid-triple-free. Unit disk graphs and graphs of sign-rank 3 are not necessarily edge-asteroid-triple-free. But we show that we can decompose these graphs into pieces which satisfy the necessary conditions. §.§ Sign-Rank 3 To apply <ref> to graphs of sign-rank 3, we will decompose these graphs into pieces which are edge-asteroid-triple-free. We achieve this by interpreting graphs of sign-rank 3 as point-halfspace incidences and projecting down into dimension 2. Let P be a set of points in ^d and H be a set of halfspaces in ^d. The incidence graph of P and H is the bipartite graph G(P, H) = ( P, H, { ph  |  p ∈ P, h ∈ H, and p ∈ h }). A bipartite graph G is a point-halfspace incidence graph in ^d if it can be represented as an incidence graph in ^d; more specifically, if there exist a set P of points and a set H of halfspaces both in ^d such that G is isomorphic to G(P, H). If d=2 we call the graph a point-halfplane incidence graph. If there exists such a representation of G, where in addition all pairwise dot products of the norm vectors of the hyperplanes defining the halfspaces in H are non-negative, then G is called a positive point-halfspace incidence graph in ^d. Any bipartite graph G = (U,W,E) of sign-rank d admits a partition U = U_1 ∪ U_2 such that G[U_1,W] and G[U_2,W] are point-halfspace incidence graphs in ^d-1. Let a : U →^d, b : W →^d be such that u ∈ U, w ∈ W are adjacent if and only if ⟨ a(u), b(w) ⟩≥ 0. 
We assume, without loss of generality, that a(u)_d ≠ 0 for every u ∈ U, and partition U into U_1 = { u ∈ U  |  a(u)_d > 0 } and U_2 = { u ∈ U  |  a(u)_d < 0 } We define a' : U →^d-1 and b' : W →^d-1 as a'(u) = 1/|a(u)_d|(a(u)_1, a(u)_2, …, a(u)_d-1) ,      b'(w) =(b(w)_1, b(w)_2, …, b(w)_d-1). Further, we define the following sets of points and halfspaces in ^d-1: P_i = { p_u = a'(u)  |  u ∈ U_i }, i = 1,2, H_1 = { h_w  |  w ∈ W, h_w = { x ∈^d-1 | ⟨ x, b'(w) ⟩≥ -b(w)_d }}, H_2 = { h_w'  |  w ∈ W, h_w' = { x ∈^d-1 | ⟨ x, b'(w) ⟩≥ b(w)_d }}. Finally, we define two point-halfspace incidence graphs G_1 = (U_1, W, E_1) and G_2 = (U_2, W, E_2), where E_1 = { uw  |  u ∈ U_1, w ∈ W, p_u ∈ h_w } and E_2 = { uw  |  u ∈ U_2, w ∈ W, p_u ∈ h_w' }. We claim that G = G_1 ∪ G_2 = (U_1 ∪ U_2, W, E_1 ∪ E_2). Indeed, for any u ∈ U and w ∈ W, uw ∈ E ⟨ a(u), b(w) ⟩≥ 0 ⟨ a'(u), b'(w) ⟩ + (a(u)_d) · b(w)_d ≥ 0 ⟨ a'(u), b'(w) ⟩≥ - (a(u)_d) · b(w)_d. Hence, if u ∈ U_1, then uw ∈ E ⟨ a'(u), b'(w) ⟩≥ - b(w)_d p_u ∈ h_w uw ∈ E_1; and if u ∈ U_2, then uw ∈ E ⟨ a'(u), b'(w) ⟩≥ b(w)_d p_u ∈ h_w' uw ∈ E_2. Any point-halfspace incidence graph G = G(P,H) in ^d admits a partition H = ⋃_i=1^2^d H_i such that each G(P,H_i) is a positive point-halfspace incidence graph. For h ∈ H, let w_h ∈^d and t_h ∈ be such that h = { x ∈^d  | ⟨ w_h, x ⟩≤ t_h }. We partition H into 2^d subsets H_α, α∈{ -1, +1 }^d, with respect to the sign patterns of the norm vectors. More specifically, h ∈ H_α if and only if (w_h)_i ≥ 0 α_i = +1 for every i ∈ [d]. Clearly, for any α∈{-1,+1}^d and any h, h' ∈ H_α, we have that ⟨ w_h, w_h'⟩≥ 0, i.e. G(P, H_α) is a positive point-halfplane incidence graph. From <ref> and <ref> we obtain the following immediate Any bipartite graph G = (U,W,E) of sign-rank d admits a partition U = ⋃_i=1^2^d U_i such that each G[U_i,W] is a positive point-halfspace incidence graph in ^d-1. We now prove that positive point-halfplane incidence graphs are edge-asteroid-triple free. Every positive point-halfplane incidence graph G is edge-asteroid-triple-free. Let G be a positive point-halfplane incidence graph and let P and H be sets of points and halfplanes respectively whose incidence graph is isomorphic to G. For a point p ∈ P we denote by x_p and y_p its coordinates respectively; for a halfplane h ∈ H we denote by a_h, b_h, t_h the coefficients of the halfplane inequality, i.e. h = { (x,y) ∈^2  |  a_h x + b_h y ≤ t_h }. Without loss of generality, we can assume that no point in P lies on the boundary of any h ∈ H. Since G is positive, using translation and rotation, we can further assume that for every h ∈ H both a_h and b_h are non-negative. This latter assumption implies the following useful claim which is straightforward to verify. Claim 1. For every h ∈ H it holds that if (x,y) ∈ h, then (x',y') ∈ h for every x' ≤ x and y' ≤ y. Suppose now, towards a contradiction, that G contains an edge-asteroid triple, and let p_1h_1, p_2h_2,p_3h_3 be its edges, where p_i ∈ P, h_i ∈ H. Note that the points p_1,p_2,p_3 are pairwise incomparable with respect to the coordinatewise order. Indeed, if for example x_p_1≤ x_p_2 and y_p_1≤ y_p_2, then by Claim 1 we would have p_1 ∈ h_2, i.e. p_1 and h_2 would be adjacent in G, which would contradict the assumption that the three edges form an edge-asteroid triple. Thus, without loss of generality, we assume that x_p_1≤ x_p_2≤ x_p_3 and y_p_1≥ y_p_2≥ y_p_3. 
Let Q=(q_1,f_1,q_2,f_2,q_3, …, f_k-1,q_k), q_i ∈ P, f_i ∈ H, q_1 = p_1 and q_k = p_3, be a path containing p_1 and p_3 that avoids the neighbourhoods of both p_2 and h_2. Since x_q_1≤ x_p_2≤ x_q_k, there exists s ∈ [k-1] such that x_q_s≤ x_p_2≤ x_q_s+1. As above, using Claim 1, we can conclude q_s is incomparable with p_2, as otherwise f_s would be adjacent to p_2 or h_2 would be adjacent to q_s, contradicting the choice of Q. Similarly, q_s+1 is incomparable with p_2. Hence, we have that y_q_s≥ y_p_2≥ y_q_s+1. Let now h̅_2 be the closure of the complement of h_2, i.e. h̅_2 = { (x,y) ∈^2  |  a_h_2 x + b_h_2 y ≥ t_h_2}, and let A_-+ = { (x,y) ∈h̅_2 |  x ≤ x_p_2, y ≥ y_p_2}, A_+- = { (x,y) ∈h̅_2 |  x ≥ x_p_2, y ≤ y_p_2}. Note that q_s ∈ A_-+ and q_s+1∈ A_+-. Hence the segment connecting q_s and q_s+1 intersects the line x = x_p_2 that separates the two sets. Let (x_p_2, y^*) ∈^2 be the point of intersection. Since both q_s and q_s+1 are in h̅_2, so is (x_p_2, y^*), which together with Claim 1 implies that y_p_2≤ y^*. Similarly, (x_p_2, y^*) is in f_s because both q_s and q_s+1 are in f_s. Consequently, by Claim 1, p_2 is also contained in f_s. This contradiction completes the proof. Let G(P,H) be a positive point-halfplane incidence graph. Then the bipartite complement of G(P,H) is also a positive point-halfplane incidence graph. Without loss of generality, we can assume that no point in P lies on the boundary of any h ∈ H. For every point p = (x_p,y_p) ∈ P we define p' := (-x_p,-y_p), and for every h = { (x,y) ∈^2  |  a_h x + b_h y < t_h }∈ H we define h' = { (x,y) ∈^2  |  a_h x + b_h y < -t_h }. Let P' = { p'  |  p ∈ P } and H' = { h'  |  h ∈ H }. We claim that G(P',H') is the bipartite complement of G(P,H). Indeed, for any p ∈ P and h ∈ H we have that p ∈ h ⟺ a_h x_p + b_h y_p < t_h ⟺ a_h (-x_p) + b_h (-y_p) > -t_h ⟺ p' ∉h'. Finally, notice that the norm vector of the hyperplane defining a halfspace h ∈ H is the same as the norm vector of the hyperplane defining h' ∈ H'. Hence, G(P',H') is a positive point-halfplane incidence graph. Let G = (X,Y,E) be a bipartite graph of sign-rank 3. Then there exists a partition Y = ⋃_i=1^2^3 Y_i such that each G[X,Y_i] is both (S_3, 3, S_3, 3)-free, and (C_t, C_t)-free for all t ≥ 10. We claim that the partition Y = ⋃_i=1^2^3 Y_i given by <ref> is a desired one. Indeed, by <ref> and <ref>, we conclude that for each i ∈ [2^3], the graph G_i ≔ G[X,Y_i] and its bipartite complement are both positive point-halfplane incidence graphs. Hence, the lemma follows from <ref> and <ref>. Let be a graph class with sign-rank at most 3. Then () = O(1) if and only if is stable. It suffices to prove that () = O(1) when is stable, due to <ref>. On graph G = (X,Y,E) and inputs x ∈ X, y ∈ Y, the players compute the decomposition Y = ⋃_i=1^8 Y_i given by <ref> and use 3 bits of communication to agree on the value i such that y ∈ Y_i. Then they compute adjacency in G[X,Y_i] by applying the protocol in <ref>. §.§ Unit Disk Graphs In this section we prove our result for unit disk graphs. A graph G is unit disk if there exists a mapping ϕ : V(G) →^2 such that xy ∈ E(G) if and only if ‖ϕ(x)-ϕ(y)‖_2 < 2. The mapping ϕ is called a realisation of G. Note that the constant 2 may be replaced with any other constant. We start by observing that unit disk graphs have sign-rank at most 4. Any unit disk graph G has sign-rank at most 4.
Let v ↦ (x_v, y_v) ∈^2 for v ∈ V(G), be a realisation of G, such that for any two distinct vertices a,b ∈ V, ab ∈ E(G) if and only if (x_a - x_b)^2 + (y_a - y_b)^2 < √(2) ⟺ x_a^2 - 2x_a x_b + x_b^2 + y_a^2 - 2y_a y_b + y_b^2 - √(2) < 0. Then, by defining σ : v ↦ (-1, 2x_v, 2y_v, -x_v^2 - y_v^2) ∈^4, ψ : v ↦ (x_v^2 + y_v^2 -√(2), x_v, y_v, 1) ∈^4 for v ∈ V(G), we see that for any distinct a,b ∈ V(G), ⟨σ(a), ψ(b)⟩ > 0 ⟺ (x_a - x_b)^2 + (y_a - y_b)^2 < √(2), so sign(⟨σ(a), ψ(b)⟩) = 1 if and only if ab ∈ E, as desired. The main tool for our application to unit disk graphs is the following lemma of <cit.>. A graph G is co-bipartite if its complement G is bipartite. Let G be a co-bipartite unit disk graph. Then the bipartite graph G and its bipartite complement do not contain any edge-asteroid triples. In particular, due to <ref>, G is both (S_3, 3, S_3, 3)-free, and (C_t, C_t)-free for all t ≥ 10. Our upper bound on the communication complexity of stable unit disk graphs will follow from a fairly straightforward decomposition of a unit disk graph into unit-length grid cells, such that between any two grid cells the graph is co-bipartite. Let be a subclass of unit disk graphs. Then () = O(1) if and only if is stable. It suffices to show that if is stable, then () = O(1), due to <ref>. Since is stable, there exists a constant k such that for all G ∈, (G) < k. Fix any G ∈ together with its realisation ϕ : V(G) →^2. For convenience, we will identify the vertices x ∈ V(G) with the corresponding points ϕ(x) ∈^2. On inputs x,y ∈ V(G), Alice and Bob will perform the following protocol. * Alice and Bob each partition ^2 into a grid with cells C_i,j for i,j ∈, where C_i,j ≔ { (z_1,z_2) ∈^2 : i ≤ z_1 < i+1, j ≤ z_2 < j+1 }. Observe that if x,y are adjacent, then if x ∈ C_i,j we must have y ∈ C_i+a, j+b for some a,b ∈{-2,-1,0,1,2}; and if x,y ∈ C_i,j then ‖x-y‖_2 < √(2), so x,y are adjacent. Let i_x, j_x ∈ be such that x ∈ C_i_x, j_x and let i_y, j_y ∈ be such that y ∈ C_i_y, j_y. * Using 1 call to the oracle, Alice and Bob check if (i_x, j_x) = (i_y,j_y). If so, they output 1 and the protocol terminates. In this case, the protocol is correct due to the observation above. * For each (a,b), (a',b') ∈{-2,-1,0,1,2}^2 such that (a,b) ≠ (0,0) and (a',b') ≠ (0,0), Alice and Bob use 2 calls to the oracle to check if both (i_x+a,j_x+b) = (i_y,j_y) and (i_y+a',j_y+b') = (i_x,j_x). If so, then Alice and Bob compute adjacency in the semi-induced bipartite graph G[X,Y] where X ≔ C_i_x,j_x∩ V(G) and Y ≔ C_i_y,j_y∩ V(G). This is possible because: * Alice and Bob each know X and Y: Alice knows (i_x+a,j_x+b) = (i_y,j_y) and Bob knows (i_y+a',j_y+b')=(i_x,j_x); and * The graph G[X,Y] has (G[X,Y]) ≤(G) ≤ k, and it is (S_3, 3, S_3, 3)-free, and (C_t, C_t)-free for all t ≥ 10, by <ref>, so we may apply <ref>. * If (i_x + a,j_x + b) ≠ (i_y,j_y) for all (a,b) ∈{-2,-1,0,1,2}^2, then Alice and Bob output 0. In this case the protocol is correct by the observation in step <ref>. This concludes the proof. § THE SIGN-RANK HIERARCHY We have now determined exactly the conditions required for graphs of sign-rank 3, and some graphs of sign-rank 4, to have constant randomized communication cost (equivalently, constant margin). Let us now consider sign-ranks 5 and above. We will see that our techniques for the lower sign-ranks, specifically the reduction to Equality, will surely fail. This witnesses a certain threshold between sign-ranks 3 and 5. It is common to study the bipartite graphs which are K_t,t-free, for some constant t.
If a class of graphs is K_t,t-free it is called weakly-sparse. This is a much stronger condition than stability: if G is K_t,t-free then it must satisfy (G) ≤ 2t. A hereditary graph class has bounded degeneracy (equivalently, bounded arboricity) if there exists a constant d such that every G ∈ has a vertex of degree at most d (equivalently, if there exists a constant a such that every G ∈ on N vertices has at most a · N edges). The following is well-known and easy to prove (see <cit.>). If a hereditary graph class has bounded degeneracy then () = O(1). Recent results of <cit.> show that, for any constant t, the K_t,t-free point-halfspace incidence graphs in dimension 3 have bounded degeneracy. Since any graph of sign-rank 4 can be written as a union of two point-halfspace incidence graphs in dimension 3 (see <ref>), we obtain the following: Let be a hereditary graph class that is weakly-sparse and has sign-rank at most 4. Then () = O(1). Our <ref> and <ref> are strengthenings of this theorem for the special cases of sign-rank 3 and unit disk graphs, where we replace the weakly-sparse condition with the much less restrictive stability condition. The proof of <cit.> uses a technique based on “shallow cuttings”. In <ref>, we give a simpler proof, using elementary geometry, of the weaker statement that K_2,t-free point-halfspace incidence graphs in dimension 3 have bounded degeneracy. However, it is not possible to extend these results even to sign-rank 5. To see this, we first require a theorem which shows that, for weakly-sparse classes, bounded degeneracy is equivalent to the existence of a reduction to Equality. The proof generalizes a theorem of <cit.> and is due to Bonamy, Esperet, & Girão, which we include here with permission and gratitude. This theorem follows from the next two lemmas. The first lemma is implicit in <cit.>. Let be a class of bipartite graphs satisfying the following Ramsey property: for any k, ℓ∈ there exists a graph G ∈ such that, for any coloring of the edges of G with at most k colors, there exists a monochromatic induced path on ℓ vertices. Then ^() = ω(1). This lemma was used in <cit.> in conjunction with a result of <cit.> that established the required Ramsey property for induced subgraphs of hypercubes, which are K_2,3-free, to show that hypercubes do not have constant-cost reductions to Equality. The next lemma generalizes this result. For any k, t, ℓ∈, there is an integer d such that, if a K_t,t-free bipartite graph G has average degree at least d, and its edges are colored with at most k colors, then G contains a monochromatic induced path of length at least ℓ. We first reduce to the case t=2. Suppose t > 2 and let d ≥ t. A result of <cit.> shows that there is a constant d' such that any K_t,t-free bipartite graph of average degree at least d' contains a K_2,2-free induced subgraph of average degree at least d. Therefore it suffices to consider the case t=2. Choose b > ℓ and set d = 2kb. Consider a K_2,2-free graph G, whose edges are colored with at most k colors. Then, if the average degree of G is at least d, there exists a color c ∈ [k] such that the graph G_c induced by the edges with color c has average degree at least 2b. Then G_c has an induced subgraph G'_c with minimum degree at least b.
We now construct a monochromatic induced path in G by induction as follows. The base case, a monochromatic path on 2 vertices, is trivial. Suppose we have obtained an induced path P_s-1 = { v_1, …, v_s-1}, for s-1 < ℓ, where each (v_i, v_i+1) is an edge of G'_c. Let N'_c(v_s-1) be the neighbors of v_s-1 in G'_c and suppose for contradiction that all vertices u ∈ N'_c(v_s-1) ∖ P_s-1 are adjacent in G to some v_i with i < s-1. Since v_s-1 has at least b > ℓ > s-1 neighbors, there are two vertices u,w ∈ N'_c(v_s-1) ∖ P_s-1 that are adjacent in G to both v_s-1 and v_i for some i < s-1. But then {v_i, v_s-1, u, w} form an induced K_2,2, which is a contradiction. Therefore there exists a vertex v_s ∈ N'_c(v_s-1) ∖ P_s-1 which produces a monochromatic induced path P_s = { v_1, …, v_s }. This concludes the proof. §.§ Sign-Rank 5: Point-Box Incidences To show that our techniques cannot extend to sign-rank 5, even if we ask for the much stronger K_2,2-free condition instead of stability, it now suffices to show that there exists a weakly-sparse class of bipartite graphs with sign-rank 5 and unbounded degeneracy. For this we use the point-box incidence graphs. Let P be a set of points in ^2 and H a set of axis-aligned rectangles in ^2. The incidence graph of P and H is the bipartite graph G(P, H) ≔ (P, H, { ph | p ∈ P, h ∈ H, p ∈ h }). The fact that these graphs have sign-rank 5 follows from a transformation of point-box incidences in dimension 2 to point-halfspace incidences in dimension 4, which appears in <cit.>. The sign-rank of point-halfspace incidences in ^4 is at most 5. The class of point-box incidence graphs has sign-rank at most 5. What remains is the claim that weakly-sparse point-box incidence graphs on N vertices can have ω(N) edges. This is true even under the strongest condition of being K_2,2-free. The lower bound of the next lemma was proved recently in <cit.>, and the upper bound in <cit.>. We remark that the lemma remains true even if the boxes are restricted to be dyadic, the product of intervals of the form [s2^t, (s+1)2^t) with integers s,t. The maximum number of edges in a K_2,2-free point-box incidence graph is Θ(n · log n / log log n). As a consequence, K_2,2-free point-box incidence graphs have unbounded degeneracy. Combining <ref>, we get: There is a hereditary class of K_2,2-free bipartite graphs with sign-rank 5 and () = ω(1). §.§ Sign-Rank 6: Point-Line Incidences The above result shows that reductions to Equality cannot be used to prove () = O(1) in general, even for weakly-sparse classes, let alone stable ones. This leaves open the possibility that there is another method for obtaining constant-cost randomized communication protocols for weakly-sparse or even stable graph classes with sign-rank 5, 6, or any constant. However, we discuss here a recent conjecture of <cit.> regarding point-line incidences suggesting that weakly-sparse graphs of sign-rank 6 have non-constant communication complexity. Let P be a set of points in ^2 and L be a set of lines in ^2. The incidence graph of P and L is the bipartite graph G(P, L) ≔ (P, L, { pℓ | p ∈ P, ℓ∈ L, p ∈ℓ}). Point-line incidence graphs are K_2,2-free by definition, and it is well-known that the incidence graph between N points and N lines can have Θ(N^4/3) edges; therefore, <ref> guarantees that they do not reduce to Equality. Furthermore, it is known that point-line incidence graphs are point-halfspace incidence graphs in ^5 (see e.g.
<cit.>), and hence they have sign-rank at most 6: Point-line incidence graphs have sign-rank at most 6. The communication complexity of point-line incidence graphs was recently studied in <cit.>, but it remains unknown whether they have constant cost. It was conjectured that they do not: The class of point-line incidence graphs has () = ω(1). Acknowledgments We are grateful to Marthe Bonamy, Louis Esperet, and Antonio Girão for their proof of <ref>, and to Louis Esperet for communicating this proof to us and allowing us to include it here. We thank Lianna Hambardzumyan, Pooya Hatami, and Sebastian Wild for several conversations on the topic of this paper. The general definition of constant-cost reductions given in this paper has arisen partly out of collaboration with Yuting Fang, Lianna Hambardzumyan, and Pooya Hatami. We thank Mónika Csikós for telling us about <ref>. § ON THE NUMBER OF EDGES IN WEAKLY-SPARSE POINT-HALFSPACE INCIDENCE GRAPHS In this section we show that K_2,s-free point-halfspace incidence graphs in dimensions 1, 2, and 3 have linearly many edges. The same result was recently obtained by Chan and Har-Peled in <cit.> for more general classes of K_s,s-free graphs. We present our results for two reasons. First, the proof technique is completely different and might be of independent interest. Second, our bounds are more specific for the considered cases. To prove our upper bounds, we will show that every graph in a class has a vertex of bounded degree. Since the classes are hereditary, this will imply linear bounds on the number of edges. §.§ On the line In this section we will show that the K_s,s-free point-halfline incidence graphs on the real line have a linear number of edges. In fact we will show a linear bound on the number of edges in the more general class of the K_s,s-free point-interval incidence graphs. For this latter class, <cit.> shows that an n-vertex K_s,s-free point-interval incidence graph with n_p points and n_i intervals contains at most s(n_p+3n_i) edges. Our bound of (s-1)n = (s-1)(n_p+n_i) is a slight improvement over the bound from <cit.>. Let G be a K_s,s-free n-vertex point-interval incidence graph. Then G has at most (s-1)n edges. Let P be a set of points and I be a set of intervals on the real line such that G ≃ G(P, I). To prove the statement we will show that G has a vertex of degree at most s-1. Suppose that all vertices of G have degree at least s and let p be the leftmost point in P. The degree assumption implies that p belongs to at least s intervals, which we denote i_1, i_2, …, i_s. For the same reason, each of these intervals should contain the s-1 points in P closest to p, which we denote p_1, p_2, …, p_s-1. But then the vertices corresponding to i_1, i_2, …, i_s and p, p_1, p_2, …, p_s-1 induce the forbidden K_s,s. §.§ On the plane In dimensions 2 and 3, the bounds for K_s,s-free graphs from <cit.> are O(sn), and the constants in the big-O are not specified. Our bounds in dimension 2 and 3 are respectively 3(s-1)n and 5(s-1)n. To obtain them we will use the following lemma that reduces the analysis to the case where the points are in convex position. Let G ≃ G(P, H) be the incidence graph of a set P of points and a set H of halfspaces in ^d. If G is K_2,s-free and P is not in convex position, then G has a vertex of degree at most (d+1)(s-1). Suppose that P is not in convex position, and let p ∈ P be a non-extremal point of the convex hull (P). By Carathéodory's theorem, p belongs to the convex hull of at most d+1 extremal points of (P).
Let p_1, p_2, …, p_k, k ≤ d+1, be a minimal set of such extremal points. Since p belongs to the interior of ({p_1, …, p_k}), any halfspace containing p contains one of the points p_1, …, p_k. Thus, if p belongs to at least k(s-1)+1 halfspaces, one of the points p_1, …, p_k belongs to at least s of them, resulting in the forbidden K_2,s. Hence, the degree of p is at most k(s-1) ≤ (d+1)(s-1). The polytope graph of a polytope is the incidence graph of the extremal points and 1-dimensional faces of the polytope. We will need the following well-known fact. Let P and H be respectively a polytope and a halfspace in ^d. The subgraph of the polytope graph of P induced by the extremal points of P that belong to H is connected. Let G be a K_2,s-free n-vertex point-halfplane incidence graph. Then G has at most 3(s-1)n edges. Let P be a set of points on the plane and H be a set of halfplanes such that G ≃ G(P, H). We assume without loss of generality that |P| ≥ 3. To prove the lemma we will show that G has a vertex of degree at most 3(s-1). If P is not in convex position, such a vertex exists by <ref>, so we can assume that all points in P are extremal points of (P). Suppose that all vertices of G have degree at least 3(s-1)+1 and let p be an arbitrary point in P. The polytope graph of P is a cycle, and hence p has exactly 2 neighbours in this graph. <ref> implies that each of the halfplanes that contain p and some other vertices in P should also contain at least one of these 2 neighbours. Thus, since p belongs to 3(s-1)+1 ≥ 2(s-1)+1 halfplanes in H, at least s of them contain one other fixed point in P, which witnesses a forbidden K_2,s. §.§ In ^3 Let G be a K_2,s-free n-vertex point-halfspace incidence graph in ^3. Then G has at most 5(s-1)n edges. Let P and H be respectively a set of points and a set of halfspaces in ^3 such that G ≃ G(P, H). As before, to prove the lemma we will show that G has a vertex of degree at most 5(s-1). Towards a contradiction, suppose that all vertices in G have at least 5(s-1)+1 neighbours. This assumption and <ref> imply that P is in convex position, and hence all points in P are extremal points of (P). Let F be the polytope graph of P. By Steinitz's theorem (see e.g. <cit.>), F is planar, and therefore has a vertex of degree at most 5. Let p ∈ P be such a vertex. It follows from <ref> that any halfspace in H that contains p also contains at least one of the neighbours of p in F. Thus, by the pigeonhole principle, at least s halfspaces among those in H that contain p also contain one fixed neighbour of p, which witnesses the forbidden K_2,s. This contradiction completes the proof.
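To make the sign-rank-4 construction from the unit disk subsection above concrete, the following short numerical check (our own illustration, not part of the paper; the function names are ours) samples random points in the plane and verifies that the four-dimensional vectors σ(v) and ψ(v) from that proof reproduce unit-disk adjacency, since ⟨σ(a), ψ(b)⟩ = √2 − ((x_a−x_b)^2 + (y_a−y_b)^2).

import itertools
import math
import random

def sigma(x, y):
    # 4-dimensional vector assigned to a vertex, as in the sign-rank-4 proof
    return (-1.0, 2 * x, 2 * y, -(x * x + y * y))

def psi(x, y):
    # companion 4-dimensional vector; sqrt(2) is the squared-distance threshold
    return (x * x + y * y - math.sqrt(2), x, y, 1.0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
points = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(200)]

for (ax, ay), (bx, by) in itertools.combinations(points, 2):
    adjacent = (ax - bx) ** 2 + (ay - by) ** 2 < math.sqrt(2)
    assert (dot(sigma(ax, ay), psi(bx, by)) > 0) == adjacent

print("sign-rank-4 vectors agree with unit-disk adjacency on the sample")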
http://arxiv.org/abs/2307.05568v1
20230710025610
Subtraction of the foreground confusion and parameter uncertainty of resolvable galactic binaries on the networks of space-based gravitational-wave detectors
[ "Jie Wu", "Jin Li" ]
gr-qc
[ "gr-qc", "astro-ph.IM" ]
[email protected] ^1 College of Physics, Chongqing University, Chongqing 401331, China ^2 Department of Physics and Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 401331, China There are tens of millions of compact binary systems in the Milky Way, called galactic binaries (GBs), most of which are unresolved, and the gravitational waves (GWs) they emit overlap to form a foreground confusion. By simulating such foreground confusion, we have studied how LISA, Taiji and TianQin, including their alternative orbital configurations, subtract resolvable GBs when they are combined into networks. Our results indicate that, for a single detector, the number of detected resolvable GBs in descending order is: Taiji-m, Taiji-p (c), LISA, TianQin I, TianQin II. For detector combinations on the network, the foreground confusion is effectively reduced as the number of detectors grows, and the optimal combinations for each number of detectors are: Taiji-m, LISA+Taiji-m, LISA+Taiji-m+TianQin I, and LISA+Taiji-m+TianQin I+II. The sensitivity curve improves as the number of detectors increases, which makes it possible to detect other GW sources more precisely and to decrease the parameter uncertainty of resolvable GBs. Based on this, we discuss the parameter uncertainty of resolvable GBs detected by the combinations above and find that GW detection can promote electromagnetic (EM) detection. Conversely, we find that, by utilizing EM detection, determining the inclination angle can reduce the uncertainty of the GW strain amplitude by ∼93%, and determining the sky position can reduce the uncertainty of the phase by ∼30%, further strengthening the connection between GW detection and EM detection and contributing to research in multi-messenger astronomy. Subtraction of the foreground confusion and parameter uncertainty of resolvable galactic binaries on the networks of space-based gravitational-wave detectors Jin Li^1,2 ============================================================================================================================================================= § INTRODUCTION Since LIGO detected the first GW event from a binary black hole merger (GW150914) in 2015 <cit.>, a series of ground-based GW detectors, such as Advanced LIGO <cit.>, Advanced Virgo <cit.> and KAGRA <cit.>, have been built around the world, opening the window of GW detection. However, due to the limitation of the interferometer arm length, the observation window of ground-based GW detectors lies in the high-frequency band from 1 Hz to a few kHz, and low-frequency GW signals below 1 Hz cannot be effectively detected. Therefore, constructing an interferometer in space with an arm length on the order of a million kilometers is an ideal solution for detecting low-frequency GWs. The Laser Interferometer Space Antenna (LISA), the mission proposed by the European Space Agency to detect GWs in the low-frequency band, is scheduled to be launched around the 2030s <cit.>. At the same time, the Taiji mission, proposed by the Chinese Academy of Sciences to construct a space-based GW observatory similar to LISA and consisting of a triangle of three spacecraft (S/C) orbiting the Sun linked by laser interferometers, will be in operation <cit.>. Another Chinese mission, TianQin, different from LISA and Taiji, consists of three identical drag-free controlled S/C in high Earth orbits <cit.>. LISA, Taiji, and TianQin are all sensitive to the milli-Hertz frequency band.
Compared with the Hertz band, the milli-Hertz band to which space-based GW detectors are sensitive contains a large variety of GW sources. These sources are expected to carry a large amount of information about galaxy formation, galactic nuclei, the Milky Way, and the early universe <cit.>, including massive black hole binaries (MBHB) <cit.>, extreme/intermediate mass ratio inspirals (EMRIs/IMRIs) <cit.>, compact binaries in the Milky Way <cit.> and stochastic gravitational-wave backgrounds (SGWBs) <cit.>. According to current astrophysical models and observations, there are a large number of GBs in our Milky Way, whose orbital periods are less than a few hours and whose emitted GWs lie in the frequency band from 0.1 mHz to 10 mHz <cit.>. Considering the sensitivity of space-based GW detectors, the GWs emitted by tens of millions of GBs will enter the observation band at the same time, overlapping to form the galactic foreground <cit.>. Except for a small percentage of high signal-to-noise ratio (SNR) GBs known as resolvable GBs, the majority of them are unresolved, resulting in an effective noise called foreground confusion or confusion noise <cit.>. In the frequency range of 0.5∼3 mHz, the foreground confusion will be greater than the instrument noise, affecting the observation of other GW sources and creating a bump on the sensitivity curve. While the unresolved GBs constitute the foreground confusion and have a negative impact on the observation of other GW sources, the resolvable GBs are conducive to researching the evolution and distribution of GBs in our Milky Way, which is also one of the main science objectives of the space-based GW detectors <cit.>. Since the proposal of LISA, extensive research has been conducted on the foreground confusion from GBs <cit.>. Subtracting the foreground confusion as much as possible is beneficial for better observation of other GW sources. The research on subtracting the foreground confusion with LISA, Taiji, and TianQin is introduced in Ref. <cit.>, respectively. In addition to increasing the observation time and improving the sensitivity of a single GW detector, networks of GW detectors can also effectively identify more resolvable GBs and subtract the foreground confusion <cit.>. In this paper, we simulate the subtraction of the foreground confusion for different combinations of LISA, Taiji, and TianQin on the network, including their alternative orbital configurations, to determine the best combination on the network; we then draw the sensitivity curves, calculate the SNR and parameter uncertainty of the detected resolvable GBs, and discuss multi-messenger astronomy in combination with EM detection. In Sec. <ref>, we introduce the GW signal model used to simulate GBs, the response of different space-based GW detectors to GWs, as well as their instrument noise, sensitivity, and alternative orbital configurations. In Sec. <ref>, we use the population model to construct the GB signals, subtract the resolvable GBs with an iterative procedure to estimate the foreground confusion, and calculate the parameters of the resolvable GBs. In Sec. <ref>, we present the subtraction of the foreground confusion by different combinations on the network, analyze the factors responsible for the results, and plot the full sensitivity curves containing the foreground confusion. Finally, we summarize our results in Sec. <ref>.
§ GW SIGNALS AND DETECTORS §.§ GW signals from GBs Considering that GBs have orbital periods of a few hours and emit GWs at milli-Hertz frequencies, they are in the very beginning phase of the inspiral, millions of years before the merger <cit.>. Therefore, the orbital period evolves slowly and the GWs emitted by GBs can be regarded as quasi-sinusoidal signals (quasi-monochromatic sources). For the GW signal, we can use a very simple model in which the phase is decomposed in a Taylor series, and consequently, the time-domain waveform of a GB can be written as <cit.>: h_+(t)=𝒜(1+cos^2ι)cosΦ(t) h_×(t)=2𝒜cosιsinΦ(t) with Φ(t)=ϕ_0+2π f_0t+πḟ_0t^2+ Φ_D(t) where 𝒜 is the GW strain amplitude, ι is the inclination angle, Φ(t) is the orbital phase, Φ_D(t) is the Doppler phase, ϕ_0 is the initial phase, and f_0 and ḟ_0 are the frequency of the GW and its derivative. The frequency variation, also known as the frequency derivative, can be expressed with the equation described in Ref. <cit.>: ḟ_0=48/5(Gℳ/2c^3)^5/3(2π)^8/3f_0^11/3 where ℳ=(m_1m_2)^3/5/(m_1+m_2)^1/5 is the chirp mass, and G and c are the gravitational constant and the speed of light. By substituting a frequency f_0∼10^-3 into this equation, we can roughly estimate the frequency derivative ḟ_0∼10^-19, indicating that the derivative of the frequency is much lower in magnitude than the frequency itself, which is also why we treat the GWs as quasi-sinusoidal signals. Therefore, we neglect higher-order phase terms as they contribute minimally to the waveform and have little impact on the foreground confusion. Additionally, we assume that the GBs are in circular orbits and ignore the influence of a third perturbing body <cit.>. For a space-based GW detector, the periodic motion around the Sun produces the Doppler phase, which is given by <cit.>: Φ_D(t) = 2π f_0(R/c)cosβcos(2π f_mt-λ ) where R = 1 A.U. is the distance between the Sun and the Earth, f_m = 1/year is the orbital modulation frequency and (λ,β) are the Ecliptic coordinates of the GW source. §.§ Detector’s response and noise For a space-based GW detector, the GW strain recorded by the detector can be described as the linear combination of the two GW polarizations <cit.>: h(t)=F^+(t)h_+(t)+F^×(t)h_×(t) where F^+ and F^× are the antenna pattern functions. In the low-frequency limit, the antenna pattern functions in the detector’s coordinate frame can be expressed as <cit.>: F^+ = -sinγ/2[(1+cos^2θ_d)sin2ϕ_dcos2ψ_s+2cosθ_dcos2ϕ_dsin2ψ_s] F^× = -sinγ/2[-(1+cos^2θ_d)sin2ϕ_dsin2ψ_s+2cosθ_dcos2ϕ_dcos2ψ_s] where γ=π/3 is the angle between the two arms of the detector, (ϕ_d,θ_d) are the coordinates of the location of the GW source in the detector coordinate frame and ψ_s is the polarization angle. The transformation between detector coordinates (ϕ_d,θ_d) and Ecliptic coordinates (λ,β) can be found in Appendix <ref>. To explore the response of the detector to GWs at different positions, we introduce the combined tensor-mode response function: F=√(|F^+|^2+|F^×|^2) The results in the detector coordinate frame are shown in FIG. <ref>. It can be seen that positions perpendicular to the constellation plane have the highest response, implying that, for the same configuration, different orientations will affect the detection capability. Besides, the noise of the detector is another element that influences the detection ability. In this paper, we focus solely on the impact of instrument noise, composed of acceleration noise and displacement noise, when subtracting the foreground confusion.
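Before turning to the analytic noise model, the waveform and response formulas of this section can be illustrated with a minimal numerical sketch. The code below evaluates h_+ and h_× with the quadratic phase and the Doppler term, and projects them onto the detector with the low-frequency antenna patterns above (the prefactor is read as sin(γ)/2). All parameter values, rounded constants, function names, and sampling choices are our own assumptions; this is only a sketch, not the pipeline used in the paper.

import numpy as np

R_AU = 1.495978707e11                 # 1 A.U. in metres
C = 299792458.0                       # speed of light in m/s
F_M = 1.0 / (365.25 * 86400.0)        # 1/year, the orbital modulation frequency
GAMMA = np.pi / 3                     # opening angle between the two detector arms

def gb_polarizations(t, amp, f0, fdot, iota, phi0, lam, beta):
    """h_+(t) and h_x(t) of a quasi-monochromatic galactic binary."""
    doppler = 2 * np.pi * f0 * (R_AU / C) * np.cos(beta) * np.cos(2 * np.pi * F_M * t - lam)
    phase = phi0 + 2 * np.pi * f0 * t + np.pi * fdot * t**2 + doppler
    hp = amp * (1 + np.cos(iota)**2) * np.cos(phase)
    hc = 2 * amp * np.cos(iota) * np.sin(phase)
    return hp, hc

def antenna_patterns(theta_d, phi_d, psi_s):
    """Low-frequency antenna pattern functions in the detector frame."""
    pre = -np.sin(GAMMA) / 2
    fp = pre * ((1 + np.cos(theta_d)**2) * np.sin(2 * phi_d) * np.cos(2 * psi_s)
                + 2 * np.cos(theta_d) * np.cos(2 * phi_d) * np.sin(2 * psi_s))
    fc = pre * (-(1 + np.cos(theta_d)**2) * np.sin(2 * phi_d) * np.sin(2 * psi_s)
                + 2 * np.cos(theta_d) * np.cos(2 * phi_d) * np.cos(2 * psi_s))
    return fp, fc

# Example: one day of data at 15 s cadence for a 2 mHz source.
t = np.arange(0.0, 86400.0, 15.0)
hp, hc = gb_polarizations(t, amp=1e-22, f0=2e-3, fdot=1e-17,
                          iota=0.5, phi0=0.0, lam=4.7, beta=-0.1)
fp, fc = antenna_patterns(theta_d=1.0, phi_d=0.3, psi_s=0.2)
h = fp * hp + fc * hc                 # recorded strain h(t) = F+ h+ + Fx hx
print(h[:3])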
An analytical model of the detector's sensitivity curve S_n(f) can therefore be constructed from the sky-averaged response function and the instrument noise. For LISA <cit.> and Taiji <cit.>, the sensitivity curve can be expressed as follows: S_n(f) = 10/(3L^2)[P_dp+2(1+cos^2(f/f_*))P_acc/(2π f)^4] ×[1+0.6(f/f_*)^2] with P_dp =S_x[1+(2mHz/f)^4] P_acc =S_a[1+(0.4mHz/f)^2][1+(f/8mHz)^4] For TianQin <cit.>, the sensitivity curve can be written in the form: S_n(f) = 1/L^2[4S_a/(2π f)^4(1+0.4mHz/f)+S_x] ×[1+0.6(f/f_*)^2] where f_*=c/(2π L) is the transfer frequency, c is the speed of light, L is the arm length, S_a is the acceleration noise and S_x is the displacement measurement noise, all of which are given in TABLE <ref>. §.§ Alternative orbital configurations LISA, Taiji, and TianQin are all scheduled to launch a triangular constellation composed of three S/C. The difference is that LISA and Taiji adopt heliocentric orbits, whereas TianQin adopts a geocentric orbit. There are multiple orbital configurations to choose from, as detailed in FIG. <ref>, FIG. <ref> and TABLE <ref>. LISA consists of three S/C forming a 2.5×10^6 km triangle trailing the Earth by 20^∘ on a heliocentric orbit, with the constellation plane inclined by 60^∘ to the Ecliptic plane, as shown in FIG. <ref> and FIG. <ref>. Meanwhile, Taiji expects to use a LISA-like orbital configuration with a 3×10^6 km arm length, with three different orbital configuration options available <cit.>. The first configuration is called Taiji-p, which has the same inclination angle as LISA but is 20^∘ ahead of the Earth. The second configuration is exactly the same as LISA, called Taiji-c. These two configurations are shown on the right side of FIG. <ref>. The third configuration, named Taiji-m, has an inclination of -60^∘ to the Ecliptic plane and a leading angle of 20^∘ to the Earth, as shown on the left side of FIG. <ref>. Unlike LISA and Taiji, TianQin uses a geocentric orbit with a √(3)×10^5 km arm length, so the normal direction of the constellation plane remains unchanged, always pointing in the same direction <cit.>. The two orbital configurations of TianQin differ in the orientation of the normal direction of the constellation plane. The normal direction of TianQin I points towards the tentative reference source RX J0806.3+1527 (pointing towards λ = 120.4^∘, β = -4.7^∘), while the normal direction of TianQin II points perpendicular to it (towards λ = 30.4^∘, β = 0^∘), as shown in FIG. <ref> and FIG. <ref>. The observation time varies with the orbital configuration. LISA and Taiji both adopt year-round observation schemes, and Taiji's three alternative orbital configurations will not operate simultaneously. Different from the former, TianQin follows the “three months on + three months off” observation scheme, and TianQin I and TianQin II can operate simultaneously to fill each other's data gaps <cit.>, which will be considered in the subtraction methodology in Sec. <ref>. § METHODOLOGY §.§ Data analysis The SNR ρ of a GB source, which plays an important role in identifying resolvable sources, can be defined as: ρ^2=(h|h) where the inner product (·|·) is a generalisation of the time-domain correlation product and is conventionally defined as <cit.>: (a|b) =4∫_0^∞df ã^*(f)b̃(f)/S_n(f) ≃2/S_n(f_0)∫_0^T_obsdt a(t)b(t) where ã(f) and b̃(f) are the Fourier transforms of a(t) and b(t), S_n(f) is the sensitivity curve defined by Eq. <ref> and Eq. <ref>, and T_obs is the observation duration. Note that the second line of Eq.
<ref> only holds when calculating a quasi-sinusoidal signal (quasi-monochromatic source) that has an almost constant noise PSD, and it can be seen that the SNR increases as the observation duration increases. A quasi-sinusoidal signal like a GB can be represented in the spectrum using the Dirac delta function, so the signal appears as a single point with a given amplitude in the spectrum. Therefore, the SNR of a GB in Eq. <ref> can be roughly calculated as follows, which is obtained by evaluating the SNR integral <cit.>: ρ^2 =(16/5)𝒜^2T_obs/S_n(f_0) where 𝒜 is the GW strain amplitude. Eq. <ref> allows the SNR to be calculated more quickly than Eq. <ref>, and in the processing steps of Sec. <ref> we use Eq. <ref> to quickly calculate and filter the optimal resolvable GBs. Usually, a GB with an SNR greater than 7 (ρ>7) is defined as a resolvable GB <cit.>, and we can analyze the uncertainties of the resolvable GBs using the Fisher information matrix (FIM), which is defined as: Γ_ij=(∂ h/∂ξ_i|∂ h/∂ξ_j) where ξ_i represents a parameter of the GB. For high-SNR signals (ρ≫ 1), the variance-covariance matrix is obtained as the inverse of the FIM, Σ=Γ^-1, where the diagonal elements represent the variance (or mean squared error) of each parameter, and the off-diagonal elements represent the covariance (or correlation) between the parameters <cit.>. Therefore, the uncertainty of each parameter can be written as: Δξ_i=√(Σ_ii) Compared to the uncertainty of the individual coordinates, the uncertainty of the sky position is more commonly used, which can be obtained by combining the uncertainties of both coordinates <cit.>: ΔΩ=2π|sinβ|√(Σ_ββΣ_λλ-Σ_βλ^2) When calculating the FIM in Eq. <ref>, we use the following numerical differentiation approximation <cit.>: ∂ h/∂ξ_i≈[h(t,ξ_i+δξ_i)-h(t,ξ_i-δξ_i)]/(2δξ_i) When considering network detection by multiple independent detectors, the total SNR and FIM can be obtained as the sum of the inner products calculated by each detector, which can be written as <cit.>: ρ_net^2=∑_kρ_k^2=∑_k(h_k|h_k) Γ_net=∑_kΓ_k=∑_k(∂ h_k/∂ξ_i|∂ h_k/∂ξ_j) where k labels the different independent detectors. From Eq. <ref>, the network sensitivity can be obtained; its reciprocal is the sum of the reciprocal sensitivities of the individual detectors: S_net^-1=∑_kS_k^-1 §.§ Subtraction of the foreground confusion For the population simulation of GBs, we used the population datasets from the first “new” LISA Data Challenge (LDC), codenamed , which contains approximately 30 million GB sources in the milli-Hertz band <cit.>. For the convenience of data processing, we select 1% of the GBs in (3×10^5 GBs) and multiply them to achieve the same amplitude level as the actual situation to generate the galactic foreground. A sample of 3×10^5 GBs is sufficient to reproduce the same parameter distribution as the 3×10^7 GBs in , and the number of resolvable GBs should then be 1% of that in . Notice that although the multiplication operation was performed during the generation of the galactic foreground, which would increase the amplitude of a single signal, the smoothed spectrum is used in subsequent processing to obtain the same amplitude as without affecting the calculated SNR. The basic steps for subtracting the foreground confusion are shown in FIG. <ref>, which can be summarized as follows <cit.>: * Simulate the superposition h(t) of 3×10^5 GBs in the time domain and then calculate the power spectral density (PSD) of the galactic foreground. Run a running median on the PSD to estimate the foreground confusion S_c(f).
* Roughly calculate the optimal SNR ρ under the sensitivity curve of instrument noise S_n(f) using Eq. <ref>, and consider GBs with an optimal SNR greater than 3 (ρ>3) as optimal resolvable GBs, which can quickly filter out 99.6% of unresolved GBs. * For the ith optimal resolvable GB, the sensitivity curve is formed by adding the instrument noise and the foreground confusion (S_n(f)+S_c(f)), and the SNR ρ_i is calculated using Eq. <ref> and Eq. <ref>. If the SNR is less than 7 (ρ_i<7), skip it and repeat this step to calculate the SNR of the (i+1)th optimal resolvable GB. If the SNR is greater than 7 (ρ_i≥7), the GB is resolvable, and we continue with the next subtraction step. * Subtract the ith GB signal in the time domain (h(t)-h_i(t)) and use the method in Step <ref> to re-estimate the subtracted galactic confusion. Repeat Steps <ref> and <ref>, continuously subtracting resolvable GBs and re-estimating the galactic confusion until all optimal resolvable GBs have been processed. * Repeat Steps <ref>, <ref> and <ref> on the remaining optimal resolvable GBs until the number of newly subtracted GBs is 0; what remains is the galactic confusion composed of unresolved GBs. * Recalculate the SNR and FIM of the resolvable GBs using the final subtracted galactic confusion. In the above steps, it is assumed that the resolvable GBs can be subtracted perfectly without residual error, which will not be achievable in practice, and the subtraction error should be considered <cit.>. When generating the time-domain galactic foreground signal, we set the moment when the Earth is at the Vernal Equinox as zero time (t=0), and conduct observation simulations for different observation durations (T_obs={0.5, 1, 2, 4} years) to subtract the galactic confusion using the above basic steps. For observations on networks, we use the method of Eq. <ref> to calculate the SNR and FIM, and obtain the results for different networks. § RESULTS §.§ Resolvable GBs Using the method in Sec. <ref>, we simulated and calculated the number of resolvable GBs detected by different detectors and their networks for different observation times, as shown in FIG. <ref>. FIG. <ref> illustrates that as the observation time increases, the number of resolvable GBs also increases, due to Eq. <ref> and Eq. <ref>. Given the observation time, for a single detector, the number of resolvable GBs detected in descending order is: Taiji-m, Taiji-p (c), LISA, TianQin I, and TianQin II, mainly due to the arm length and orientation of the detector. In terms of arm length, from Eq. <ref> and Eq. <ref>, it can be seen that a longer arm length results in better sensitivity. Moreover, from TABLE <ref>, it can be seen that Taiji's arm length (3×10^9 m) is the longest, followed by LISA's arm length (2.5×10^9 m), and TianQin's arm length (√(3)×10^8 m) is the shortest, making Taiji detect more resolvable GBs than LISA and TianQin. In terms of orientation, FIG. <ref> shows that the detector is most sensitive to signals located perpendicular to the constellation plane (θ_d=0^∘ or 180^∘). The density of GBs in the bulge region of the Galaxy is significantly higher than that in the disk region <cit.>, therefore the closer the normal direction of the detector constellation plane is to the Galactic Center (λ = 266.8^∘,β = -5.6^∘), the greater the detector response and the more resolvable GBs can be detected. From FIG.
<ref>, it can be seen that the normal direction of Taiji-m (β = -30^∘ and β = 60^∘) is closer to the Galactic Center compared to Taiji-p (c) (β = 30^∘ and β = -60^∘) over a year, and the normal direction of TianQin I (λ = 120.4^∘,β = -4.7^∘ and λ = 300.4^∘,β = 4.7^∘) is also closer to the Galactic Center than TianQin II (λ = 30.4^∘,β = 0^∘ and λ = 210.4^∘,β = 0^∘). Therefore, Taiji-m detects more resolvable GBs than Taiji-p (c), and TianQin I detects more than TianQin II. For detection on networks, just like the result for a single detector, the arm length and orientation of the detectors are the major factors in resolvable GB detection. Because Taiji and LISA have longer arm lengths than TianQin, the networks of Taiji and LISA detect more resolvable GBs than individual Taiji or LISA, but the improvement is not significant compared to TianQin's network. Eq. <ref> indicates that the reciprocal sensitivity of the network is the sum of the reciprocal sensitivities of each detector. Therefore, as the number of detectors in the network increases, the sensitivity of the network increases, but the rate of increase decreases. In summary, it can be concluded that as the number of detectors on the network increases, the number of resolvable GBs detected will also increase. The optimal result will be achieved when LISA, Taiji-m, TianQin I and TianQin II are combined as a network. §.§ Improvement of sensitivity In order to better show the impact of the foreground confusion on the sensitivity curve, and the subtraction of the foreground confusion by different numbers of detectors on the network, we can fit the foreground confusion on a logarithmic scale with a polynomial function, which can be written as follows <cit.>: S_c(f) = 10^x with x = ∑_n = 0^5 a_n[ log_10( f/1 mHz) ]^n This fitting is only applicable to the frequency range of 0.1∼6 mHz, and the fitting parameters a_n are listed in TABLE <ref>. Different choices of fitting function can affect the final curve. Therefore, the fitting parameters given in TABLE <ref> and the curves drawn in FIG. <ref> are only used as a reference. In our previous calculations, we estimated the foreground confusion using a running median on the PSD. In FIG. <ref>(a), we plot the sensitivity curves of single detectors; it can be seen that, in the band affected by the foreground confusion, the instrument-noise sensitivity curve of Taiji is better than that of LISA, which is better than that of TianQin. In the range of 8∼1.5 mHz, the full sensitivity curves of LISA and Taiji are almost identical, because Taiji has a larger response to resolvable GBs, resulting in greater foreground confusion. In the range of 1.5∼3.5 mHz, the full sensitivity of Taiji-m is superior to that of Taiji-p (c), as Taiji-m can detect more resolvable GBs than Taiji-p (c), resulting in lower subtracted foreground confusion. In the 2∼6 mHz range, the full sensitivity of TianQin I is slightly lower than that of TianQin II, which is also because TianQin I has a greater response to resolvable GBs. In FIG. <ref>(b), we show the sensitivity curves of different numbers of detectors on the network. It can be seen that as the number of detectors on the network increases, the sensitivity curve of instrument noise decreases. Moreover, because in this range the sensitivity of TianQin is much lower than that of LISA and Taiji, the sensitivity curve of instrument noise only slightly changes after adding TianQin to the network.
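To illustrate how the analytic sensitivity model and the quick SNR estimate are combined in practice, the sketch below evaluates S_n(f) for LISA, Taiji and TianQin and the corresponding network SNR of a monochromatic GB. The arm lengths and noise levels are the commonly quoted design values and merely stand in for the entries of TABLE <ref>; the confusion term S_c(f) and the fit coefficients a_n are omitted. This is only a hedged sketch under those assumptions, not the code behind the figures.

import numpy as np

C = 299792458.0

# Nominal arm lengths (m) and noise PSDs; these stand in for the values in the table.
DETECTORS = {
    "LISA":    dict(L=2.5e9,          Sx=(1.5e-11)**2, Sa=(3e-15)**2),
    "Taiji":   dict(L=3.0e9,          Sx=(8.0e-12)**2, Sa=(3e-15)**2),
    "TianQin": dict(L=np.sqrt(3)*1e8, Sx=(1.0e-12)**2, Sa=(1e-15)**2),
}

def sensitivity(f, name):
    """Analytic instrument-noise sensitivity S_n(f) of a single detector."""
    p = DETECTORS[name]
    L, Sx, Sa = p["L"], p["Sx"], p["Sa"]
    fstar = C / (2 * np.pi * L)                      # transfer frequency
    if name in ("LISA", "Taiji"):
        P_dp = Sx * (1 + (2e-3 / f)**4)
        P_acc = Sa * (1 + (0.4e-3 / f)**2) * (1 + (f / 8e-3)**4)
        Sn = 10 / (3 * L**2) * (P_dp + 2 * (1 + np.cos(f / fstar)**2) * P_acc / (2 * np.pi * f)**4)
    else:  # TianQin
        Sn = (4 * Sa / (2 * np.pi * f)**4 * (1 + 0.4e-3 / f) + Sx) / L**2
    return Sn * (1 + 0.6 * (f / fstar)**2)

def network_snr(amp, f0, t_obs, names):
    """Quick SNR estimate: rho^2 = sum_k (16/5) A^2 T_obs / S_n,k(f0) over the network."""
    rho2 = sum(16.0 / 5.0 * amp**2 * t_obs / sensitivity(f0, n) for n in names)
    return np.sqrt(rho2)

print(network_snr(1e-22, 2e-3, 4 * 365.25 * 86400.0, ["LISA", "Taiji"]))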
As the number of detectors on the network increases, more resolvable GBs are detected and the subtracted foreground confusion becomes smaller, which demonstrates the advantage of detecting on the network for subtracting the foreground confusion. §.§ SNR and uncertainty In addition to the number of resolvable GBs detected and the sensitivity curve containing the foreground confusion, the uncertainty of the parameters of resolvable GBs is also crucial. Therefore, we calculated the FIM on different networks (choosing TJm, LISA+TJm, LISA+TJm+TQI and LISA+TJm+TQI+II because they detect the most resolvable GBs with 1, 2, 3, and 4 detectors, respectively) using Eq. <ref> ∼ Eq. <ref> to obtain the uncertainty of the different parameters, as shown in FIG. <ref>. As shown on the right side of FIG. <ref>, for the resolvable GBs detected only by Taiji-m, it can be clearly seen that as the number of detectors on the network increases, the SNR increases while the parameter uncertainties decrease. This is due to the sensitivity improvement brought by the increased number of detectors on the network. Similar to the growth rate of the number of resolvable GBs described in Sec. <ref>, the magnitude of the changes in SNR and uncertainty decreases as the number of detectors on the network increases. Increasing from one detector to two has a significant effect, but increasing from two to three is relatively less significant. Unlike the above situation, in actual detection the sets of resolvable GBs detected by different detector combinations are different. From the result on the left side of FIG. <ref>, it can be seen that the changes in SNR and uncertainty of the resolvable GBs detected on different networks are not as significant as those of the same resolvable GBs. Apart from the decrease in the uncertainties of the GW strain amplitude, frequency, and sky position, there are almost no significant changes in the remaining parameters, and some uncertainties even increase rather than decrease. For example, the initial phase and polarization angle show a slight increase when the number of detectors on the network increases from three to four. This is because, as the number of detectors on the network increases, the sensitivity improves, so many previously unresolved GBs become resolvable, adding more low-SNR resolvable GBs. Therefore, it is possible that as the number of detectors on the network increases, the uncertainty increases instead of decreasing, and the SNR decreases instead of increasing. Nonetheless, as the number of detectors on the network increases, the SNR of the same resolvable GBs increases, and their uncertainty decreases. Moreover, after adding more low-SNR resolvable GBs, the overall SNR remains almost unchanged, with some uncertainties significantly decreasing and others slightly increasing, which is sufficient to demonstrate the positive impact of increasing the number of detectors on the network. Beyond this, GW detection of resolvable GBs is also helpful for detection in the EM bands, constituting multi-messenger astronomy <cit.>. The more accurately the parameters of resolvable GBs are determined by GW detection, i.e., the lower the uncertainties, the more conducive it is to EM detection. If the sky position of the source is sufficiently accurate, it is possible to search for EM counterparts through EM follow-up observations. Among all resolvable GBs, the uncertainty of the sky position is less than 1 deg^2 (ΔΩ<1 deg^2) for 30.2∼31.6% of resolvable GBs, and less than 0.1 deg^2 (ΔΩ<0.1 deg^2) for 9.6∼10.3%. It can be seen from the data in FIG.
<ref> that among all parameters, the frequency measurement of resolvable GBs is the most accurate: for 29.2∼32.3% of the GBs, the relative uncertainty is less than 1×10^-6 (Δ f_0/f_0<1×10^-6). Since the GW frequency f_0 is directly related to the period T_p of a resolvable GB (f_0=2/T_p), this means the period can be measured accurately. Note that as the number of detectors on the network increases, the above proportions also increase. Conversely, the results of EM detection can also serve as a prior to reduce the uncertainty of GW detection. We adopt the method in Ref. <cit.>, which can be used to reduce the uncertainty of parameters from GW data by removing the respective rows and columns of the FIM. By observing GBs, the inclination angle ι can be independently determined by EM detection, and we assume that the inclination angle of the resolvable GBs can be completely determined. By calculating the uncertainty of the other parameters with the reduced FIM, we found that only the uncertainty on Δ𝒜 /𝒜 changes significantly, with the mean uncertainty decreasing by 91.9∼93.5% and the median uncertainty decreasing by 60.8∼61.9%. From Eq. <ref>, there is a degeneracy between the GW strain amplitude 𝒜 and the inclination angle ι, which is why determining the inclination angle can significantly improve the measurement of the amplitude. Using the same method, we assume that the EM counterparts can be found through EM detection, that is, the sky position (λ,β) is completely determined. In that case, the mean uncertainty on ϕ_0 is reduced by 25.8∼33.6%, the median uncertainty is reduced by 25.1∼26.9%, and the other parameters show a decrease of 2∼9%. Notice that the above situations are all very idealized and are based on the assumption that a certain parameter of all resolvable GBs is completely determined, which cannot be achieved in practice. Even so, it indicates that it is feasible to reduce the parameter uncertainty of GW detection through EM detection. In summary, GW detection and EM detection can complement each other, and as the number of detectors on the network increases, the improvement of both will be greater. § SUMMARY AND DISCUSSION In this paper, we used 1% of the data in the LDC, i.e. 3×10^5 GBs, to simulate the galactic foreground by overlapping GBs as quasi-sinusoidal signals. We treated GBs with an SNR greater than 7 as resolvable GBs, studied the number of detected resolvable GBs under different detector combinations and their alternative orbital configurations on the network, calculated the parameter uncertainties of the resolvable GBs, and plotted the fitted full sensitivity curves. Through the iterative method, we predict the number of resolvable GBs detected by different detector combinations on the network. For single detectors, the number of resolvable GBs detected, in descending order, is: Taiji-m, Taiji-p (c), LISA, TianQin I, and TianQin II. The trend of the results for different detector combinations on the network is similar to that of a single detector. The optimal combination for each number of detectors on the network is TJm, LISA+TJm, LISA+TJm+TQI, and LISA+TJm+TQI+II. Based on the above optimal combinations, we calculate the uncertainty of the parameters of the resolvable GBs using the FIM. As the number of detectors on the network increases, the uncertainty of the same resolvable GBs decreases, and the magnitude of the decrease also diminishes. The uncertainty remained reduced or almost unchanged even when more low-SNR resolvable GBs were detected.
Resolvable GBs with low uncertainty can help EM detection find electromagnetic counterparts and determine the periods of GBs, while EM detection can also serve as a prior to reduce the uncertainty of GW detection. We find that determining the inclination angle through EM detection can reduce the GW strain amplitude uncertainty by ∼93%, and determining the sky position can reduce the phase uncertainty by ∼30%. Therefore, joint GW detection on the network can complement EM detection, which is conducive to the development of multi-messenger astronomy. By fitting the full sensitivity curve containing the foreground confusion, it is possible to see intuitively the effect of a single detector and of different combinations of detectors on the network on subtracting the foreground confusion. The effect of subtracting the foreground confusion is basically proportional to the number of resolvable GBs detected. The more detectors in the network, the better the subtraction effect. In addition, it should be noted that so far no space-based GW detector has been launched, so the data related to space-based GW detectors are simulated and predicted. In fact, during the observation, the noise is assumed to be Gaussian and stationary, and the data quality is assumed to be optimal and uninterrupted <cit.>. We use the SNR to define thresholds and distinguish resolvable GBs, which is very useful and efficient for estimating the foreground confusion. Moreover, we assume that the subtraction of GBs is perfect without residual, which leads to our results being optimal and ideal. Some new and more practical methods have been proposed, such as iterative subtraction based on the particle swarm optimization algorithm <cit.> and search and subtraction using the Bayesian evidence ratio <cit.>. In future research, we can delve into multiple aspects to improve our understanding and the accuracy of the foreground confusion. Firstly, we can further investigate the relationship between GW detection and EM detection, exploring how to better combine GW detectors and EM detectors to enhance the observation and understanding of GBs <cit.>. Secondly, we can delve deeper into the impact of time-delay interferometry (TDI) technology on the foreground confusion, as well as the subtraction of the foreground confusion by different generations of TDI and different channels <cit.>. In addition, we can also consider the impact of different population models on the foreground confusion to better understand the population distribution and evolution theory of GBs. Finally, we can also consider the impact of the foreground confusion on other GW sources to better evaluate the sensitivity and accuracy of GW detection, and use the foreground noise to improve the data processing and analysis methods. In conclusion, through in-depth research on the above aspects, we can further improve our understanding and the accuracy of GW detection, so as to better explore the nature and evolution history of astrophysical events, and provide more valuable data and information for research in cosmology, astrophysics and other fields. § COORDINATE TRANSFORMATION The transformation between detector coordinates (ϕ_d,θ_d) and Ecliptic coordinates (λ,β) is based on the method described in Ref. <cit.>, and the situation in both coordinate frames is shown in FIG. <ref>.
We can use a rotation matrix R to connect detector coordinates X^d={sinθ_d cosϕ_d, sinθ_d sinϕ_d, cosθ_d} and Ecliptic coordinates X^e={cosβcosλ, cosβsinλ, sinβ}, which can be expressed as: X^e = R X^d, X^d = R^-1 X^e. For LISA and Taiji: R = ([ cosθ_l cos^2α_d + sin^2α_d, (cosθ_l - 1) sinα_d cosα_d, -sinθ_l cosα_d; (cosθ_l - 1) sinα_d cosα_d, cosθ_l sin^2α_d + cos^2α_d, -sinθ_l sinα_d; sinθ_l cosα_d, sinθ_l sinα_d, cosθ_l ]). For TianQin: R = ([ cosθ_tq cosϕ_tq sinα_d + sinϕ_tq cosα_d, cosθ_tq cosϕ_tq cosα_d - sinϕ_tq sinα_d, sinθ_tq cosϕ_tq; cosθ_tq sinϕ_tq sinα_d - cosϕ_tq cosα_d, cosθ_tq sinϕ_tq cosα_d + cosϕ_tq sinα_d, sinθ_tq sinϕ_tq; -sinθ_tq sinα_d, -sinθ_tq cosα_d, cosθ_tq ]), where α_d = 2π f_sc t + 2π/3(n-1) + α_0, n denotes the nth S/C, α_0 is the initial phase, f_sc = 1/T_sc and T_sc is the rotation period. For TianQin, T_sc = 3.65 days and f_sc ≃ 3×10^-3 mHz, while for LISA and Taiji, T_sc = 1 year and f_sc ≃ 3×10^-5 mHz. The angles in the rotation matrix R can be determined from FIG. <ref>. For LISA, Taiji-p and Taiji-c, θ_l = 60^∘ and for Taiji-m, θ_l = 120^∘. For TianQin I, θ_tq = 94.7^∘, ϕ_tq = 120.4^∘ and for TianQin II, θ_tq = 90^∘, ϕ_tq = 30.4^∘.
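As a hedged illustration of this appendix, the sketch below builds the LISA/Taiji rotation matrix R given above and maps detector-frame sky angles to Ecliptic coordinates; the inversion through the arctangent, the function names, and the example values are our own additions, not taken from the paper.

import numpy as np

def rotation_lisa_taiji(t, n=1, alpha0=0.0, theta_l=np.pi/3, f_sc=1.0/(365.25*86400.0)):
    """Rotation matrix R (detector frame -> Ecliptic frame) for LISA/Taiji-like orbits."""
    a = 2*np.pi*f_sc*t + 2*np.pi/3*(n - 1) + alpha0
    ct, st = np.cos(theta_l), np.sin(theta_l)
    ca, sa = np.cos(a), np.sin(a)
    return np.array([
        [ct*ca**2 + sa**2,  (ct - 1.0)*sa*ca,   -st*ca],
        [(ct - 1.0)*sa*ca,  ct*sa**2 + ca**2,   -st*sa],
        [st*ca,             st*sa,               ct],
    ])

def detector_to_ecliptic(theta_d, phi_d, R):
    """Map detector-frame angles (theta_d, phi_d) to Ecliptic (lambda, beta)."""
    xd = np.array([np.sin(theta_d)*np.cos(phi_d),
                   np.sin(theta_d)*np.sin(phi_d),
                   np.cos(theta_d)])
    xe = R @ xd                                     # X^e = R X^d
    beta = np.arcsin(np.clip(xe[2], -1.0, 1.0))
    lam = np.arctan2(xe[1], xe[0]) % (2*np.pi)
    return lam, beta

R = rotation_lisa_taiji(t=0.0)                      # use theta_l = 2*pi/3 for Taiji-m
assert np.allclose(R @ R.T, np.eye(3), atol=1e-12)  # R is a proper rotation
print(detector_to_ecliptic(0.4, 1.2, R))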
http://arxiv.org/abs/2307.04517v1
20230710122524
Study on the Correlation between Objective Evaluations and Subjective Speech Quality and Intelligibility
[ "Hsin-Tien Chiang", "Kuo-Hsuan Hung", "Szu-Wei Fu", "Heng-Cheng Kuo", "Ming-Hsueh Tsai", "Yu Tsao" ]
eess.AS
[ "eess.AS" ]
Study on the Correlation between Objective Evaluations and Subjective Speech Quality and Intelligibility Hsin-Tien Chiang, Kuo-Hsuan Hung, Szu-Wei Fu, Heng-Cheng Kuo, Ming-Hsueh Tsai, Yu Tsao August 12, 2023 =========================================================================================================== Subjective tests are the gold standard for evaluating speech quality and intelligibility, but they are time-consuming and expensive. Thus, objective measures that align with human perceptions are crucial. This study evaluates the correlation between commonly used objective measures and subjective speech quality and intelligibility using a Chinese speech dataset. Moreover, new objective measures are proposed combining current objective measures using deep learning techniques to predict subjective quality and intelligibility. The proposed deep learning model reduces the amount of training data without significantly impacting prediction performance. We interpret the deep learning model to understand how objective measures reflect subjective quality and intelligibility.
We also explore the impact of including subjective speech quality ratings on speech intelligibility prediction. Our findings offer valuable insights into the relationship between objective measures and human perceptions. Objective measures, subjective listening tests, speech quality, speech intelligibility § INTRODUCTION Speech quality and intelligibility are crucial in various speech-related applications, such as speech enhancement (SE), teleconferencing, voice conversion and text-to-speech, and hearing aids. As humans are the end-users of these applications, subjective listening tests are considered the most precise and trustworthy way to evaluate speech quality and intelligibility. However, conducting listening tests on a large number of participants is time-consuming and expensive. Therefore, a significant amount of research has been devoted to developing objective measures that can mathematically quantify speech quality and intelligibility. Objective measures can be divided into intrusive measures, where quality and intelligibility are estimated by comparing degraded/processed speech with clean references, and non-intrusive measures, where quality and intelligibility are calculated directly on the degraded/processed speech without clean reference. Perceptual Evaluation of Speech Quality (PESQ) <cit.> and Perceptual Objective Listening Quality Analysis (POLQA) <cit.> are intrusive speech quality measures. Despite being widely used in speech processing research, PESQ and POLQA are shown to correlate suboptimally with subjective tests <cit.>. Short-time objective intelligibility measure (STOI) <cit.> and extended STOI (ESTOI) <cit.> are popular intrusive speech intelligibility measures. However, STOI has been reported to provide suboptimal prediction capability for the subjective intelligibility results of the Wiener filtering <cit.> or deep learning (DL)-based <cit.> SE systems. Moreover, intrusive measures are less applicable to real-world scenarios because clean signals may not always be available. Compared with intrusive methods, non-intrusive methods such as ITU-T P.563 <cit.>, ANIQUE+ <cit.>, and speech-to-reverberation modulation ratio (SRMR) <cit.> overcome the limitation. A recent approach of non-intrusive methods directly predicts objective measures by using DL models without the need of a clean signal. These models are trained to predict standard objective measures, such as PESQ and STOI <cit.>. Several studies have shown high performance using this approach, but the ground-truth labels used to train these DL models are not always aligned with human perception. To better align with human perception, researchers have started to rely on ground truth human labels for model training. DNSMOS <cit.> and NISQA <cit.> are examples of DL models trained on mean opinion score (MOS) datasets, where DNSMOS focuses on distortions in SE and NISQA on distortions in communication networks. Andersen et al. <cit.> and Pedersen et al. <cit.> used convolutional neural network (CNN) models to predict subjective intelligibility. However, owing to the time-consuming nature of conducting subjective listening tests, collecting large-scale datasets of human labels to train DL-based models is challenging. One potential solution to bridge the gap between objective measures and human perception without relying on large-scale datasets of human labels is to predict human perception of speech quality and intelligibility by leveraging commonly used objective measures. 
The advantage of this approach is that it is considerably less time-consuming compared to conducting subjective listening tests. Previous studies have attempted to predict either speech quality or intelligibility using objective measures. Hu et al. <cit.> proposed composite measures for evaluating SE by linearly combining objective quality measures. Liu et al. <cit.> showed that ASR and objective quality measures have the potential to estimate intelligibility under noisy conditions. Ma et al. <cit.> reported that objective measures originally designed to predict speech quality can reliably predict the intelligibility of noise-suppressed speeches. However, there is a lack of research that considers both quality and intelligibility criteria and provides interpretations of the relationship between objective measures and human perception to indicate how objective measures reflect subjective quality and intelligibility in practical usage. In this study, we first time proposed using DL models that take a combination of off-the-shelf objective measures as inputs to predict subjective quality and intelligibility ratings. We evaluated the correlation between commonly used objective measures and subjective ratings of quality and intelligibility on a Chinese dataset called TMHINT-QI <cit.>, and then use DL techniques to propose new objective measures composed of all of the used objective measures. We demonstrated that the proposed DL model can achieve strong performance in predicting subjective quality and intelligibility ratings, even when trained on small amounts of training data. This core strength makes the DL model practical for real-world applications, as it can still maintain high accuracy without requiring a large amount of training data. Furthermore, we interpreted the proposed DL models to describe the relationship between the objective measures and subjective ratings of speech quality and intelligibility. We also investigated the potential improvement in intelligibility prediction by incorporating subjective quality ratings. This allows us to leverage the extensive research conducted in the field of speech quality assessment to enhance the assessment of speech intelligibility. Our results can provide valuable insights into the utility and limitations of objective measures in reflecting subjective quality and intelligibility ratings and potentially contribute to bridging the gap between objective measures and human perception. The remainder of this paper is organized as follows. Section <ref> describes the objective measures used in our experiments. Section <ref> details our dataset and presents our correlation analysis. We present our experimental setup and results in Section <ref>. Finally, we conclude the paper in Section <ref>. § OBJECTIVE MEASURES This study investigates several objective measures to predict subjective speech quality and intelligibility. These measures are categorized as either intrusive or non-intrusive, depending on whether a clean reference is required or not. In this section, we describe both types of objective measures. §.§ Intrusive objective measures Six different intrusive objective measures were assessed: PESQ, ITU-T P.835, normalized covariance metric (NCM), STOI, ESTOI, and word error rate (WER). PESQ evaluates speech quality and ranges from -0.5 to 4.5. ITU-T P.835 evaluates speech quality in terms of three aspects: signal quality (SIG), background noise (BAK), and overall quality (OVRL) <cit.>. 
NCM assesses the covariance between the envelopes of the clean and degraded/processed speech and provides scores ranging from 0 to 1 <cit.>. STOI and ESTOI evaluate speech intelligibility and have scores between 0 and 1. Finally, WER is calculated using Google ASR <cit.>. §.§ Non-intrusive objective measures In addition to intrusive measures, two non-intrusive objective measures were also evaluated: DNSMOS P.835 <cit.> and MOSA-Net <cit.>. DNSMOS P.835 is a multi-stage self-teaching based model that evaluates speech quality based on three aspects: signal quality (DNSMOS-SIG), background noise (DNSMOS-BAK), and overall quality (DNSMOS-OVRL). MOSA-Net uses time and spectral features and latent representations from a self-supervised model, and is originally trained to predict several objective metrics, but can be adapted for MOS predictions. Here we adopted the MOS prediction results of the MOSA-Net. Overall, four quality measures (PESQ, ITU-T P.835, DNSMOS P.835, MOSA-Net) and four intelligibility measures (NCM, STOI, ESTOI, WER) were involved in this study. Notably, we were able to obtain several objective measures, such as WER, DNSMOS, and MOSA-Net, by leveraging pre-trained APIs from third-party sources, eliminating the need for additional efforts to acquire these pre-trained models or gather extensive amounts of training data. § CORRELATIONS BETWEEN OBJECTIVE AND SUBJECTIVE ASSESSMENTS §.§ Dataset We conducted experiments using TMHINT-QI [TMHINT-QI dataset: http://gofile.me/6PGhz/4U6GWaOtY; TMHINT-QI dataset description: https://github.com/yuwchen/InQSS] <cit.>, a Chinese corpus containing noisy and enhanced data. To form the noisy data, we corrupted the clean speech from the TMHINT dataset with four types of noise (babble, street, pink, and white) at four different SNR levels (-2, 0, 2, and 5 dB). The noisy data was then enhanced by the minimum mean squared error (MMSE), Karhunen-Loéve transform (KLT), deep denoising-autoencoder (DDAE), fully convolutional network (FCN), and transformer model (denoted as Trans). Human listeners were recruited to evaluate the subjective TMHINT-QI scores. A total of 226 people aged between 20 and 50 years participated in the listening test. The quality score ranges from 1-5, where a higher value indicates a better speech quality. The intelligibility score calculates the number of words correctly recognized by listeners in a ten-word sentence; the intelligibility score ranged from 0-10. A higher intelligibility score indicates that the listeners correctly identify more words. There were 24,408 samples in total. We followed the setup in <cit.> to split the TMHINT-QI dataset into training and test sets. The subjective scores of each utterance were averaged to obtain its ground-truth score. Hence, the final training and test sets contained 12,937 and 1,978 unique utterances, respectively, along with their subjective quality and intelligibility scores. The training set was randomly split into 90% for training and 10% for validation in accordance with <cit.>. More details can be found in <cit.>. §.§ Correlation analysis We investigated the relationship between subjective quality and intelligibility ratings and objective measures of the test data by calculating the Pearson correlation coefficient (PCC). The correlation values between the subjective and objective measures are presented in Fig. <ref>, along with the correlation between subjective quality and intelligibility. Several observations are reported in these figures. 
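For illustration, the sketch below outlines how part of this correlation analysis can be reproduced with openly available tools: a few intrusive measures are computed per utterance and then correlated with the averaged subjective ratings. The pesq and pystoi Python packages, the 16 kHz sampling assumption, the file paths, and the column names are assumptions made only for this example and are not taken from the original implementation.

```python
# Hedged sketch of the correlation analysis: compute a subset of the objective
# measures with open-source packages and correlate them with the averaged
# subjective ratings. Paths, column names and package choices are illustrative.
import pandas as pd
import soundfile as sf
from pesq import pesq            # wide-band PESQ implementation (assumed)
from pystoi import stoi          # STOI / ESTOI implementation (assumed)
from scipy.stats import pearsonr

# Assumed columns: clean_path, degraded_path, quality, intelligibility.
meta = pd.read_csv("tmhint_qi_test.csv")

rows = []
for rec in meta.itertuples():
    ref, fs = sf.read(rec.clean_path)      # assumed 16 kHz recordings
    deg, _ = sf.read(rec.degraded_path)
    L = min(len(ref), len(deg))
    ref, deg = ref[:L], deg[:L]
    rows.append({
        "PESQ": pesq(fs, ref, deg, "wb"),
        "STOI": stoi(ref, deg, fs, extended=False),
        "ESTOI": stoi(ref, deg, fs, extended=True),
        "quality": rec.quality,
        "intelligibility": rec.intelligibility,
    })

scores = pd.DataFrame(rows)
for target in ("quality", "intelligibility"):
    for m in ("PESQ", "STOI", "ESTOI"):
        r, _ = pearsonr(scores[m], scores[target])
        print(f"PCC({m}, subjective {target}) = {r:+.3f}")
```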
We first found that human perceptions of quality and intelligibility are moderately correlated with each other, showing a correlation of about 0.68 between subjective quality and intelligibility ratings. In addition, we observed that all objective measures, with the exception of WER, demonstrated higher correlations with subjective quality compared to subjective intelligibility. For subjective quality, it is interesting to note that a high correlation is expected to PESQ, but objective intelligibility measures (ie, NCM, ESTOI, STOI, and WER) are more highly correlated with subjective quality ratings. For subjective intelligibility, the correlations of objective quality measures (ie, PESQ, ITU-T P.835, DNSMOS P.835, and MOSA-Net) are generally lower (below 0.24), which is reasonably expected for which they were originally designed to predict speech quality. Interestingly, in relation to speech intelligibility, high correlations are expected between objective intelligibility measures (ie, NCM, ESTOI, STOI, and WER), but all except WER have moderate correlations with subjective intelligibility in our dataset. We also exploited the correlations of either STOI or PESQ with WER. Fig. <ref> shows the scatter plots of WER against PESQ and STOI. Our finding is consistent with previous studies <cit.>, which show that the correlation value between STOI and WER is higher than that between PESQ and WER. This supports the results in <cit.> that integrating STOI into SE model optimization can improve WER on the enhanced speech. In summary, the strongest absolute correlation with subjective quality is found in NCM, followed by ESTOI and STOI. For subjective intelligibility, WER shows the highest absolute correlation, followed by subjective quality and NCM. § EXPERIMENTS §.§ Experimental setup The correlation analysis in Section <ref> indicates that none of the objective measures show a strong correlation (above 0.8) with subjective quality and intelligibility ratings, which aligns with the findings of <cit.> that no single objective measure demonstrates a high correlation. Thus, we seek to develop a DL model which takes a combination of objective measures as inputs to predict corresponding subjective quality and intelligibility scores. Fig. <ref> illustrates the details of the proposed DL model. Each of the objective and subjective measures was normalized using min–max to be between 0 and 1 before feeding into the DL model. Twelve objective measures are utilized as input for the DL model. The DL model consists of six dense layers, where each dense layer is followed by GELU activation, except for the last layer, which is followed by a sigmoid activation. This sigmoid activation produces values between 0 and 1, which are then divided into two separate tasks, one for quality estimation and the other for intelligibility prediction. Afterward, the output values are denormalized to obtain the predicted subjective quality and intelligibility scores. To evaluate the performance, three criteria: mean squared error (MSE), PCC and Spearman’s rank correlation coefficient (SRCC) were selected as evaluation criteria. §.§ Experimental results In our comparative analysis, we examined the performance of the proposed DL model in relation to the LR model, which predicts subjective quality or intelligibility scores separately. Additionally, we incorporated two DL-based non-intrusive speech assessment models into our evaluation. The first model, InQSS, combines self-supervised models and scattering transform. 
It has the capability to predict both subjective quality and intelligibility simultaneously, as outlined in <cit.>. The second model, MOS-SSL, utilizes fine-tuned features from wav2vec 2.0 to predict MOS <cit.>. We trained the MOS-SSL model on the TMHINT-QI dataset using a single task criterion to predict quality and intelligibility scores as separate targets. Table <ref> summarizes the results. The superior performance of the DL model over the LR model is evident in both subjective quality and intelligibility prediction. Additionally, the effectiveness of the DL model is confirmed by achieving higher PCC and SRCC scores compared to the InQSS and MOS-SSL methods. These results demonstrate the improved accuracy and reliability of the DL model in predicting subjective quality and intelligibility. We also examine how well the proposed DL model predicts compared to InQSS and MOS-SSL when different quantities of training data are accessible. To avoid the time-consuming process of conducting listening tests, it is preferable to possess a model that demands less training data but still performs comparable results. Table <ref> illustrates the percentage decrease in PCC for various percentages of training data, while Fig. <ref> visually represents the changes in PCC and SRCC as the number of training data varies. Table <ref> clearly shows that when trained with only 25% of the data, all three models were close to reaching saturation. The InQSS and MOS-SSL models had a decrease within 3%, while the DL model for quality prediction had a decrease of 1%. For intelligibility prediction, the InQSS and MOS-SSL models had a decrease within 8%, while the DL model had a decrease of 5%. Furthermore, the DL model demonstrated its superiority in performance with a percentage decrease of 3.4% for quality prediction and 4.6% for intelligibility prediction when trained with only 5% of the training data. Fig. <ref> also clearly demonstrates that DL models consistently outperform InQSS and MOS-SSL models. Moreover, it shows that the increase in PCC and SRCC values gradually slows down when the amount of training data exceeds 1,000. The overall analysis indicates that InQSS and MOS-SSL heavily relies on a large amount of training data and exhibits promising prediction performance only when more training data is available. In contrast, the proposed DL model's capacity to achieve good performance with limited amounts of training data is a significant advantage. This is particularly beneficial because collecting subjective human ratings is a challenging and expensive process, and being less reliant on a large amount of data is highly advantageous. §.§ Interpretation of the DL model Our aim is to investigate how each objective measure impacts the prediction performance of subjective quality and intelligibility. In order to uncover the underlying functional relationship between these measures of the DL model, we generated subjective quality and intelligibility scores by feeding data samples obtained from a multivariate normal distribution into the DL model. The subjective quality and intelligibility scores were then divided into 200 equal parts based on the values of the objective measures being analyzed. The scores for each part were averaged, resulting in 200 scores for each subjective quality and intelligibility. These scores were connected to form a line graph, which illustrates the functional relationship between the quality or intelligibility scores and the objective measures. 
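A compact sketch of the fusion predictor described in the experimental setup and of the probing procedure just outlined is reported below. The text fixes only the overall structure (twelve min-max normalized inputs, six dense layers with GELU activations, a final sigmoid producing the two normalized outputs, and 200 bins); the hidden width, the Gaussian parameters, and the handling of empty bins are illustrative assumptions.

```python
# Sketch of the measure-fusion predictor and of one probing pass: feed
# multivariate-normal samples to the model, split the predictions into 200
# equal-width bins of one objective measure, and average per bin.
import numpy as np
import torch
import torch.nn as nn

def build_fusion_model(n_measures: int = 12, hidden: int = 64) -> nn.Sequential:
    """Six dense layers: five followed by GELU, the last followed by a sigmoid
    producing two normalized outputs (quality, intelligibility)."""
    layers, d_in = [], n_measures
    for _ in range(5):
        layers += [nn.Linear(d_in, hidden), nn.GELU()]
        d_in = hidden
    layers += [nn.Linear(hidden, 2), nn.Sigmoid()]
    return nn.Sequential(*layers)

model = build_fusion_model()          # in practice: trained on the TMHINT-QI split

def response_curve(model, mean, cov, measure_idx, n_samples=20_000, n_bins=200):
    """Average predicted (quality, intelligibility) scores in equal-width bins
    of the selected objective measure."""
    rng = np.random.default_rng()
    X = rng.multivariate_normal(mean, cov, size=n_samples).astype(np.float32)
    with torch.no_grad():
        pred = model(torch.from_numpy(X)).numpy()     # shape (n_samples, 2)
    x = X[:, measure_idx]
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    bins = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    curve = np.full((n_bins, 2), np.nan)
    for b in range(n_bins):
        sel = bins == b
        if sel.any():
            curve[b] = pred[sel].mean(axis=0)
    return centers, curve              # columns: quality, intelligibility

# Gaussian parameters would normally be fitted to the normalized training
# measures; the values below are placeholders for the example.
mean, cov = np.full(12, 0.5), 0.05 * np.eye(12)
centers, curve = response_curve(model, mean, cov, measure_idx=0)
```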
We repeated this process 1,000 times and the functional relationship between the objective and subjective measures of the DL model is depicted in Figure <ref>, where the solid line represents the mean and the light-colored areas represent the standard deviation of the 1,000 lines. We limited our focus to several objective measures due to space limitations. From Fig. <ref>, it is evident that the relationships between objective measures and subjective quality is relatively linear compared to subjective intelligibility. As for subjective intelligibility, it can be noted that the slope gradually becomes less steep as objective measures increase. This implied that higher values of objective measures can accurately demonstrate the expected improvement in subjective quality, but not necessarily in subjective intelligibility. In addition, we found that subjective measures decline when DNSMOS-BAK reaches approximately 2.0. More specifically, there is a significant reduction in subjective intelligibility, decreasing from 9.2 to 8.8, while subjective quality experiences a less drastic drop, going from 3.4 to 3.2. Our findings suggest that this phenomenon occurs because attempts to suppress background noise inevitably result in speech distortion, which has a negative impact on speech quality and intelligibility. We can observe that individual objective measures cannot fully capture subjective quality and intelligibility with perfection. This observation supports our rationale for integrating all objective measures in order to establish a strong correlation with subjective listening tests. §.§ Enhancing intelligibility prediction through the incorporation of subjective quality While our DL model predicts both subjective quality and intelligibility simultaneously, we are interested in exploring whether including subjective quality can enhance the prediction of intelligibility. The moderate correlation of 0.68 between subjective quality and subjective intelligibility indicates a potential association between the two factors. Consequently, we propose that integrating subjective quality ratings has the potential to enhance the prediction of subjective intelligibility, to some extent at least. Meanwhile, opting for quality tests instead of intelligibility tests provides significant advantages in terms of saving effort. Quality tests require less time compared to the time-consuming process of listening intelligibility tests, which involve word identification for calculating intelligibility scores. Therefore, choosing quality tests offers a more time-efficient approach. We introduced modifications to the proposed DL model by including subjective quality scores as additional inputs. As a result, the model's primary objective shifted to predicting subjective intelligibility scores while taking into account these subjective quality scores. Table <ref> demonstrates a significant improvement in subjective intelligibility prediction when incorporating subjective quality ratings. The PCC value increased from 0.792 to 0.870, validating the effectiveness of using subjective quality to predict intelligibility. The inclusion of subjective quality ratings represents a valuable contribution to improving the accuracy of intelligibility predictions. By integrating subjective quality, we can harness the extensive research conducted in the field of speech quality assessment to enhance the assessment of speech intelligibility. § CONCLUSION The contributions of this study are four fold. 
First, the study proposes the use of DL models that take a combination of off-the-shelf objective measures as inputs to predict subjective quality and intelligibility ratings. Second, we evaluate the proposed DL model against different speech assessment methods and analyze the percentage decrease in PCC as the amount of training data varies. The experimental results highlight the significant advantage of our DL model, which exhibits strong performance even with a small amount of training data. This is particularly beneficial in situations where gathering subjective human ratings is arduous and expensive. Third, we provide insights into how objective measures reflect subjective quality and intelligibility. This analysis can help researchers better understand the relationship between objective and subjective measures. Fourth, we demonstrate that incorporating subjective quality ratings can improve the prediction of subjective intelligibility. This integration allows us to leverage the extensive research conducted in the field of speech quality assessment to enhance speech intelligibility evaluation. Additionally, quality tests offer a time-saving advantage over the more laborious listening tests required to measure intelligibility.
http://arxiv.org/abs/2307.04580v1
20230710141505
An implicit DG solver for incompressible two-phase flows with an artificial compressibility formulation
[ "Giuseppe Orlando" ]
math.NA
[ "math.NA", "cs.NA", "physics.flu-dyn" ]
An implicit DG solver for incompressible two-phase flows with an artificial compressibility formulation Giuseppe Orlando^(1) ======================================================================================================= ^(1) MOX - Dipartimento di Matematica, Politecnico di Milano Piazza Leonardo da Vinci 32, 20133 Milano, Italy [email protected] Keywords: Navier-Stokes equations, incompressible flows, two-phase flows, artificial compressibility, Discontinuous Galerkin methods. We propose an implicit Discontinuous Galerkin (DG) discretization for incompressible two-phase flows using an artificial compressibility formulation. Conservative level set (CLS) method is employed in combination with a reinitialization procedure to capture the moving interface. A projection method based on the L-stable TR-BDF2 method is adopted for the time discretization of the Navier-Stokes equations and of the level set method. Adaptive Mesh Refinement (AMR) is employed to enhance the resolution in correspondence of the interface between the two fluids. The effectiveness of the proposed approach is shown in a number of classical benchmarks, such as the Rayleigh-Taylor instability and the rising bubble test case, for which a specific analysis on the influence of different choices of the mixture viscosity is carried out. § INTRODUCTION Two-phase flows are common in many engineering and industrial applications. An evolving interface delimits the bulk regions of the single phases. Many techniques have been developed over the years to capture the motion of the interface. Two classes of methods are commonly used to locate the interface: interface-tracking and interface-capturing. Interface-tracking schemes employ either Arbitrary Lagrangian–Eulerian (ALE) methods on a mesh that deforms with the interface <cit.> or marker and cell methods <cit.>. Interface-capturing techniques are instead based on fixed spatial grids with an interface function which captures the interface. A full survey on interface-capturing methods goes beyond the scope of this work and we refer e.g. to <cit.> for a review of these techniques. Interface capturing methods include the level set (LS) method <cit.>, which represents the interface as an iso-surface of the so-called level set function. Classically, the level set function is defined as the signed distance function. However, this choice leads to non conservative methods. A number of approaches have been developed to overcome this issue; in this work, we employ the conservative level set (CLS) method, originally proposed in <cit.>, and briefly summarized in Section <ref>. CLS includes a reinitialization equation to maintain the shape of the level set, which will be also discussed in Section <ref>. Changing fluid properties, such as density and viscosity, and surface tension at the interface lead to discontinuities that make the discretization of the Navier-Stokes equations particularly challenging. The Discontinuous Galerkin (DG) method has been widely employed in the field of Computational Fluid Dynamics, see e.g. <cit.>, and is a natural candidate for the discretization of the governing equations of two-phase flows. Several approaches have been proposed in the literature combining the DG method and the level set method, see among many others <cit.>. In this paper, we propose an extension of the solver for single-phase incompressible Navier-Stokes equations with an artificial compressibility formulation presented in <cit.>, so as to overcome well know issues of projection methods. 
The time discretization is therefore based on the TR-BDF2 scheme <cit.>, which is a second order two-stage method. A brief review of the TR-BDF2 method will be given in Section <ref>, whereas we refer to <cit.> for a detailed analysis of the scheme. The solver is implemented in the framework of the open source numerical library deal.II <cit.>, which supports native non-conforming h-adaptation. We will exploit these capabilities to enhance the resolution in the regions close to the interface between the two fluids. The paper is structured as follows: the model equations and their non-dimensional formulation are reviewed in Section <ref>. The time discretization approach is outlined and discussed in Section <ref>. The spatial discretization is presented in Section <ref>. The application of the proposed method to a number of significant benchmarks is reported in Section <ref>. Here, we also analyze the impact of different possible choices for the mixture viscosity when the interface undergoes large deformations. Finally, some conclusions and perspectives for future work are presented in Section <ref>. § THE MODEL EQUATIONS Let Ω⊂ℝ^d, 2 ≤ d ≤ 3 be a connected open bounded set with a sufficiently smooth boundary ∂Ω and denote by 𝐱 the spatial coordinates and by t the temporal coordinate. The two fluids in Ω are considered immiscible and they are contained in the subdomains Ω_1(t) and Ω_2(t), respectively, so that Ω_1(t)∪Ω_2(t) = Ω. The moving interface between the two fluids is denoted by Γ(t), defined as Γ(t) = ∂Ω_1(t) ∩∂Ω_2(t). We consider the classical unsteady, isothermal, incompressible Navier-Stokes equations with gravity, which read as follows <cit.>: ρ(𝐱)[∂𝐮/∂ t + (𝐮·∇)𝐮] = -∇ p + [2μ(𝐱) 𝐃(𝐮)] + ρ(𝐱)𝐠 𝐮 = 0, for 𝐱∈Ω, t ∈ (0, T_f], supplied with suitable initial and boundary conditions. Here T_f is the final time, 𝐮 is the fluid velocity, p is the pressure, ρ is the fluid density and μ is the dynamic viscosity. We assume that both the density and the viscosity are discontinuous functions ρ(𝐱) = ρ_1 in Ω_1(t) ρ_2 in Ω_2(t) and μ(𝐱) = μ_1 in Ω_1(t) μ_2 in Ω_2(t) with ρ_1, ρ_2, μ_1, and μ_2 constant values. Moreover, 𝐠 is the gravitational acceleration and 𝐃(𝐮) denotes the symmetric part of the gradient of the velocity, defined as 𝐃(𝐮) = 1/2[∇𝐮 + (∇𝐮)^T]. In the following, for the sake of simplicity in the notation, we omit the explicit dependence on space and time for the different quantities. Surface tension effects are taken into accounts through the following balance of forces at the interface Γ: [𝐮]_Γ = 0 [-p𝐈 + 2μ𝐃(𝐮)]_Γ𝐧_Γ = σκ𝐧_Γ, where 𝐧_Γ is the outward unit normal to Γ, [Ψ]_Γ = Ψ|_Γ∩Ω_1 - Ψ|_Γ∩Ω_2 denotes the jump of Ψ across the interface Γ, σ is the constant surface tension coefficient, and κ = -𝐧_Γ is the curvature. The first condition implies the continuity of the velocity along Γ, whereas the second condition describes the balance of forces at the interface. A common way to handle the term with surface tension is to introduce the following volumetric force <cit.>: 𝐟_σ = σκ𝐧_Γδ(Γ), where δ(Γ) is the Dirac delta distribution supported on the interface. Hence, system (<ref>) can be rewritten as follows: ρ[∂𝐮/∂ t + (𝐮·∇)𝐮] = -∇ p + [2μ𝐃(𝐮)] + ρ𝐠 + 𝐟_σ 𝐮 = 0. A level set approach <cit.> is employed to capture the interface Γ. The interface between the two fluids is considered sharp and is described as the zero level set of a smooth function. Hence, the following relation holds: ∂φ/∂ t + 𝐮·∇φ = 0, where φ is the level set function. 
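As a concrete illustration of the geometric quantities entering the surface tension force, the following sketch evaluates the unit normal and the curvature by finite differences from a level set sampled on a uniform grid; the circular test interface (a signed distance function), the resolution, and the use of simple centered differences are assumptions made only for this example.

```python
# Illustrative finite-difference evaluation of the interface normal and
# curvature, n = grad(phi)/|grad(phi)| and kappa = -div(n), for a level set
# given by the signed distance to a circle of radius 0.3.
import numpy as np

n = 200
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
h = x[1] - x[0]

phi = np.sqrt(X**2 + Y**2) - 0.3             # signed distance to the circle

gx, gy = np.gradient(phi, h, h)              # grad(phi): d/dx along axis 0, d/dy along axis 1
norm = np.sqrt(gx**2 + gy**2) + 1.0e-12
nx, ny = gx / norm, gy / norm                # unit normal

dnx_dx, _ = np.gradient(nx, h, h)
_, dny_dy = np.gradient(ny, h, h)
kappa = -(dnx_dx + dny_dy)                   # curvature with the sign convention kappa = -div(n)

# Sample the curvature in a band around the zero level set; with this sign
# convention the value should be close to -1/0.3 for the circular interface.
band = np.abs(phi) < h
print(kappa[band].mean())
```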
A common choice <cit.> is to consider as level set the signed distance function to Γ. In order to fix the notation, we consider φ < 0 in Ω_2 and φ > 0 in Ω_1. Therefore, we define φ = -dist(𝐱,Γ) if 𝐱∈Ω_2 0 if 𝐱∈Γ dist(𝐱,Γ) if 𝐱∈Ω_1 The unit normal vector can be evaluated at each point as follows <cit.>: 𝐧_Γ = ∇φ/|∇φ|, 𝐱∈Γ, so that (<ref>) is equivalent to ∂φ/∂ t + (𝐮·𝐧_Γ)|∇φ| = 0. Relation (<ref>) shows that the deformation of the level set function is due only to the normal component of the velocity. Moreover, we can express the density and the dynamic viscosity through the Heaviside function H ρ = ρ_2 + (ρ_1 - ρ_2)H(φ) μ = μ_2 + (μ_1 - μ_2)H(φ) The whole system of equations reads therefore as follows: ρ[∂𝐮/∂ t + (𝐮·∇)𝐮] = -∇ p + [2μ𝐃(𝐮)] + ρ𝐠 + 𝐟_σ 𝐮 = 0 ∂φ/∂ t + 𝐮·∇φ = 0. System (<ref>) can be rewritten in conservative form. First of all, thanks to the incompressibility constraint 𝐮 = 0, we can rewrite (<ref>) as ∂φ/∂ t + (φ𝐮) = 0. Moreover, one can verify that (<ref>), in combination with the incompressibility constraint, implies mass conservation. Indeed, we get ∂ρ/∂ t + (ρ𝐮) = ∂ρ/∂ t + 𝐮·∇ρ = (ρ_1 - ρ_2)(∂ H(φ)/∂ t + 𝐮·∇ H(φ)) = (ρ_1 - ρ_2)δ(φ)(∂φ/∂ t + 𝐮·∇φ) = 0, where we exploited the relation dH(φ)/dφ = δ(φ) <cit.>, with δ(φ) denoting the Dirac delta distribution with support equal to the function φ which implicitly describes the surface. It is appropriate to stress the fact that the differential operators involving the Heaviside function H(φ) have to be intended in a proper distributional sense. Finally, as discussed in <cit.>, we can rewrite 𝐟_σ = [σ(𝐈 - 𝐧_Γ⊗𝐧_Γ)δ(Γ)], where, once more, the divergence operator should be intended in a distributional sense. Hence, the conservative form of (<ref>) is ∂(ρ𝐮)/∂ t + (ρ𝐮⊗𝐮) = -∇ p + [2μ𝐃(𝐮)] + ρ𝐠 + 𝐟_σ 𝐮 = 0 ∂φ/∂ t + (φ𝐮) = 0. The Continuum Surface Force (CSF) approach, introduced in <cit.>, is employed to treat density, viscosity, and surface tension term. A regularized Heaviside H_ε(φ) is introduced, so as to obtain ρ≈ρ_2 + (ρ_1 - ρ_2)H_ε(φ) μ≈μ_2 + (μ_1 - μ_2)H_ε(φ). It is important at this stage to point out the relation between δ(Γ) and δ(φ). As discussed in <cit.>, the following relation holds: δ(Γ) = δ(φ)|∇φ|, so that we can rewrite 𝐟_σ = σκ𝐧_Γδ(φ)|∇φ| = [σ(𝐈 - 𝐧_Γ⊗𝐧_Γ)δ(φ)|∇φ|]. Hence, the CSF approximation of the surface tension term reads as follows: 𝐟_σ≈[σ(𝐈 - 𝐧_Γ⊗𝐧_Γ)δ_ε(φ)|∇φ|] = [σ(𝐈 - 𝐧_Γ⊗𝐧_Γ)dH_ε/dφ(φ)|∇φ|]. Since the seminal proposals in <cit.> (see also the review in <cit.>), projection methods have become very popular for the discretization of incompressible Navier-Stokes equations. However, difficulties arise in choosing boundary conditions for the Poisson equation which is to be solved at each time step to compute the pressure. An alternative that allows to avoid or reduce some of these problems is the so-called artificial compressibility formulation, originally introduced in <cit.> and employed in <cit.> among many others. In this formulation, the incompressibility constraint is relaxed and a time evolution equation for the pressure is introduced. This kind of approach has been adopted for incompressible flows with variable density, see e.g. <cit.>, and we aim here to consider an artificial compressibility formulation for immiscible, isothermal two-phase flows with gravity. The model equations can be therefore rewritten as follows: ∂(ρ𝐮)/∂ t + (ρ𝐮⊗𝐮) = -∇ p + [2μ𝐃(𝐮)] + ρ𝐠 + 𝐟_σ 1/ρ_0 c^2∂ p/∂ t + 𝐮 = 0 ∂φ/∂ t + (φ𝐮) = 0, where c is the artificial speed of sound and ρ_0 is a reference density. 
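A minimal sketch of the CSF-style regularization of the material properties is given below, assuming a one-dimensional signed-distance level set and example values for the densities, the viscosities, and the interface thickness; the logistic profile used for H_ε anticipates the conservative level set function introduced in the next section.

```python
# Minimal illustration of the regularized Heaviside replacing the sharp jump
# of density and viscosity across the interface, together with the smoothed
# delta that localizes the surface tension forcing. All values are examples.
import numpy as np

rho1, rho2 = 1000.0, 1.0       # example densities of the two fluids
mu1, mu2 = 1.0e-3, 1.8e-5      # example dynamic viscosities
eps = 0.02                     # interface thickness parameter

x = np.linspace(-0.5, 0.5, 201)
phi_signed = x                 # 1D signed distance to an interface at x = 0

H_eps = 1.0 / (1.0 + np.exp(-phi_signed / eps))   # regularized Heaviside
rho = rho2 + (rho1 - rho2) * H_eps
mu = mu2 + (mu1 - mu2) * H_eps

# Smoothed delta, delta_eps = dH_eps/dphi, concentrated in a band of width
# O(eps) around the zero level set.
delta_eps = H_eps * (1.0 - H_eps) / eps
```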
Finally, since we are relaxing the incompressibility constraint, we consider (<ref>) for the level set motion, which is valid for the transport of φ independently of the constraints on the velocity 𝐮. Moreover, this choice is justified by the results reported in <cit.> for a rising bubble test case, for which a non-conservative formulation leads to less diffusion in the treatment of the interface. Hence, the final form of the system under consideration reads as follows: ∂(ρ𝐮)/∂ t + (ρ𝐮⊗𝐮) = -∇ p + [2μ𝐃(𝐮)] + ρ𝐠 + 𝐟_σ 1/ρ_0 c^2∂ p/∂ t + 𝐮 = 0 ∂φ/∂ t + 𝐮·∇φ = 0, Before proceeding to describe the time and space discretization schemes, we perform a dimensional analysis to derive a non-dimensional version of system (<ref>). §.§ Dimensional analysis In this Section, we derive a non-dimensional formulation for system (<ref>). We denote with the symbol * non-dimensional quantities. We introduce a reference length and velocity, denoted by L_ref and U_ref, respectively, so as to obtain 𝐱 = L_ref𝐱^* 𝐮 = U_ref𝐮^* t = L_ref/U_reft^*. Moreover, we choose as reference density and viscosity those associated to the heavier fluid, which is conventionally considered in Ω_1. For the sake of simplicity, we also assume ρ_0 = ρ_1. The reference pressure p_ref is taken equal to p_ref = ρ_1U_ref^2. Hence, we get ρ = ρ_1ρ^* μ = μ_1μ^* p = ρ_1U_ref^2p^* κ = 1/L_refκ^* φ = L_refφ^*. Introducing the appropriate non-dimensional quantities, we obtain ρ_1U_ref^2/L_ref∂^*(ρ^*𝐮^*)/∂^*t^* + ρ_1U_ref^2/L_ref∇^*·(ρ^*𝐮^*⊗𝐮^*) = -ρ_1U_ref^2/L_ref∇^*p^* + μ_1U_ref/L_ref∇^*·[2μ^*𝐃(𝐮^*)] - ρ_1ρ^*g𝐤 + 1/L_ref^2∇^*·[σ(𝐈 - 𝐧_Γ⊗𝐧_Γ)δ^*_ε(φ^*)|∇^*φ^*|] ρ_1U_ref^3/ρ_1L_ref1/c^2∂^*p^*/∂^*t^* + U_ref/L_ref∇^*·𝐮^* = 0 U_ref∂^*φ^*/∂^*t^* + U_ref𝐮^*·∇^*φ^* = 0, where 𝐤 is the upward pointing unit vector in the standard Cartesian reference frame. System (<ref>) reduces to ∂^*(ρ^*𝐮^*)/∂^*t^* + ∇^*·(ρ^*𝐮^*⊗𝐮^*) = -∇^*p^* + 1/Re∇^*·[2μ^*𝐃(𝐮^*)] - 1/Fr^2ρ^*𝐤 + 1/We∇^*·[(𝐈 - 𝐧_Γ⊗𝐧_Γ)δ^*_ε(φ^*)|∇^*φ^*|] M^2∂^*p^*/∂^*t^* + ∇^*·𝐮^* = 0 ∂^*φ^*/∂^*t^* + 𝐮^*·∇^*φ^* = 0, where Re = ρ_1U_refL_ref/μ_1 Fr = U_ref/√(g L_ref) We = ρ_1U_ref^2L_ref/σ M = U_ref/c denote the Reynolds number, the Froude number, the Weber number, and the Mach number, respectively. In the following, with a slight abuse of notation, we omit the symbol * to mark non-dimensional quantities and we consider therefore the following system of equations: ∂(ρ𝐮)/∂ t + (ρ𝐮⊗𝐮) = -∇ p + 1/Re[2μ𝐃(𝐮)] - 1/Fr^2ρ𝐤 + 1/We[(𝐈 - 𝐧_Γ⊗𝐧_Γ)δ_ε(φ)|∇φ|] M^2∂ p/∂ t + 𝐮 = 0 ∂φ/∂ t + 𝐮·∇φ = 0, where ρ = ρ_2/ρ_1 + (1 - ρ_2/ρ_1)H_ε(φ) μ = μ_2/μ_1 + (1 - μ_2/μ_1)H_ε(φ). §.§ The conservative level set method The traditional level set method lacks of volume conservation properties <cit.>. The conservative level set (CLS) method <cit.> is a popular alternative to add conservation properties to level set schemes. The idea is to replace the signed distance function defined in (<ref>) with a regularized Heaviside function: ϕ(𝐱,t) = 1/1 + e^-φ(𝐱,t)/ε, where ε helps smoothing the transition of the discontinuous physical properties between the two subdomains and it is also known as interface thickness. Since ∇ϕ = 1/εe^-φ/ε/(1 + e^-φ/ε)^2∇φ we can compute the outward unit normal 𝐧_Γ exactly as in (<ref>). From definition (<ref>), it follows that Γ(t) = {𝐱∈Ω : ϕ(𝐱,t) = 1/2}. This new level set function needs to be reinitialized in order to keep the property of being a regularized Heaviside function <cit.>. 
This goal is achieved by solving the following PDE <cit.>: ∂ϕ/∂τ + (u_cϕ(1 - ϕ)𝐧_Γ) = (βε u_c(∇ϕ·𝐧_Γ)𝐧_Γ), where τ is an artificial pseudo-time variable, u_c is an artificial compression velocity, and β is a constant. It is important to notice that 𝐧_Γ does not change during the reinizialization procedure, but is computed using the initial value of the level set function. The relation (<ref>) has been originally introduced as an intermediate step between the level set advection and the Navier-Stokes equations to keep the shape of the profile <cit.> and to stabilize the advection <cit.>. Two fluxes are considered: a compression flux which acts where 0 < ϕ < 1 and in normal direction to the interface, represented by u_cϕ(1 - ϕ)𝐧_Γ, and a diffusion flux, represented by βε u_c(∇ϕ·𝐧_Γ)𝐧_Γ. The reinitialization is crucial for the overall stability of the algorithm, but it also introduces errors in the solution <cit.>. Hence, it is important to avoid unnecessary reinitialization. For this purpose, unlike the formulation proposed e.g. in <cit.> and <cit.>, we introduce the coefficient β to tune the amount of diffusion so as to keep it as small as possible. The choices for the different parameters will be specified in Section <ref>. Finally, we stress the fact that, in this method, we are already using a smooth version of Heaviside function so that H_ε = ϕ δ(Γ) ≈dH_ε/dϕ|∇ϕ| = |∇ϕ| § THE TIME DISCRETIZATION In this Section, we outline the time discretization strategy for system (<ref>). Our goal here is to extend the projection method based on the TR-BDF2 scheme developed in <cit.>. We now briefly recall for the convenience of the reader the formulation of the TR-BDF2. This second order implicit method has been originally introduced in <cit.> as a combination of the Trapezoidal Rule (or Crank-Nicolson) method and of the Backward Differentiation Formula method of order 2 (BDF2). Let Δ t = T_f/N be a discrete time step and t^n = nΔ t, n = 0, …, N, be discrete time levels for a generic time dependent problem u^' = 𝒩(u). Hence, the incremental form of the TR-BDF2 scheme can be described in terms of two stages, the first one from t^n to t^n+γ = t^n + γΔ t, and the second one from t^n+γ to t^n+1, as follows: u^n+γ - u^n/γΔ t = 1/2𝒩(u^n+γ) + 1/2𝒩(u^n) u^n+1 - u^n+γ/(1 - γ)Δ t = 1/2 - γ𝒩(u^n+1) + 1 - γ/2(2 - γ)𝒩(u^n+γ) + 1 - γ/2(2 - γ)𝒩(u^n). Here, u^n denotes the approximation at time n = 0, …, N. Notice that, in order to guarantee L-stability, one has to choose γ = 2 - √(2) <cit.>. We refer to <cit.> for a more exhaustive discussion on the TR-BDF2 method. We start by considering the equation in system (<ref>) associated to the level set. In order to avoid a full coupling with the Navier-Stokes equations, we perform a linearization in velocity, so that the first stage for the level set update reads as follows: ϕ^n+γ - ϕ^n/γΔ t + 1/2𝐮^n + γ/2·∇ϕ^n+γ = -1/2𝐮^n + γ/2·∇ϕ^n, where the approximation 𝐮^n + γ/2 is defined by extrapolation as 𝐮^n + γ/2 = (1 + γ/2(1-γ))𝐮^n - γ/2(1-γ)𝐮^n-1. Following then the projection approach described in <cit.> and applying (<ref>), the momentum predictor equation for the first stage reads as follows: ρ^n+γ𝐮^n+γ,* - ρ^n𝐮^n/γΔ t + 1/2(ρ^n+γ𝐮^n+γ,*⊗𝐮^n+γ/2) - 1/21/Re[2μ^n+γ𝐃(𝐮^n+γ,*)] = - 1/2(ρ^n𝐮^n⊗𝐮^n+γ/2) + 1/21/Re[2μ^n𝐃(𝐮^n)] - ∇ p^n + 1/21/We[(𝐈 - 𝐧_Γ^n+γ⊗𝐧_Γ^n+γ)δ_ε(ϕ^n+γ)|∇ϕ^n+γ|] + 1/21/We[(𝐈 - 𝐧_Γ^n⊗𝐧_Γ^n)δ_ε(ϕ^n)|∇ϕ^n|] - 1/21/Fr^2ρ^n+γ𝐤 -1/21/Fr^2ρ^n𝐤. 
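A minimal sketch of the two-stage TR-BDF2 update recalled above, applied to a generic ODE u' = 𝒩(u), is reported below; the black-box nonlinear solver and the stiff scalar test problem are illustrative assumptions and do not reflect the linearized, projection-based solves actually performed in the scheme.

```python
# Two-stage TR-BDF2 step for u' = N(u) with gamma = 2 - sqrt(2); the implicit
# stage equations are solved here with a generic nonlinear solver.
import numpy as np
from scipy.optimize import fsolve

gamma = 2.0 - np.sqrt(2.0)                  # choice ensuring L-stability

def tr_bdf2_step(N, u_n, dt):
    # Stage 1: trapezoidal rule on [t_n, t_n + gamma*dt].
    g1 = lambda u: u - u_n - 0.5 * gamma * dt * (N(u) + N(u_n))
    u_ng = fsolve(g1, u_n)
    # Stage 2: BDF2-like step on [t_n + gamma*dt, t_{n+1}].
    a31 = a32 = (1.0 - gamma) / (2.0 * (2.0 - gamma))
    a33 = 1.0 / (2.0 - gamma)
    g2 = lambda u: u - u_ng - (1.0 - gamma) * dt * (
        a33 * N(u) + a32 * N(u_ng) + a31 * N(u_n))
    return fsolve(g2, u_ng)

# Stiff scalar test: u' = -50 u, exact solution exp(-50 t).
N = lambda u: -50.0 * u
u, dt = np.array([1.0]), 0.05
for _ in range(20):
    u = tr_bdf2_step(N, u, dt)
print(u, np.exp(-50.0))
```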
Notice once more that, in order to avoid solving a non-linear system at each time step, 𝐮^n + γ/2 is employed in the momentum advection terms. We set then δ p^n+γ = p^n+γ - p^n and impose ρ^n+γ𝐮^n+γ - 𝐮^n+γ,*/γΔ t =-∇δ p^n+γ M^2δ p^n+γ/γΔ t + 𝐮^n+γ = 0. Substituting the first equation into the second in (<ref>), one obtains the Helmholtz equation M^2δ p^n+γ/γ^2Δ t^2 - (∇δ p^n+γ/ρ^n+γ) = -1/γΔ t𝐮^n+γ,*. Once this equation is solved, the final velocity update for the first stage is given by 𝐮^n+γ = 𝐮^n+γ,* - γΔ t∇δ p^n+γ/ρ^n+γ. The second TR-BDF2 stage is performed in a similar manner applying (<ref>). We first focus on the level set update: ϕ^n+1 - ϕ^n+γ/(1-γ)Δ t + a_33𝐮^n + 3/2γ·∇ϕ^n+1 = - a_32𝐮^n+γ·∇ϕ^n+γ - a_31𝐮^n·∇ϕ^n, where a_31 = 1-γ/2(2-γ) a_32 = 1-γ/2(2-γ) a_33 = 1/2-γ. Again, in order to avoid a full coupling with the Navier-Stokes equations, an approximation is introduced in the advection term, so that 𝐮^n + 3/2γ is defined by extrapolation as 𝐮^n + 3/2γ = (1 + 1 + γ/γ)𝐮^n+γ - 1-γ/γ𝐮^n. Then, we define the second momentum predictor: ρ^n+1𝐮^n+1,* - ρ^n+γ𝐮^n+γ/(1-γ)Δ t + a_33(ρ^n+1𝐮^n+1,*⊗𝐮^n + 3/2γ) - a_331/Re[2μ^n+1𝐃(𝐮^n+1,*)] = - a_32(ρ^n+γ𝐮^n+γ⊗𝐮^n+γ) + a_321/Re[2μ^n+γ𝐃(𝐮^n+γ)] - a_31(ρ^n𝐮^n⊗𝐮^n) + a_311/Re[2μ^n𝐃(𝐮^n)] - ∇ p^n+γ + a_331/We[(𝐈 - 𝐧_Γ^n+1⊗𝐧_Γ^n+1)δ_ε(ϕ^n+1)|∇ϕ^n+1|] + a_321/We[(𝐈 - 𝐧_Γ^n+γ⊗𝐧_Γ^n+γ)δ_ε(ϕ^n+γ)|∇ϕ^n+γ|] + a_311/We[(𝐈 - 𝐧_Γ^n⊗𝐧_Γ^n)δ_ε(ϕ^n)|∇ϕ^n|] - a_331/Fr^2ρ^n+1𝐤 - a_321/Fr^2ρ^n+γ𝐤 - a_311/Fr^2ρ^n𝐤. Notice that 𝐮^n + 3/2γ is employed in the non-linear momentum advection term. We set then δ p^n+1 =p^n+1 - p^n+γ and impose ρ^n+1𝐮^n+1 - 𝐮^n+1,*/(1-γ)Δ t =-∇δ p^n+1 M^2δ p^n+1/(1-γ)Δ t + 𝐮^n+1 = 0. Substituting the first equation into the second in (<ref>), one obtains the Helmholtz equation M^2δ p^n+1/(1-γ)^2Δ t^2 -(∇δ p^n+1/ρ^n+1) = -1/(1-γ)Δ t𝐮^n+1,*. The final velocity update then reads as follows: 𝐮^n+1 = 𝐮^n+1,* - (1 - γ)Δ t∇δ p^n+1/ρ^n+1. Finally, we focus on the reinitialization procedure described in Equation <ref>, which is performed after each stage of the level set update and before computing the momentum predictor. We consider an implicit treatment of the diffusion term (βε u_c(∇ϕ·𝐧_Γ)𝐧_Γ) and a semi-implicit treatment of the compression term u_c(ϕ(1 - ϕ)𝐧_Γ). Hence, the semi-discrete formulation reads as follows: ϕ^k+1,* - ϕ^k,*/Δτ + (u_cϕ^k+1,*(1 - ϕ^k,*)𝐧_Γ) = (βε u_c(∇ϕ^k+1,*·𝐧_Γ)𝐧_Γ), where Δτ is the pseudo time step. Moreover, ϕ^0,* = ϕ^n+γ after the first TR-BDF2 stage and ϕ^0,* = ϕ^n+1 after the second TR-BDF2 stage. We recall once more that 𝐧_Γ = ∇ϕ^0,*/|∇ϕ^0,*| and it does not change during the reinitialization. Following <cit.>, we define the total reinitialization time τ_fin as a fraction of the time step Δ t, namely τ_fin = ηΔ t. η = 0 corresponds to no reinitialization, whereas η = 1 yields an amount of reinitialization which can modify the values of level set function of the same order of magnitude of which they have been modified during the previous advection step. For most applications, η≈ 0.5 seems to provide an appropriate amount of reinitialization <cit.>. A pseudo time step such that two to five reinitialization steps are performed typically ensures stable solutions and leads to the updated level set function <cit.>. § THE SPATIAL DISCRETIZATION For the spatial discretization, we consider discontinuous finite element approximations. We consider a decomposition of the domain Ω into a family of hexahedra 𝒯_h (quadrilaterals in the two-dimensional case) and denote each element by K. 
The skeleton ℰ denotes the set of all element faces and ℰ = ℰ^I∪ℰ^B, where ℰ^I is the subset of interior faces and ℰ^B is the subset of boundary faces. Suitable jump and average operators can then be defined as customary for finite element discretizations. A face e ∈ℰ^I shares two elements that we denote by K^+ with outward unit normal 𝐧^+ and K^- with outward unit normal 𝐧^-, whereas for a face e ∈ℰ^B we denote by 𝐧 the outward unit normal. For a scalar function Ψ the jump is defined as [[Ψ]] = Ψ^+𝐧^+ + Ψ^-𝐧^- if e ∈ℰ^I [[Ψ]] = Ψ𝐧 if e ∈ℰ^B. The average is defined as {{Ψ}} = 1/2(Ψ^+ + Ψ^-) if e ∈ℰ^I {{Ψ}} = Ψ if e ∈ℰ^B. Similar definitions apply for a vector function Ψ: [[Ψ]] = Ψ^+·𝐧^+ + Ψ^-·𝐧^- if e ∈ℰ^I [[Ψ]] = Ψ·𝐧 if e ∈ℰ^B {{Ψ}} = 1/2(Ψ^+ + Ψ^-) if e ∈ℰ^I {{Ψ}} = Ψ if e ∈ℰ^B. For vector functions, it is also useful to define a tensor jump as: <<Ψ>> = Ψ^+⊗𝐧^+ + Ψ^-⊗𝐧^- if Γ∈ℰ^I <<Ψ>> = Ψ⊗𝐧 if Γ∈ℰ^B. We now introduce the following finite element spaces: Q_k = {v ∈ L^2(Ω) : v|_K∈ℚ_k ∀ K ∈𝒯_h} and 𝐐_k = [Q_k]^d, where ℚ_k is the space of polynomials of degree k in each coordinate direction. Considering the well-posedness analyses in <cit.>, the finite element spaces that will be used for the discretization of velocity and pressure are 𝐕_h = 𝐐_k and W_h = Q_k-1∩ L^2_0(Ω), respectively, where k ≥ 2. For what concerns the level set function, we consider instead X_h = Q_r with r ≥ 2, so that its gradient is at least a piecewise linear polynomial. We then denote by ψ_i(𝐱) the basis functions for the finite element spaces associated to the scalar variable, i.e. W_h and X_h, and by ψ_i(𝐱) the basis functions for the space V_h, the finite element space chosen for the discretization of the velocity. Hence, we get 𝐮≈∑_j=1^dim(𝐕_h) u_j(t)ψ_j(𝐱) p ≈∑_j=1^dim(W_h) p_j(t)ψ_j(𝐱) ϕ≈∑_j=1^dim(X_h)ϕ_j(t)ψ_j(𝐱) The shape functions correspond to the products of Lagrange interpolation polynomials for the support points of (k+1)-order Gauss-Lobatto quadrature rule in each coordinate direction. Given these definitions, the weak formulation of the level set update for the first stage is obtained multiplying equation (<ref>) by a test function w ∈ X_h: ∑_K ∈𝒯_h∫_Kϕ^n+γ/γΔ tw dΩ + 1/2∑_K ∈𝒯_h∫_K𝐮^n+γ/2·∇ϕ^n+γ w dΩ + 1/2∑_e ∈ℰ∫_e{{ϕ^n+γ𝐮^n + γ/2}}·[[w]]dΣ - 1/2∑_e ∈ℰ∫_e{{𝐮^n + γ/2}}·[[ϕ^n + γw]]dΣ + 1/2∑_e ∈ℰ∫_eλ^n + γ/2/2[[ϕ^n + γ]] ·[[w]]dΣ = ∑_K ∈𝒯_h∫_Kϕ^n/γΔ tw dΩ - 1/2∑_K ∈𝒯_h∫_K𝐮^n + γ/2·∇ϕ^n w dΩ - 1/2∑_e ∈ℰ∫_e{{ϕ^n𝐮^n + γ/2}}·[[w]]dΣ - 1/2∑_e ∈ℰ∫_e{{𝐮^n + γ/2}}·[[ϕ^nw]]dΣ - 1/2∑_e ∈ℰ∫_eλ^n + γ/2/2[[ϕ^n]] ·[[w]]dΣ, where λ^n + γ/2 = max(|(𝐮^n + γ/2)^+·𝐧^+|, |(𝐮^n + γ/2)^-·𝐧^-|). Following <cit.>, the numerical approximation of the non-conservative term is based on a double integration by parts. The algebraic form can be obtained taking w = ψ_i, i = 1,…,dim(X_h) and exploiting the representation in (<ref>), so as to obtain in compact form (1/γΔ t𝐌_ϕ + 1/2𝐀_ϕ^n+γ)^n+γ = 𝐅_ϕ^n, where ^n+γ denotes the vector of the degrees of freedom associated to the level set. Moreover, we have set 𝐌_ϕ_ij = ∑_K ∈𝒯_h∫_Kψ_jψ_i dΩ 𝐀_ϕ_ij^n+γ = ∑_K ∈𝒯_h∫_K𝐮^n+γ/2·∇ψ_jψ_i dΩ + ∑_e ∈ℰ∫_e{{𝐮^n + γ/2ψ_j}}·[[ψ_i]]dΣ - ∑_e ∈ℰ∫_e{{𝐮^n + γ/2}}·[[ψ_jψ_i]]dΣ + ∑_e ∈ℰ∫_eλ^n + γ/2/2[[ψ_j]] ·[[ψ_i]]dΣ and 𝐅_ϕ^n = ∑_K ∈𝒯_h∫_Kϕ^n/γΔ tψ_i dΩ + 1/2∑_K ∈𝒯_h∫_K𝐮^n + γ/2·∇ϕ^nψ_i dΩ - 1/2∑_e ∈ℰ∫_e{{ϕ^n𝐮^n + γ/2}}·[[ψ_i]]dΣ + 1/2∑_e ∈ℰ∫_e{{𝐮^n + γ/2}}·[[ϕ^nψ_i]]dΣ - 1/2∑_e ∈ℰ∫_eλ^n + γ/2/2[[ϕ^n]] ·[[ψ_i]]dΣ. Consider now the variational formulation for equation (<ref>). 
Take 𝐯∈𝐕_h so as to obtain after integration by parts ∑_K ∈𝒯_h∫_K1/γΔ tρ^n+γ𝐮^n+γ,*·𝐯 dΩ - 1/2∑_K ∈𝒯_h∫_Kρ^n+γ𝐮^n+γ,*⊗𝐮^n + γ/2 : ∇𝐯 dΩ + 1/2∑_e ∈𝒯_h∫_e{{ρ^n+γ𝐮^n+γ,*⊗𝐮^n + γ/2}} : <<𝐯>> dΣ + 1/2∑_e ∈𝒯_h∫_eλ^n + γ/2/2<<ρ^n+γ𝐮^n + γ/2>> : <<𝐯>> dΣ + 1/2Re∑_K ∈𝒯_h∫_K 2μ^n+γ𝐃(𝐮^n+γ,*) : ∇𝐯 - 1/2Re∑_e ∈ℰ∫_e{{2μ^n+γ𝐃(𝐮^n+γ,*)}} : <<𝐯>> dΣ - 1/2Re∑_e ∈ℰ∫_e<<𝐮^n+γ,*>> : {{2μ^n+γ𝐃(𝐯)}} dΣ + 1/2Re∑_e ∈ℰ∫_e C_u{{μ^n+γ}}_H<<𝐮^n+γ,*>> : <<𝐯>>dΣ = ∑_K ∈𝒯_h∫_K1/γΔ tρ^n𝐮^n·𝐯 dΩ + 1/2∑_K ∈𝒯_h∫_Kρ^n𝐮^n⊗𝐮^n + γ/2 : ∇𝐯 dΩ - 1/2∑_e ∈𝒯_h∫_e{{ρ^n𝐮^n⊗𝐮^n + γ/2}} : <<𝐯>> dΣ - 1/2∫_eλ^n+γ/2/2<<ρ^n𝐮^n>> : <<𝐯>>dΣ - 1/2Re∑_K ∈𝒯_h∫_K 2μ^n𝐃(𝐮^n) : ∇𝐯 + 1/2Re∑_e ∈ℰ∫_e{{2μ^n𝐃(𝐮^n)}} : <<𝐯>> dΣ + ∑_K ∈𝒯_h∫_K p^n𝐯 dΩ - ∑_e ∈ℰ∫_e{{p^n}}[[𝐯]] dΣ - 1/2 Fr^2∑_K ∈𝒯_h∫_Kρ^n+γ𝐤·𝐯dΩ - 1/2 Fr^2∑_K ∈𝒯_h∫_Kρ^n𝐤·𝐯dΩ - 1/2 We∑_K ∈𝒯_h∫_K(𝐈 - 𝐧_Γ^n+γ⊗𝐧_Γ^n+γ)δ_ε(ϕ^n+γ)|∇ϕ^n+γ| : ∇𝐯 dΩ + 1/2 We∑_e ∈ℰ∫_e{{(𝐈 - 𝐧_Γ^n+γ⊗𝐧_Γ^n+γ)δ_ε(ϕ^n+γ)|∇ϕ^n+γ|}} : <<𝐯>> dΣ - 1/2 We∑_K ∈𝒯_h∫_K(𝐈 - 𝐧_Γ^n⊗𝐧_Γ^n)δ_ε(ϕ^n)|∇ϕ^n| : ∇𝐯 dΩ + 1/2 We∑_e ∈ℰ∫_e{{(𝐈 - 𝐧_Γ^n⊗𝐧_Γ^n)δ_ε(ϕ^n)|∇ϕ^n|}} : <<𝐯>> dΣ, where {{μ^n+γ}}_H = 2/1/μ^n+γ,+ + 1/μ^n+γ,-. Here, following e.g. <cit.>, we employ the harmonic average of the viscosity coefficient for the penalization term. Notice that the approximation of the advection term employs an upwind flux, whereas the approximation of the diffusion term is based on the Symmetric Interior Penalty (SIP) <cit.>. Notice also that no penalization terms have been introduced for the variables computed at previous time steps in the diffusion terms. Following <cit.>, we set for each face e of a cell K σ^𝐮_e,K = (k + 1 )^2diam(e)/diam(K) and we define the penalization constant for the SIP method as C_u = 1/2(σ^𝐮_e,K^+ + σ^𝐮_e,K^-) if e ∈ℰ^I, C_u = σ^𝐮_e,K if e ∈ℰ^B. Finally, we stress the fact that a centered flux has been employed for the surface tension terms. The algebraic formulation is then computed considering 𝐯 = ψ_i, i=1, …, dim(𝐕_h) and the representation in (<ref>) for the velocity. Hence, we obtain (1/γΔ t𝐌_𝐮^n+γ + 1/2Re𝐀_𝐮^n+γ + 1/2𝐂_𝐮^n+γ)𝐔^n+γ,* = 𝐅_u^n, where 𝐔^n+γ,* denotes the vector of degrees of freedom for the velocity. Moreover, we have set 𝐌_𝐮_ij^n+γ = ∑_K ∈𝒯_h∫_Kρ^n+γψ_jψ_i dΩ 𝐂_𝐮_ij^n+γ = -∑_K ∈𝒯_h∫_Kρ^n+γψ_j⊗𝐮^n + γ/2 : ∇ψ_i dΩ + ∑_e ∈𝒯_h∫_e{{ρ^n+γψ_j⊗𝐮^n + γ/2}} : <<ψ_j>> dΣ + ∑_e ∈𝒯_h∫_eλ^n + γ/2/2<<ρ^n+γψ_j>> : <<ψ_i>> dΣ 𝐀_𝐮_ij^n+γ = ∑_K ∈𝒯_h∫_K 2μ^n+γ𝐃(ψ_j) : ∇ψ_i dΩ - ∑_e ∈ℰ∫_e{{2μ^n+γ𝐃(ψ_j)}} : <<ψ_i>>dΣ - ∑_e ∈ℰ∫_e<<ψ_j^n+γ,*>> : {{2μ^n+γ𝐃(ψ_i)}}dΣ + ∑_e ∈ℰ∫_e C_u{{μ^n+γ}}_H<<ψ_j>> : <<ψ_i>>dΣ and 𝐅_𝐮^n = ∑_K ∈𝒯_h∫_K1/γΔ tρ^n𝐮^n·ψ_i dΩ + 1/2∑_K ∈𝒯_h∫_Kρ^n𝐮^n⊗𝐮^n + γ/2 : ∇ψ_i dΩ - 1/2∑_e ∈𝒯_h∫_e{{ρ^n𝐮^n⊗𝐮^n + γ/2}} : <<ψ_i>> dΣ - 1/2∑_e ∈𝒯_h∫_eλ^n+γ/2/2<<ρ^n𝐮^n>> : <<ψ_i>> dΣ - 1/2Re∑_K ∈𝒯_h∫_K 2μ^n𝐃(𝐮^n) : ∇ψ_i + 1/2Re∑_e ∈ℰ∫_e{{2μ^n𝐃(𝐮^n)}} : <<ψ_i>> dΣ + ∑_K ∈𝒯_h∫_K p^nψ_i dΩ - ∑_e ∈ℰ∫_e{{p^n}}[[ψ_i]] dΣ - 1/2 Fr^2∑_K ∈𝒯_h∫_Kρ^n+γ𝐤·ψ_idΩ - 1/2 Fr^2∑_K ∈𝒯_h∫_Kρ^n𝐤·ψ_idΩ - 1/2 We∑_K ∈𝒯_h∫_K(𝐈 - 𝐧_Γ^n+γ⊗𝐧_Γ^n+γ)δ_ε(ϕ^n+γ)|∇ϕ^n+γ| : ∇ψ_idΩ + 1/2 We∑_e ∈ℰ∫_e{{(𝐈 - 𝐧_Γ^n+γ⊗𝐧_Γ^n+γ)δ_ε(ϕ^n+γ)|∇ϕ^n+γ|}} : <<ψ_i>> dΣ - 1/2 We∑_K ∈𝒯_h∫_K(𝐈 - 𝐧_Γ^n⊗𝐧_Γ^n)δ_ε(ϕ^n)|∇ϕ^n| : ∇ψ_i dΩ + 1/2 We∑_e ∈ℰ∫_e{{(𝐈 - 𝐧_Γ^n⊗𝐧_Γ^n)δ_ε(ϕ^n)|∇ϕ^n|}} : <<ψ_i>> dΣ. For what concerns the projection step, we apply again the SIP method. 
We multiply (<ref>) by a test function q ∈ Q_h, we apply Green's theorem and we get ∑_K ∈𝒯_h∫_KM^2/γ^2Δ t^2δ p^n+γ q dΩ + ∑_K ∈𝒯_h∫_K∇δ p^n+γ/ρ^n+γ·∇ q dΩ - ∑_e ∈ℰ∫_e{{∇δ p^n+γ/ρ^n+γ}}·[[q]]dΣ - ∑_e ∈ℰ∫_e[[δ p^n+γ]] ·{{∇ q/ρ^n+γ}} dΣ + ∑_e ∈ℰ∫_e C_p[[δ p^n+γ/ρ^n+γ]] ·[[q]]dΣ = ∑_K ∈𝒯_h∫_K1/γΔ t𝐮^n+γ,**·∇ q dΩ - ∑_e ∈ℰ∫_e1/γΔ t{{𝐮^n+γ,*}}·[[q]]dΣ, where we set σ^p_e,K = k^2diam(e)/diam(K), so that C_p = 1/2(σ^p_e,K^+ + σ^p_e,K^-) if e ∈ℰ^I, C_p = σ^p_e,K if e ∈ℰ^B. The algebraic formulation is once more obtained taking q = ψ_i, i = 1,…,dim(W_h) and considering the expansion for p^n+γ reported in (<ref>). Hence, we get (M^2/γ^2Δ t^2𝐌_p^n+γ + 𝐊_p)𝐏^n+γ = 𝐅_p^n. Here, 𝐏^n+γ denotes the vector of the degrees of freedom for the pressure. Moreover, we set 𝐌_p_ij^n+γ = ∑_K ∈𝒯_h∫_Kψ_jψ_i dΩ 𝐊_p_ij = ∑_K ∈𝒯_h∫_K∇ψ_j·∇ψ_i dΩ - ∑_e ∈ℰ∫_e{{∇ψ_j/ρ^n+γ}}·[[ψ_i]]dΣ - ∑_e ∈ℰ∫_e[[ψ_j]] ·{{∇ψ_i/ρ^n+γ}}dΣ + ∑_e ∈ℰ∫_e C_p [[ψ_j/ρ^n+γ]] ·[[ψ_i]]dΣ and 𝐅_p^n = ∑_K ∈𝒯_h∫_K1/γΔ t𝐮^n+γ,*·∇ q dΩ - ∑_e ∈ℰ∫_e1/γΔ t{{𝐮^n+γ,**}}·[[q]]dΣ. The second TR-BDF2 stage can be described in a similar manner according to the formulations reported in (<ref>), (<ref>), and (<ref>).   Finally, we consider the weak formulation for the reinitialization equation for the level set function (<ref>): ∑_K ∈𝒯_h∫_Kϕ^k+1,*/Δτw dΩ - ∑_K ∈𝒯_h∫_K u_cϕ^k+1,*(1 - ϕ^k,*)𝐧_Γ·∇ w dΩ + ∑_e ∈ℰ∫_e u_c{{ϕ^k+1,*(1 - ϕ^k,*)𝐧_Γ}}·[[w]]dΣ + ∑_e ∈ℰ∫_eλ̃^k/2[[ϕ^k+1,*]] ·[[w]]dΣ + ∑_K∫_K u_cβε(∇ϕ^k+1,*·𝐧_Γ)𝐧_Γ·∇ w dΩ - ∑_e ∈ℰ∫_e u_cβε{{(∇ϕ^k+1,*·𝐧_Γ)𝐧_Γ}}·[[w]]dΣ - ∑_e ∈ℰ∫_e u_cβε{{(∇ v ·𝐧_Γ)𝐧_Γ}}·[[ϕ^k+1,*]]dΣ + ∑_e ∈ℰ∫_e C_ϕ[[ϕ^k+1,*]] ·[[w]] dΣ = ∑_K ∈𝒯_h∫_Kϕ^k,*/Δτw dΩ, where λ̃^k = max(|(1 - (ϕ^k,*)^+)𝐧_Γ^+·𝐧^+|, |(1 - (ϕ^k,*)^-)𝐧_Γ^-·𝐧^-|). Moreover, we set σ^ϕ_e,K = (r + 1)^2diam(e)/diam(K), so that C_ϕ = 1/2(σ^ϕ_e,K^+ + σ^ϕ_e,K^-) if e ∈ℰ^I, C_ϕ = σ^ϕ_e,K if e ∈ℰ^B. One can notice that, following <cit.>, an upwind flux has been employed for the compression term and the SIP has been adopted for the diffusive term. Finally, the algebraic form is obtained considering w = Ψ_i, i = 1,…,dim(X_h) and the representation in (<ref>) so as to obtain (1/Δτ𝐌_ϕ + u_c𝐂_ϕ + 𝐀_ϕ) = 𝐅_ϕ. Here 𝐂_ϕ_ij = -∑_K ∈𝒯_h∫_K(1 - ϕ^k,*)𝐧_Γψ_j·∇ψ_i dΩ + ∑_e ∈𝒯_h∫_e{{(1 - ϕ^k,*)𝐧_Γψ_j}} : [[ψ_i]] dΣ + ∑_e ∈𝒯_h∫_eλ̃^k/2[[ψ_j]] : [[ψ_i]] dΣ 𝐀_ϕ = ∑_K ∈𝒯_h∫_K u_cβε(∇ψ_j·𝐧_Γ)𝐧_Γ·∇ψ_i dΩ - ∑_e ∈ℰ∫_e u_cβε{{(∇ψ_j·𝐧_Γ)𝐧_Γ}}·[[ψ_i]] dΣ - ∑_e ∈ℰ∫_e u_cβε[[ψ_j]] ·{{(∇ψ_i·𝐧_Γ)𝐧_Γ}} dΣ + ∑_e ∈ℰ∫_e C_ϕ[[ψ_j]] ·[[ψ_i]] dΣ. § NUMERICAL EXPERIMENTS The numerical method outlined in the previous Sections has been validated in a number of classical test cases for incompressible two-phase flows using the numerical library deal.II <cit.>, whose adaptive mesh refinement capabilities will be employed to enhance resolution close to the interface. We set h = min{diam(K) | K ∈𝒯_h} and we define two Courant numbers, one based on the flow velocity, denoted by C_u, and one based on the Mach number, denoted by C: C_u = kΔ t U/h C = k1/MΔ t/h, where U is the magnitude of the flow velocity. For the sake of convenience of the reader, we recall here that k and k - 1 are the polynomial degrees of the finite element spaces chosen for the discretization of velocity and pressure, respectively, whereas r is the polynomial degree of the finite element space chosen for the discretization of the level set function. We consider k = r = 2 in all the numerical experiments. 
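For reference, the two Courant numbers defined above can be evaluated as in the following short sketch; the sample mesh size, time step, velocity magnitude, and Mach number are placeholders and do not correspond to a specific test case.

```python
# Helper reproducing the advective and acoustic Courant numbers defined above.
def courant_numbers(k, dt, h, U, M):
    """Advective (C_u) and acoustic (C) Courant numbers for polynomial degree k."""
    C_u = k * dt * U / h
    C = k * (1.0 / M) * dt / h
    return C_u, C

# Example placeholder values: degree k = 2, h = 1/100, dt = 1e-3, |u| = 1, M = 0.01.
C_u, C = courant_numbers(k=2, dt=1.0e-3, h=1.0 / 100, U=1.0, M=0.01)
print(f"advective Courant number C_u = {C_u:.2f}, acoustic Courant number C = {C:.1f}")
```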
§.§ Rayleigh-Taylor instability The Rayleigh-Taylor instability is a well known test case in which an heavier fluid penetrates a lighter fluid under the action of gravity. We consider the configuration presented e.g. in <cit.>, for which ρ_1 = 1.225 and ρ_2 = 0.1694, corresponding to the density of air and helium, respectively, whereas μ_1 = μ_2 = 0.00313. The effect of surface tension is neglected. Moreover, following <cit.>, we consider as reference length the computational width of the box W and as reference time the time scale of wave growth, equal to t_ref = √(W/Ag), where g = 9.81 and A = ρ_1 - ρ_2/ρ_1 + ρ_2 is the Atwood number. Hence, we obtain the following relations: U_ref = √(AgW) Re = ρ_1√(AgW)W/μ_1 Fr = √(A). We consider W = 1 so as to obtain a computational domain Ω = (0,1) ×(0,4). Hence, we get A ≈ 0.757, t_ref≈0.367, U_ref≈2.725, Re ≈ 1066.55, and Fr ≈ 0.87. We take M = 0.008, corresponding to c ≈343, namely the speed of sound in air. The final time is T_f = 2.45. No-slip boundary conditions are prescribed on top and bottom walls, whereas periodic boundary conditions are imposed along the horizontal direction. The pressure is prescribed to be zero on the upper wall. The initial velocity field is zero, whereas the initial level set function is ϕ(0) = 1/1 + exp(2 + 0.05cos(2π x) - y/ε). The computational grid is composed by 160 × 640 elements, whereas the time step is Δ t ≈ 1.63 × 10^-3, yielding a maximum advective Courant number C ≈ 1.36 and an acoustic Courant number C ≈ 32.7. Finally, we set ε = h = 1/160, Δτ = 0.05h, u_c = 0.0125u_max, and β = 1, where u_max is the maximum fluid velocity. The choice to relate u_c with u_max is rather common in the literature, see e.g. <cit.>. Figure <ref> shows the development of the interface at t = T_f, where one can easily notice the expected main behaviour of the Rayleigh-Taylor instability: as the heavier fluid penetrates the lighter one, the interface begins to roll up along the sides of the spike giving the typical “mushroom” shape. Obtained results are similar to those in literature, see e.g. <cit.>. Moreover, for the sake of completeness, we report in Figure <ref> the evolution of the relative variation of the area for the lighter fluid, defined as |Ω_2(t) - Ω_2(0)|/Ω_2(0). The maximum relative variation is 0.034 %, showing that CLS method preserves the area quite well. An interesting analysis regards the influence of the Atwood number. We fix ρ_2 = 0.408, so as to obtain A ≈ 0.5. As a consequence, we obtain t_ref≈0.451, U_ref≈2.215, Re≈ 867.05, Fr ≈ 0.71, and M = 0.006. We set the final time T_f = 2, so that the same final dimensional time of the previous configuration is achieved. The chosen time step is Δ t = 2.5 · 10^-3. One can easily notice from Figure <ref> that, with higher Atwood number, the roll up effect is enhanced. This points out the earlier appearance of the Kelvin-Helmholtz instability, due to the development of short wavelength perturbations along the fluid interface. The deal.II library supports non-conforming mesh adaptation. We employ the h-adaptive version of the scheme for the latter configuration. More specifically, we define for each element K the quantity η_K = max_i ∈𝒩_K|∇ϕ|_i, which acts as local refinement indicator. Here 𝒩_K denotes the set of nodes over the element K. We allow to refine when η_K exceeds 10 and to coarsen below 5. 
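A sketch of the resulting refinement flagging is reported below, assuming that the nodal values of |∇ϕ| are already available on each element; the array layout and the sample values are invented for the example, while the thresholds follow the text.

```python
# Gradient-based refinement flagging: the largest nodal value of |grad(phi)|
# on each cell acts as the indicator eta_K; cells are refined above 10 and
# coarsened below 5, as in the text. The sample data are illustrative.
import numpy as np

refine_threshold, coarsen_threshold = 10.0, 5.0

# grad_phi_nodes[c, i] = |grad(phi)| at node i of cell c (4 nodes per quad).
grad_phi_nodes = np.array([[0.2, 0.3, 0.1, 0.4],
                           [8.0, 12.5, 11.0, 9.5],
                           [5.5, 6.0, 4.8, 5.1]])

eta = grad_phi_nodes.max(axis=1)             # eta_K per cell
flag_refine = eta > refine_threshold         # cells marked for refinement
flag_coarsen = eta < coarsen_threshold       # cells marked for coarsening
print(eta, flag_refine, flag_coarsen)
```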
The initial grid is composed by 80 × 320 elements and we allow up to two local refinements, so as to obtain h = 1/320 and a maximum resolution which would correspond to a 320 × 1280 uniform grid. As one can notice from Figure <ref>, the refinement criterion is able to increase the resolution only in correspondence of the interface between the two fluids. The final grid consists of 43147 elements, corresponding to around 40 % of elements of the fixed uniform grid. Figure <ref> shows a comparison of the interface between the simulations with uniform and adaptive grid both at t = T_f/2 and t = T_f. One can easily notice that at t = T_f/2 the two interfaces are indistinguishable, whereas at t = T_f a slightly different development of the instability appears. Since we are analyzing a fluid mechanic instability, every small variation in the flow corresponds to large variations, and, therefore, it is difficult to say which solution is the more reliable. Similar results and considerations have been reported for a Kelvin-Helmholtz instability in <cit.>. §.§ Rising bubble benchmark The rising bubble benchmark is a well-established test case for the validation of numerical methods for incompressible two-phase flows <cit.>. More specifically, the evolution of the shape, position and velocity of the center of mass of a rising bubble is compared against the reference solution in <cit.>. Two configurations are considered with the corresponding physical parameters and non-dimensional numbers listed in Table <ref> and <ref>, respectively. The bubble occupies the subdomain Ω_2. Following <cit.>, we set L_ref = 2r_0 = 0.5 and U_ref = √(g L_ref) = 0.7. We consider as domain Ω = (0, L_x) ×(0, L_y), with L_x = 2 and L_y = 4, whereas the final time is T_f = 4.2. No-slip boundary conditions are imposed on the top and bottom boundaries, whereas periodic conditions are prescribed in the horizontal direction. The initial velocity field is zero. Finally, the initial level set function is described by the following relation: ϕ(0) = 1/1 + exp(R - √((x - x_0)^2 + (y - y_0)^2)/ε), with R = 1, x_0 = y_0 = 1. We compute as reference quantities the position 𝐱_c, the velocity 𝐮_c of the center of mass, and the so-called degree of circularity χ, defined respectively as 𝐱_c = ∫_Ω_2𝐱dΩ/∫_Ω_2dΩ = ∫_Ω_2𝐱dΩ/|Ω_2| 𝐮_c = ∫_Ω_2𝐮dΩ/∫_Ω_2dΩ = ∫_Ω_2𝐮dΩ/|Ω_2| χ = 2√(π|Ω_2|)/P_b, where Ω_2 is the subdomain occupied by the bubble, |Ω_2| is the area of the bubble, and P_b is its perimeter. The degree of circularity is the ratio between the perimeter of a circle with the same area of the bubble and the current perimeter of the bubble itself. For a perfectly circular bubble, the degree of circularity is equal to one and then decreases as the bubble deforms itself. Since ϕ is a regularized Heaviside function, we can compute the reference quantities as follows: 𝐱_c ≈ ∫_Ω𝐱(1- ϕ)dΩ/∫_Ω(1- ϕ)dΩ 𝐮_c ≈ ∫_Ω𝐮(1- ϕ)dΩ/∫_Ω(1- ϕ)dΩ χ ≈ 2√(π∫_Ω(1- ϕ)dΩ)/∫_Ω|∇ϕ|dΩ. We start with the first configuration and we set M = 0.0005, corresponding to c = 1400, which is of the order of magnitude of the speed of sound in water. The computational grid is composed by 320 × 640 elements, leading to h = 1/160, whereas the time step is Δ t = 6 · 10^-3, yielding a maximum advective Courant number C_u≈ 1.4 and an acoustic Courant number C = 1920. Finally, we set ε = h, Δτ = 0.05h, u_c = 0.05 u_max and β = 0.5. We point out here the fact that results in the Figures have been compared with the results of Group 2 in <cit.>. 
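To make the evaluation of these reference quantities concrete, a minimal Python/NumPy sketch on a uniform sampling grid is reported below; in the actual solver the integrals are evaluated with finite element quadrature on the Discontinuous Galerkin solution, so the nodal sampling and the gradient-based estimate of the perimeter used here are simplifying assumptions:

import numpy as np

def bubble_diagnostics(phi, u, v, dx, dy):
    """Approximate centre of mass, rise velocity and degree of circularity.

    phi, u, v : 2-D arrays with the level set and the velocity components
                sampled on a uniform grid of spacings dx, dy.
    """
    w = 1.0 - phi                                   # weight selecting the bubble region
    dA = dx * dy
    area = w.sum() * dA                             # |Omega_2| ~ int (1 - phi) dOmega
    ny, nx = phi.shape
    x = (np.arange(nx) + 0.5) * dx
    y = (np.arange(ny) + 0.5) * dy
    X, Y = np.meshgrid(x, y)
    x_c = np.array([(X * w).sum(), (Y * w).sum()]) * dA / area
    u_c = np.array([(u * w).sum(), (v * w).sum()]) * dA / area
    gy, gx = np.gradient(phi, dy, dx)
    perimeter = np.sqrt(gx**2 + gy**2).sum() * dA   # P_b ~ int |grad phi| dOmega
    chi = 2.0 * np.sqrt(np.pi * area) / perimeter   # degree of circularity
    return x_c, u_c, chi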
Figure <ref> shows the shape of the bubble at t = T_f and one can easily notice that we are able to recover the reference shape of the bubble. Figure <ref> reports the evolution of the degree of circularity. A good qualitative agreement is established, with only slightly lower values for our numerical results. Figure <ref> reports the evolution of the vertical coordinate of the position of the center of mass. For a quantitative point of view, the center of mass reaches y_c = 2.156, which is in good agreement with the value y_c = 2.162 ± 0.002 reported in <cit.>. Finally, Figure <ref> shows the evolution of the vertical coordinate of the velocity of the center of mass. The maximum rise velocity of the center of mass is v_c = 0.3461, which is again in good agreement with the value v_c = 0.3456 ± 0.0003 present in <cit.>. We analyze now the second configuration. The time step is Δ t = 5 · 10^-3, yielding a maximum advective Courant number C_u≈ 1.4 and an acoustic Courant number C = 1600. We also set ε = h = 1/160, Δτ = 0.05h = 3.125 × 10^-4, u_c = 0.0125 u_max and β = 2. Figure <ref> shows the shape of the bubble at t = T_f. The bubble develops a non-convex shape with thin filaments. The solutions given in <cit.> are different and, in some cases, the thin filaments tend to break off, although it is unclear if such a phenomenon should be observed in the current two-dimensional setting. The obtained profile is however in good agreement with that of Group 2 in <cit.>. Figure <ref>, <ref>, <ref> show the evolution of the degree of circularity, the vertical coordinate of the position of the center of mass, and the vertical coordinate of the velocity of the center of mass, respectively. A good qualitative agreement is established for the quantities of interest, even though deviations from the chosen reference solution are visible. In particular, differences appear for the degree of circularity starting from t ≈ 2.5, when the thin filaments start developing. Moreover, the second peak for the rising velocity reaches a lower value. As mentioned above, there is no clear agreement concerning the thin filamentary regions, and, therefore, their development can strongly affect computations of the reference quantities and can lead to different numerical results. We employ now Adaptive Mesh Refinement (AMR) to increase the resolution in correspondence of the interface. We consider the same refinement criterion (<ref>) and the same thresholds for η_K adopted in Section <ref> and we allow up two local refinements, so as to obtain h = 1/640 and a maximum resolution which would correspond to a 1280 × 2560 uniform grid. Figure <ref> shows both the shape of the bubble and the computational grid at t = T_f. One can notice that the resolution is enhanced close to the interface between the two fluids. The final grid consists of 283094 elements. Figure <ref> reports a comparison for the quantities of interest between the fixed grid simulation and the adaptive one. The results show that we have reached grid independence, since only the degree of circularity slightly differs between the two simulations, whereas the profiles of the vertical coordinates of both velocity and position of the center of mass are visually indistinguishable. A significant difference in the development of the thin filamentary regions depends on the modelling of the viscosity coefficient μ, as pointed out in <cit.> for diffuse interface models. 
A popular alternative to the linear interpolation model defined in (<ref>) is the so-called harmonic interpolation, defined as 1/μ = H_ε(φ) + μ_1/μ_2(1 - H_ε(φ)). This choice yields results which are more similar to Group 1 in <cit.>, where a break-up occurs (see Figure <ref>). For what concerns the quantities of interest, we notice from Figure <ref> that, since the thin elongated filaments break themselves, the degree of circularity is higher. Moreover, both the second peak of the rising velocity and the final position of the center of mass are significantly higher. The following analysis further confirms how challenging is defining a reference benchmark solution when the bubble undergoes large deformations. § CONCLUSIONS Building on the experience of <cit.>, we have proposed an implicit Discontinuous Galerkin discretization for incompressible two-phase flows. While discretizations of incompressible two-phase flows equations have been proposed in many other papers, we have presented here an approach based on an artificial compressibility formulation in order to avoid some well known issues of projection methods. The time discretization is obtained by a projection method based on the L-stable TR-BDF2 method. The implementation has been carried out in the framework of the numerical library deal.II, whose mesh adaptation capabilities have been exploited to increase the resolution in correspondence of the interface between the two fluids. The effectiveness of the proposed approach has been shown in a number of classical benchmarks. In particular, for the rising bubble test case, the influence of some possible choices for the mixture viscosity when the interface undergoes large deformations has been established, following an analysis previously carried out for diffuse interface models. In future work, we aim to exploit the possibility of considering well resolved interfaces for an analysis on the evolution equations of interfacial quantities, as well as an extension of analogous approaches to fully compressible flows. § ACKNOWLEDGEMENTS The author would like to thank L. Bonaventura and P. Barbante for several useful discussions on related topics. The author gratefully acknowledges N. Parolini for providing the original data of the rising bubble test case discussed in Section <ref>. The simulations have been partly run at CINECA thanks to the computational resources made available through the NUMNETF-HP10C06Y02 ISCRA-C project. This work has been partially supported by the ESCAPE-2 project, European Union’s Horizon 2020 Research and Innovation Programme (Grant Agreement No. 800897). plain
http://arxiv.org/abs/2307.06074v1
20230712105144
Turbulent flows are not uniformly multifractal
[ "Siddhartha Mukherjee", "Sugan D. Murugan", "Ritwik Mukherjee", "Samriddhi Sankar Ray" ]
physics.flu-dyn
[ "physics.flu-dyn", "cond-mat.stat-mech", "nlin.CD", "physics.data-an" ]
[email protected] [email protected] [email protected] [email protected] International Centre for Theoretical Sciences, Tata Institute of Fundamental Research, Bengaluru 560089, India The Frisch-Parisi multifractal formalism remains the most compelling rationalisation for anomalous scaling in fully developed turbulence. We now show that this formalism can be adapted locally to reveal the spatial distribution of generalized dimensions and of how multifractal the energy dissipation field is. In particular, we show that most regions of the flow are close to being mono-fractal and these are interspersed with islands of multifractality corresponding to the most singular structures in the flow. By defining a suitable measure Φ ( x) of the spatial variation of multifractality, we show that this grows logarithmically with the extent to which the energy dissipation varies locally around x. These results suggest ways to understand how singularities could arise in disparate regions of a flow and provides new directions in understanding anomalous dissipation and intermittency. We then employ the same technique to a non-intermittent, model turbulent flow to check the robustness of our conclusions. Turbulent flows are not uniformly multifractal Samriddhi Sankar Ray 13th July, 2023 ============================================== A central scaffold for interpreting and describing out-of-equilibrium pattern forming processes <cit.>, multifractality has inevitably woven itself into turbulence theory <cit.>, phenomenology <cit.> and data analysis <cit.>. The Frisch-Parisi multifractal model <cit.> indeed remains the most powerful theoretical justification for the statistical properties of turbulence like the anomalous scaling of correlation functions of velocity differences <cit.>, strongly non-Gaussian distributions of velocity gradients <cit.> and fluid accelerations <cit.>. Constructed on the premise of an intermittent, infinite Reynolds number flow with a form of local scale-invariance under the transformations r →λ r, u→λ^h u and t→λ^1-ht <cit.>, it provides corrections to the Kolmogorov mean field theory in a way which is consistent with measurements made in experiments and direct numerical simulations (DNSs). The key ingredient in the construction of such theories is the assumption that intermittency effects lead to a range of Hölder exponents h ∈ [h_ min, h_ max] for the (turbulent) velocity field u( x). The simplest interpretation for these exponents relate to the (inertial range) scale-invariance in turbulence of the longitudinal velocity difference δ u_r ≡⟨ u( x + r) - u( x) ⟩∼ r^h <cit.>. These ideas, when applied to the intermittent energy dissipation field ϵ ( x), leads to the remarkable result that the total dissipation in d-dimensional “boxes” of size r, denoted ℰ_r, scales as a fractal power-law with a variable scaling exponent α as ℰ_r ∼ r^α - 1 + d <cit.>. This is a direct consequence of the multifractal interpretation that despite the three-dimensional embedding dimension, the energy dissipation — which is a culmination of the energy cascading process — accumulates in different, entangled fractal subsets with unique dimensions. It is then possible to associate the fractal dimension f_α of these subsets with exponents lying between α and α + dα yielding the well-known singularity or multifractal spectrum f_α - α. The framework of these ideas are a powerful tool bridging the conceptual picture of the energy cascade with intermittency of the dissipation field. 
Indeed, it is easy to show that α = 3h is exactly the same (arbitrary) scaling exponent which leaves the Navier-Stokes equations invariant under its rescaling transformations <cit.>. Furthermore, within the Kolmogorov non-intermittent phenomenology, there is a single exponent h = 1/3 (i.e. α = 1) <cit.> which leads to the familiar 2/3 and 5/3 laws of turbulence. However, obtaining the measured exponents and the corrections to the Kolmogorov prediction from the Navier-Stokes equation still remains elusive. This singularity spectrum though remains a central pillar in modern statistical theories of fully developed turbulence. Beginning with the earliest measurements of Meneveau and Sreenivasan <cit.>, the robustness of the multifractal nature of the kinetic energy dissipation field has never been in question. And yet, all these measurements pertain to the statistics of the entire field (or signal). This is somewhat surprising because implicit in the ideas of multifractality is the spatial fluctuation of the scaling exponents over the flow field. Is it possible then to actually probe the multifractal nature of turbulence in a local way, i.e., to have estimates of the spatial dependence of the generalized dimensions D_q( x), the singularity spectrum f_α( x)-α( x) and thence of course the distribution of the Hölder exponents h( x)? The closest, so far, have been recent attempts <cit.> using wavelet techniques and local energy transfer concepts to characterize fields similar to the intractable h( x). While characterizing Hölder exponents point-wise may indeed be difficult, a local multifractal analysis, keeping the robustness of the Frisch-Parisi formalism, opens up a way to reveal the crucial underlying variation in multifractality, which has so far remained uncharted. In this paper we show how a locally adapted construction of multifractal measures of turbulence dissipation fields throws up surprises. In particular, we find that most of the flow is essentially monofractal with an almost delta function for f_α at α≈ 1 which corresponds to the Kolmogorov mean field exponent h = 1/3. The few patches of multifractal behaviour (with broad f_α( x)-α( x) curves) are directly correlated with spatial regions of enhanced dissipation and, by extension, intermittency. Indeed, a more accurate description of fully developed turbulence would be intermittent, multifractal islands on a vast and calm Kolmogorovean sea. Our analysis is based on kinetic energy dissipation fields ϵ ( x) ≡ 2ν S_ijS_ij, where S_ij is the symmetric part of the velocity gradient tensor, obtained from 3 different direct numerical simulations, with very different (Taylor-scale based) Reynolds numbers Re_λ, of the three-dimensional (3D), triply-periodic, incompressible Navier-Stokes equation. For the smallest Re_λ≈ 200, we use our own fully de-aliased pseudospectral code with N = 512^3 collocation points and a constant energy-injection rate on the first 2 shells. For higher Reynolds numbers, we use publicly available data from the Johns Hopkins Turbulence Database (JHTD) <cit.> with N = 1024^3 (Re_λ≈ 433) and 4096^3 (Re_λ≈ 610). Our results are consistent across this wide range of Reynolds numbers and independent of simulations; in what follows, we present converged results, using a 4096^2 × 192 subset of the 4096^3 dataset. Let us first recall how the classical multifractal spectrum for a d-dimensional dissipation field is constructed. 
Denoting the average dissipation on a scale r by ϵ_r, we construct N_r number of (space-filling) d-dimensional boxes of size r in the full domain. This allows us to estimate the total dissipation ℰ_r ∼ϵ_r r^d ∼ r^α - 1 + d within each box. By taking the q-th moment of this and summing over all N_r boxes one obtains the partition function Z_q ≡∑_N_rℰ_r^q ∼ r^(q-1)D_q, where D_q is the generalized dimension and the following relation holds: ∑_N_r r^(α - 1 + d)q∼ r^(q-1)D_q. Further analysis <cit.> yields the exact relations for the singularity spectrum: α = d/dq[(q-1)(D_q -d + 1)] f_α = α q - (q-1)(D_q - d + 1) + d-1 Clearly, within the mean field, monofractal Kolmogorov ideas for 3D turbulence, D_q = d = 3 leading to α = 1 (h = 1/3) and f_α = d = 3 and thence a δ-function like f_α - α curve. In Fig. <ref> we show a representative plot of a 2D (d = 2) slice of the ϵ field constructed from the 4096^3 JHTD data; the insets show the generalized dimensions D_q and the singularity spectrum f_α - α for this slice of data, which is consistent with results reported earlier <cit.>: The broad f_α curve is the most precise indicator that turbulence admits a range of scaling exponents and not just the mean field Kolmogorov exponent α = 1. However, as is well known — and illustrated in Fig. <ref> — the dissipation field is strongly intermittent. It is not immediately obvious, therefore, whether the fractal sets on which ϵ( x) is distributed are themselves uniform in space. There might, in fact, be an equally strong variation in multifractality over x, which would be revealed if it were possible to measure the generalized dimensions D_q( x) and f_α ( x) - α ( x) locally. These variations would not only help connect the ideas of the cascade with multifractality but also provide important insights in the detection of (possible) singular h < 1/3 regions with anomalous dissipation <cit.>. Important and enticing as this question is, the very nature of the multifractal calculation precludes any possibility of a single point x measurement of such quantities. This is because at a practical level the partition function Z_q( x) must be measured over a range of scales to extract the generalized dimensions D_q( x) from which follows the (local) singularity spectrum and scaling exponents. To circumvent this problem, we develop a tiling approach to allow us to self-consistently measure the spatial variation in the multifractality of the field. This is illustrated in Fig. <ref> where a (exaggerated) white grid is superimposed on the 2D slice of the dissipation field, leading to square data-tiles with egde ℒ_T (or cubical divisions used for 3D analysis). We then treat each of these tiles, centered at x, independently and calculate the multifractal measures in them as one would ordinarily for the full domain. The size of these tiles was tested in the range between 2η < ℒ_T < 16η, where η is the Kolmogorov dissipation scale. Larger tiles, which are still smaller than the inertial range, give a wider range of r values over which to construct Z_q( x) while, for an ideal local measure, we would like the tiles to be as small as possible. However, the lower end of ℒ_T is dictated by the constraint that we need enough points to measure the scaling of Z_q( x) unambiguously. At an operational level, this is not obvious since, unlike the full domain, we have fewer points within individual tiles on which the measurement can be made. 
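The tile-wise analysis lends itself to a compact implementation. The sketch below, written in Python/NumPy for a two-dimensional field for simplicity, follows the relations above: it accumulates the partition function Z_q over space-filling boxes, extracts D_q from the slope of log Z_q versus log r, and differentiates (q-1)(D_q - d + 1) numerically to obtain the α–f_α pair. The normalisation of the box measures, the least-squares fit and the exclusion of q = 1 are implementation choices, not part of the formalism:

import numpy as np

def local_multifractal_spectrum(eps_tile, q_values, box_sizes, d=2):
    """Box-counting estimate of D_q and (alpha, f_alpha) inside one tile.

    eps_tile  : 2-D array with the dissipation field restricted to a tile
    q_values  : array of moments q (q = 1 excluded to avoid division by zero)
    box_sizes : list of box edges r (in grid points) used for the scaling fit
    """
    logZ = np.empty((len(q_values), len(box_sizes)))
    for j, r in enumerate(box_sizes):
        nb = eps_tile.shape[0] // r
        # total dissipation E_r in each (space-filling) box of size r
        E = eps_tile[:nb * r, :nb * r].reshape(nb, r, nb, r).sum(axis=(1, 3))
        mu = (E / E.sum()).ravel()
        mu = mu[mu > 0]                              # guard against empty boxes
        for i, q in enumerate(q_values):
            logZ[i, j] = np.log(np.sum(mu**q))       # partition function Z_q
    logr = np.log(np.array(box_sizes, dtype=float))
    # Z_q ~ r^{(q-1) D_q}: the slope of log Z_q versus log r gives (q-1) D_q
    slopes = np.polyfit(logr, logZ.T, 1)[0]
    D_q = slopes / (q_values - 1.0)
    tau = (q_values - 1.0) * (D_q - d + 1.0)
    alpha = np.gradient(tau, q_values)               # alpha = d tau / dq
    f_alpha = alpha * q_values - tau + d - 1.0
    return D_q, alpha, f_alpha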
Calculating the local variation of D_q ( x) with q, requires obtaining a clean scaling of the partition function Z_q ( x) now measured within the tiles. Thus, in Fig. <ref>(A), we begin by showing representative plots of the partition function for q = -25.5 (solid lines) and q = 25.5 (dashed lines), calculated in three-dimensional tiles with ℒ_T ≈ 10η, at randomly chosen x locations. While this plot already suggests that a Z_q vs r scaling can be realiably obtained, we test the overall accuracy of doing so in Fig. <ref>(B) by calculating the cumulative distribution of the Pearson correlation coefficient ρ for linear-regression fits used to obtain D_q, over all tiles. The distribution shows a high degree of confidence, with more than 99.98% of tiles with ρ>0.98, for both q values. Clearly, these plots show that a local D_q( x) can be meaningfully extracted by using the prescription we propose. We found our results to be insensitive to the chosen ℒ_T, indicating the consistency and convergence of our approach. In what follows, we report results from a tiling of ℒ_T ≈ 10η and carry out the analysis in three dimensions. Before stepping into spatially varying multifractal spectra, we pause to look at the special case of D_q for q=2, also known as the correlation dimension, that provides a measure of inhomogeneity in a fractal set <cit.>, or simply D_2. Fig. <ref> shows a planar cross-section of the D_2( x) field, starkly varying in space, with sizeable pockets of coherent regions of similarly valued correlation dimensions. This also shows that the field is far from random, and at some level this is reflective of the structures in the dissipation field. We wish to underline that our method allows, perhaps for the first time, to visualize this field, which further opens up directions to study the structure of these intrinsic, and as yet elusive, features of turbulence. The dissipation field of course varies within these tiles, and we find it useful to keep track, for each tile, of the maximum ϵ_ max(𝐱), the minimum ϵ_ min(𝐱) and mean ϵ(𝐱) dissipation as well as use Δϵ ( x) ≡ϵ_ max(𝐱) - ϵ_ min(𝐱) as a measure of the fluctuation of the field; while all these values are presented as multiples of the global mean dissipation ⟨ϵ⟩. We are now equipped to calculate local measures of multifractality — D_q(𝐱) and f_α(𝐱) - α(𝐱) — and estimate conclusively how uniformly (or not) multifractal turbulence really is. We construct a precise estimate of this through Φ(𝐱) ≡ std(α(𝐱)) = √(⟨α^2 ⟩ - ⟨α⟩^2 ), where ⟨ .. ⟩ denotes an average over all values of α. This provides a quantitative measure, using the spread of singularity strengths, of the degree of local multifractality in the flow. We know that these multifractal measures, even locally, should satisfy bounds such as α⩾α_min=-2, f_α⩽α +2, and f_α⩾ 0 <cit.>. If the f_α spectrum has a peak f_α^⋆ corresponding to some α = α^⋆, then we have f_α^⋆≲ 3 (where α^⋆=1 and f_α^⋆=3 in the Kolmogorov framework). While there is no bound for α _max, it is reasonable to assume α _max≈ 3 for a region with no singular structures. Such monofractal regions can be expected to show Φ≈ 0. However, the largest values of Φ, corresponding to highly multifractal regions, is estimated as Φ≈√(⟨α^2⟩ - ⟨α⟩^2)≈ 1.7 ∼𝒪(1) for α uniformly ranging from -2 to 3. Hence, on such theoretical grounds we expect 0 ≲Φ( x) ≲ 1.7, with the lower and upper bound corresponding to mono and multifractal statistics, respectively. In Fig. 
<ref>(A) we show D_q ( x) vs q curves measured at different spatial positions, corresponding to different values of ϵ_ max. Clearly, while the shape of each curve is similar to the global statistics (Fig. <ref>, Inset A), a very strong spatial dependence on where we measure the generalized dimensions is unmissable. Furthermore, the spread in D_q ( x) is not trivially related to the magnitude of maximum dissipation around x; the secret to this variation, as we shall demonstrate, lies in how locally fluctuating (within each tile) the dissipation field is. The measurement of the generalized dimension D_q ( x) allows us now to calculate local singularity spectra. In Fig. <ref>(B), we show representative plots of f_α ( x) - α ( x) for the same locations (see legend in panel A) for which the generalized dimensions were calculated. Quite clearly — and contrary to what one sees in the conventional global measurements of the singularity spectrum [see, e.g., Refs <cit.> and Fig. <ref>, inset] — there are several regions where the flow is essentially monofractal (the f_α spectrum being very narrow) and fully consistent with the ideas of Kolmogorov, while other highly multifractal regions lead to broad f_α curves. Furthermore, we find a very slight, but detectable drift to higher values of α where the f_α peaks as ϵ_ max increases. These results already hint that multifractality can be considered as a local property of the field. In Fig. <ref>(C) we show a pseudo-color plot of Φ. Quite remarkably, much of the flow is Kolmogorov-like with Φ≪ 1; the highly multifractal regions — Φ ( x) ∼𝒪(1) — are isolated patches which, as we shall see, correlate completely with the extreme (singular) regions of energy dissipation. This result is remarkable. It illustrates that, surprisingly, turbulent flows are not uniformly multifractal; indeed on the contrary, much of the turbulent flow seems to respect, locally, Kolmogorov's ideas of an exact, self-similar cascade. We also note that the range of Φ( x) is well bounded by the theoretical range that we have discussed above. What determines the magnitude and variation of Φ ( x)? Measurements of the generalized dimensions and singularity spectra suggest that the strength of the local dissipation ϵ_ max is not where the answer lies. We find that the probability distribution function (pdf) of Φ, conditioned on Δϵ ( x), is revealing. In the inset of Fig. <ref> (D) we show this pdf for three different values of Δϵ. Clearly, as evident from the previous measurements, the distribution is sharply peaked at values of Φ≳ 0 with an (likely) exponential tail for Φ∼𝒪(1). We also find that the probability of having a higher degree of multifractality increases, albeit marginally, when there is a greater variation of ϵ ( x) within a tile. The mean value Φ, for a given Δϵ (sampled in windows of Δϵ± 0.25), in fact grows logarithmically, as seen clearly in Fig. <ref>(D). What then is the role of the average dissipation ϵ ( x) in determining the spatial non-uniformity of multifractality Φ ( x)? A joint-distribution (Fig. <ref> (E)) shows that the answer is fairly non-trivial. Clearly, for low values of ϵ ( x) it is far more likely to have Φ ( x) ≪ 1; although, surprisingly, the less-likely extreme values of Φ ( x) also coincide with regions of low ϵ ( x). This reflects that it is not the mean disspation in a region, but the variation of dissipation, that manifests multifractality (as shown in Fig. <ref> (D)). 
At higher ϵ ( x), the smallest values of Φ ( x) admissible slightly increases with ϵ ( x), while the largest values of Φ( x) also dip. While this result might appear contrary to our notion that extreme dissipation alone begets multifractality, it finds parallel in an equally intriguing finding from a recent study showing local Hölder exponents, measured by proxy, also do not trivially correlate with inertial dissipation <cit.>. In fact, experiments have shown that the most dissipative structures locally resemble Burgers vortices <cit.>. While these intense spots make the entire field highly intermittent and contribute to broadening the global multifractal spectrum, the local multifractal picture can be different. We finally cement these results with a visual illustration of where the Kolmogorov-like regions are embedded. We look at a snapshot (with volume-rendering) in Fig. <ref> (F) of the 3D dissipation field, restricted to large values of ϵ( x) ≥ 1. Superimposed on this is the local measure of Φ ( x) ≤ 0.5. Unlike the sparsely populated high ϵ( x) regions, the more frequent low ϵ( x) regions (hidden from view here) remain largely occupied by low Φ( x) (these regions are also coincident with mild to low kinetic energy). Clearly, then, the regions of monofractal flow are strongly correlated to the more populous regions of mild dissipation, showing that the Kolmogorov-like regions locally dissipate less than the multifractal regions. We have, so far, shown compelling evidence which suggests that multifractality in turbulent flow is not as spatially uniform as one might have suspected. In the absence of a robust theory to explain this singular feature of turbulence, we make a final test of these ideas in a Navier-Stokes-like flow which is guaranteed to be non-intermittent. An obvious choice for this is the so-called decimated turbulence model which was introduced by Frisch et al. <cit.>. The basic principle lies in (numerically) solving the Navier-Stokes equation on a Fourier lattice with a quenched disorder — namely the absence of a pre-chosen set of modes either randomly or fractally — by ensuring both the initial conditions and the non-linearity are projected on this sub-set of remaining Fourier modes. Subsequent to the introduction of this model, we now know <cit.> that such surgical removal of modes lead to a turbulence which is non-intermittent. We take advantage of such a flow to repeat the local multifractal analysis performed on regular turbulence. A confirmation of the conclusions drawn from our results would mean that the decimated flow ought to show lower values of Φ ( x) than what is measured in fully developed turbulence. Indeed, this is what we find in Fig. <ref>, showing measurements of the pdf of Φ ( x) for several different levels of fractal decimation (which leads to decreasing intermittency as seen from the pdfs of ϵ in the inset). The Φ distributions consistently shift toward lower values. Joint-distributions of Φ and ϵ were also found to show a simultaneous reduction in their spreads. This confirms a strong link between intermittency and local multifractality. In conclusion, we wish to highlight two equally important contributions of this paper. First is the finding that turbulence fields are not uniformly multifractal, but instead manifest strong multifractality in localized pockets of intermittency in a quiescent Kolmogorovean background of mild dissipation. 
This fills a lacuna in our understanding of turbulence where, owing to the very construction of the classical multifractal analysis, the notion of a spatially varying multifractality remained inconceivable. Secondly, our local analysis framework opens up a completely novel avenue for studying both the structure and dynamics of flow singularities and generalized dimension fields, in tandem with turbulence structures like intense vorticity worms <cit.>, non-locally induced velocity jets <cit.>, or precursors to singular dissipation <cit.>. This moves multifractality beyond its role as a statistically reductive tool that reveals little of the spatio-temporal minutiae, as noted before <cit.>, towards possible applications in the prediction and diagnostics of flows. Moreover, this localized analysis begs to be applied to data from across disciplines, where multifractality has been found to emerge in areas including physics and chemistry <cit.>, medicine <cit.>, geophysics <cit.>, climate <cit.> and finance <cit.>, and is likely to be revealing, as we demonstrate for the long-standing picture of turbulence, in unpredictable ways. We are grateful to Rajarshi Chattopadhyay for help with obtaining the JHTDB data at http://turbulence.pha.jhu.edu. We thank Jason Picardo and Jérémie Bec for insightful discussions and suggestions. S.S.R. acknowledges SERB-DST (India) projects MTR/2019/001553, STR/2021/000023 and CRG/2021/002766 for financial support. The authors acknowledge the support of the DAE, Government of India, under project nos. 12-R&D-TFR-5.10-1100 and RTI4001.
http://arxiv.org/abs/2307.03966v1
20230708123510
Multi-Intent Detection in User Provided Annotations for Programming by Examples Systems
[ "Nischal Ashok Kumar", "Nitin Gupta", "Shanmukha Guttula", "Hima Patel" ]
cs.AI
[ "cs.AI", "cs.SE" ]
Both authors contributed equally to the paper Work done during internship at IBM Research UMass Amherst United States [email protected] [1] IBM Research India [email protected] IBM Research India [email protected] IBM Research India [email protected] In mapping enterprise applications, data mapping remains a fundamental part of integration development, but it is time-consuming. An increasing number of applications lack naming standards, and nested field structures add further complexity for integration developers. Once the mapping is done, data transformation is the next challenge for the users, since each application expects data to be in a certain format. Also, while building an integration flow, developers need to understand the format of the source and target data fields and come up with a transformation program that can change data from the source to the target format. The problem of automatically generating a transformation program from some specification through the program synthesis paradigm has been studied since the early days of Artificial Intelligence (AI). Programming by Example (PBE) is one such technique that targets the automatic inference of a computer program to accomplish a format or string conversion task from user-provided input and output samples. To learn the correct intent, a diverse set of samples from the user is required. However, the user may fail to provide a diverse set of samples, which can lead to multiple intents or ambiguity in the input and output samples. Hence, PBE systems can get confused and generate a program with the wrong intent. In this paper, we propose a deep neural network based ambiguity prediction model, which analyzes the input-output strings and maps them to a set of properties responsible for multiple intents. Users can analyze these properties and accordingly provide new samples or modify existing samples, which can help in building a better PBE system for mapping enterprise applications. Multi-Intent Detection in User Provided Annotations for Programming by Examples Systems Hima Patel August 12, 2023 ======================================================================================= § INTRODUCTION String Transformation in mapping enterprise applications refers to the specific paradigm in the domain of Programming by Example (PBE) approaches, where a computer program learns to capture user intent, expressed through a set of input-output pairs, from a pre-defined set of specifications and constraints <cit.>. The set of specifications and constraints is expressed through the Domain-Specific Language (DSL), which consists of a finite number of atomic functions or string expressions that can be used to formally represent a program for the user to interpret. Most PBE systems <cit.> for string transformation use ranking mechanisms that are either built using heuristics or learned from historical data. These ranking systems are designed to favor two important characteristics: shorter and simpler programs. Such ranking systems mostly depend on the quality and number of input and output (I/O) annotation samples to learn a better program. The quality of the I/O samples denotes how well suited they are to generating a single-intent output. The number of given I/O annotation samples can vary depending on the user intent, but the fewer the better for the user (as the user has to provide fewer annotations).
Therein lies the challenge of learning correct intent i.e., if examples are too few, then many possible DSL functions can satisfy them, and picking one intent (or program) arbitrarily or based on some ranking mechanism that satisfies simplicity and smaller length criteria, might lead to non-desired intent program. This might yield a solution that works well only on the given I/O samples but not on unseen samples. Similarly, the quality of I/O samples (irrespective of high I/O samples count) plays an important role in generating the correct intent program. The above two challenges are critical for PBE kind of systems to understand the user's intent by analyzing the given I/O samples. This can lead to a sub-optimal program that works on seen data but does not give desired outputs for unseen data. Hence, it is important to understand whether the given I/O samples capture the user's desired intent correctly or not. For illustration, let's take an example shown in Table <ref>. "Train" columns denote the columns representing I/O samples used to generate a transformation program and "Test" columns denote the input sample column which is passed to the transformation program to generate an output. GT output column denotes the actual desired output. For each example, the user provides 3 I/O samples to generate a transformation program using <cit.>. In the first example, user intent is to extract substring after ”_" character, but here PROSE system learns the program which transforms test input “B_DS2345" into test output “2345" (see generated output column), which implies that the system learns to extract last numeric substring, which is different from a user-desired intent. This happens because there can be many possible programs to transform one set of inputs into outputs. Sometimes those programs converge to the same intent and other times it can lead to multiple intents. For example, in Table <ref>, for the first set of I/O samples, multiple programs can be possible. For test input sample “B_D2S345", where desired output value is “D2345". However, programs in Program(s) column generate different values for this example, first program generates - “345", second program -“D2S345", third program - “D2S345", fourth program - “345" and so on. This shows that all these consistent programs with I/O samples can lead to multiple intents (or outputs) on unseen data. For the above use case, two clear intents are - (a) Extract numeric substring after “_", and second intent is extract substring after “_". But if we look at the second row in Table <ref>, where we replaced the third sample with a better sample “GE_D443 - D443", then automatically first intent program got eliminated from the programs list. Hence, accessing the quality of annotations with respect to single or multiple intents is required for better PBE systems. If the user provides sufficient and single intent specific samples, the system can easily generalize to the rest of the samples. Hence, there is a need for a system that analyses the I/O samples that can help in finding multiple intent issues in annotations. This would help in informing the user about multi-intent issues before generating a transformation program. Therefore, we propose a framework to understand the quality of I/O samples to accurately predict a single confident program. To achieve this goal, we introduce a set of generic properties which helps to find ambiguity/multiple intents in a given set of I/O annotation samples. 
These properties are generic enough for most of the PBE systems because these properties are designed by analyzing several PBE systems' DSL. We propose a deep learning-based framework to automatically identify the presence of these properties in the annotations. The proposed framework takes a set of I/O samples annotation pairs as input and analyzes those samples together to classify the annotations to these properties. User can utilize this information to enhance the I/O samples, hence, generating more accurate, single intent, simpler and shorter program. In summary, the core contributions of our work are as follows: * Multi-Tasking Attention-Based Deep Neural Network to address the issues of input and output annotation quality to generate a program with the correct intent. * Defined a set of generic properties after analyzing several PBE systems' DSL that can help to find whether a given set of I/O samples can lead to multiple intents or not. * We present an extensive quantitative analysis of a synthetically generated dataset. We also show the motivation of each module of our proposed framework through an ablation study. * We also demonstrate the impact of detecting multiple intents and correcting them before building any PBE system. § OVERVIEW OF PROPOSED METHODOLOGY In this section, we discuss the overview of the proposed methodology, define the set of properties to detect multiple intents, and formally define the problem setting. For any PBE system, I/O samples play an important role in determining the correct intent program. Examples are an ambiguous form of specification: there can be different programs that are consistent with the provided examples, but these programs differ in their behavior on unseen inputs. If the user does not provide a large set of examples or less but good quality samples, the PBE system may synthesize unintended programs, which can lead to non-desired outputs. Hence, there is a need for a framework that can access the quality of I/O samples with respect to multiple intents before generating the program. To access the quality of I/O samples, the most important aspect is to understand how good I/O patterns are for PBE system DSL. The proposed framework (Figure <ref>) consists of two major modules, (a) For I/O annotations, defining set of properties which can cause ambiguity or multiple intents - we analyzed several string transformation specific DSL's, and came out with a generic set of properties which helps to identify whether given I/O samples can lead to multiple intents or not. However, the proposed system is generic enough that users can always add a new set of properties based on new functions introduced in DSL which can cause multiple intents, (b) Multiple intent analyzer - we designed a multi-tasking attention-based deep neural network to detect the ambiguity in given I/O samples based on given set of identified properties. The system first analyzes the user I/O annotation samples using the proposed deep learning framework to detect properties that cause multiple intents or ambiguities. In the next step, the user analyzes those detected properties and based on that, add or modifies samples in I/O annotations to improve the overall annotation quality to learn the correct intent program. In the next section, we will first discuss the properties that will be helpful to decide whether given I/O samples can cause multiple intents or not. 
In Section 2.2, we will describe the proposed deep learning-based framework which utilizes these properties to find the presence of multiple intents in given annotations. §.§ Properties to Detect Multiple Intent The most important part in finding the ambiguity or possibility of the multiple intents in a given annotation is to analyze the I/O samples for generic characteristics of operators present in the DSL. Mostly, all the DSLs that exist in the literature for string transformation-based PBE systems use similar kinds of operators like split, substring with regex or constant value as an argument, concat, replace, extract first substring, etc. We analyzed several string manipulation-specific DSLs and come out with five generic properties that can help in detecting the multiple intents in the I/O samples. Figure <ref> shows one of the DSL created by combining several other DSL's commonly used operators. There can be other string manipulations operators such as trim, but these are high-level operators and generally doesn't contribute in the multi-intent scenario. In this paper, we will use the DSL showed in Figure <ref> to illustrate the importance of the defined properties. Properties of I/O to detect the presence of multiple intents should be tightly bound to the DSL used for the PBE system. At the same time, those properties should also be (1) concise enough to capture the implicit or explicit multiple intent and (2) expressive enough to allow transformations to be achieved without any confusion in ranking between the programs. Below, we describe the set of 5 properties and the motivation behind their design. §.§.§ Similar Length Ambiguity - This kind of ambiguity can happen when the output substring of all the I/O pairs can be extracted by applying the same DSL operator on corresponding input annotations and have the same length. For example, in Table <ref>, example 1, following substring or continuous sequence in output “123" and “535" are extracted from the similar continuous sequence in input and also are of the same length, hence it is not clear that whether the user wants to extract everything after second “_" or just three characters. In terms of DSL, mostly this kind of ambiguity can be possible because of the outputs generated by constant length-based operators like substring with constant positions vs pattern-based operators like split, substring with a pattern. Formally, we can define this property as, given an example ((I_1, O_1),(I_2, O_2), ..., (I_l, O_l)) satisfies this property when the continues sequence of string in an output matches to the same continuous sequence of string in input and have the same number of characters across that sequence in all the output samples. I_l denotes the l^th input sample, and O_l denotes the corresponding output sample, and l denotes the total I/O samples in one example. §.§.§ Exact Position Placement Ambiguity - This kind of ambiguity can happen when the output substring of all the I/O pairs can be extracted by applying the same DSL operator on corresponding input annotations and extracted output string always starts or ends on the same position in the input string. For example, in Table <ref>, example 2, following substring or continuous sequence in output “Kumar" and “Williams" are extracted from the similar continuous sequence in input and also starts from the same position in input i.e. 
5, hence it is not clear that whether the user always wants to extract substring started from position 5 in input, or the user have some other desired intent (extract something after space character). In terms of DSL, mostly this kind of ambiguity can be possible because of the operators which allow constant positions to detect a position of substring vs operators which uses regex or split-based operation to extract substring. Formally, we can define this property as, given an example ((I_1, O_1),(I_2, O_2), ..., (I_l, O_l)), it satisfies this property when the continuous sequence of string in an output matches to the same continuous sequence of string in the corresponding input and it has a start or end always at the same position. §.§.§ Exact Match Ambiguity This kind of ambiguity can happen when the output substring of all the I/O pairs can be extracted by applying the same DSL operator on corresponding input annotations and extracted output substring across annotations have the same string value. For example, in Table <ref>, example 3, following substring or continuous sequence in output “11" and “11" are extracted from the similar continuous sequence in input and also have the same string value i.e. 11, hence it is not clear that whether the user always wants to have constant value 11 in output or the user want to extract this value from the input string. In terms of DSL, mostly this kind of ambiguity can be possible because of the operators which allow constant positions to detect a position of substring vs operators like split/substring which allow values to be extracted from the string itself. Formally, we can define this property as, given an example ((I_1, O_1),(I_2, O_2), ...., (I_l, O_l)) satisfies this property when the continues sequence of string in an output matches to same continuous sequence of string in input and have same value. §.§.§ Similar in Token Type Ambiguity This kind of ambiguity can happen when the output substring of all the I/O pairs can be extracted by applying the same DSL operator on corresponding input and extracted output substring across I/O pairs is of the same type. For example, in Table <ref>, example 4, following substring or continuous sequence in output “123" and “53" are extracted from the similar continuous sequence in input and also have the same value type, hence it is not clear that whether the user always wants to extract the same data type value or something else. Mostly, three types of tokens, Alphabet Tokens which consists of all uppercase and lowercase English alphabets, Numeric Tokens which consists of digits from 0 to 9 and Special-Character Tokens which consists of all printable special characters on the keyboard, are possible. Hence, we say that an example satisfies a similar token type if all its continuous substring in outputs are either all Alphabet Tokens or all Numeric Tokens or all Special-Character Tokens. In terms of DSL, mostly this kind of ambiguity can be possible because of the operators which allow a specific set of regex positions to detect a position of substring vs operators like split/substring which allows values to be extracted from string itself. Formally, we can define this property as, given an example satisfies this property when the continues sequence of string in an output matches to same continuous sequence of string in input and have same value type. 
§.§.§ Repeating Characters Ambiguity This kind of ambiguity can happen when the output substring of all the I/O pairs can be extracted by applying the same DSL operator on corresponding input annotations and have multiple instances of that output substring is possible in input. For example, in Table <ref>, example 5, following substring or continuous sequence in output “1" and “2" can be extracted from two similar positions from the input. Those positions can be defined by any low-level operators like constant positions, regex, or high-level operators like split, etc. In this case, that common substring is possible at two constant positions in input i.e. positions 3 and 9. Hence it is not clear that whether the user wants to extract a substring from position 3 or 9. This is DSL independent ambiguity, which can happen because the user provided the samples in the way that it internally it generating such kind of ambiguity. Formally, we can define this property as, given an example satisfies this property when the continues sequence of string in an output matches to multiple instance of continuous sequence of string in input. §.§ Problem Formulation Given a set of l input-output annotations ((I_1, O_1),.., (I_l, O_l)), and a set of p properties (P_1, P_2, .., Pp) which can help to detect multi-intents in I/O annotations. The goal of this task is to answer the question “Is there any multi-intent or ambiguity present in I/O samples", if yes, what kind of ambiguities exist. In this paper, p is set to 5, as we designed and discussed 5 properties in the last section that can hinder the generalization of PBE systems. To learn to detect these sets of ambiguities, we design the multi-tasking attention-based deep neural network model. We first generate a set of I/O annotation examples corresponding to each of the five ambiguous properties. We refer to a single I/O pair (I_1, O_1) as a sample and a group of I/O pairs to learn the program using any PBE system as an example. Here, l denotes the total samples (I/O pair) used for each example. In this work, we used l=3, which means in each example, we have three I/O samples. One example can have multiple properties issues also. Intuitively, the goal of our proposed task is to detect the ambiguities in the user-provided I/O annotations so that the user resolves these ambiguities by adding the new or modifying the existing samples. This will enable PBE systems to generate a single intent program that performs as desired on the unseen samples. In the proposed framework, we train a multi-tasking attention-based deep neural network model as shown in Figure <ref> to learn the ambiguities as expressed in the I/O examples. We define each task as a formulation to learn one type of ambiguity. Consequently, the proposed framework solves the five tasks at a time corresponding to ambiguity detection for five different properties. Our model follows an encoder-decoder architecture where the encoder is shared among all the tasks and the decoder is independent for each task. We pose this problem as a multi-class classification problem. Each example is classified against five ambiguous properties as positive or negative, where positive means that the example is ambiguous for that property and negative means that it is not ambiguous. 0.2in Model Architecture - We model the proposed framework shown in Figure <ref> for detecting ambiguities through a hard-parameter sharing paradigm for multi-task learning. 
As shown in Figure <ref>, the proposed framework consists of three modules, Common Encoder, Task-Specific Modules, and the Loss module. We discuss each of these modules in subsequent subsections. §.§.§ Common Encoder This module is used for encoding the raw I/O strings (see Figure <ref>) and consists of two sub-modules: * Character Level Embedding Layer - This layer maps each character of the I/O pairs in each example to a 128-dimensional learning space. Given an input (i_1,.., i_n) and an output string (o_1,.., o_m) consisting of a sequence of characters of length n and m respectively, this layer outputs a list of character embedding. Here, n refers to the maximum length of input among all the examples in the dataset. The input strings which are smaller than the maximum length are appended with <pad> tokens to make their length equal to n. The <pad> tokens specify that the current character does not signify the original string but marks the end of it or is used to make all sequences of the same length so that the deep learning tensor computations are easier. A similar procedure is followed with the output strings, where the maximum length of output among all the examples in the dataset is m. Each character i_t and o_s in the input and output sequence is mapped to the 128-dimensional raw embedding e_i_t and e_o_s respectively via a randomly initialized and trainable embedding matrix, where t ∈{1,..n} and s ∈{1,..m}. * Input Encoder - This layer uses LSTM representations <cit.> applied on the embedding e_i_t of the inputs of each example as shown in Equation <ref>. This layer helps to learn the sequential dependencies of the characters of the inputs. It takes the input embedding of each character e_i_t and passes them through a LSTM layer consisting of n separate LSTM cells with a hidden vector size of 512 as shown in Equation <ref>. h_i_t = LSTM(e_i_t,h_i_t-1), t ∈ (1...n) Hence, the Common Encoder takes I/O pair as input and produces two output representations, the raw 128-dimensional embedding of each character in the output sample and the LSTM encoded embedding of the Input sample in the I/O pair. These embeddings will be generated for each I/O pairs in an example. The outputs of the Common Encoder are then utilized by next Modules. §.§.§ Task Specific Modules These modules are designed for the detection of each ambiguity property. We have 5 such modules (one for each ambiguity) with a similar structure, which process the inputs obtained from the common encoder. Each Task-Specific Module contains an Additive Attention Output Encoder, Concatenation Layer, Convolution Neural Networks & Pooling Layer, and Softmax Layer as classification layer. The weights across all these 5 task-specific modules are not shared with each other. * Attention Output Encoder - In our architecture, we use additive attention mechanism <cit.> to selectively impart more importance to the part of the input which has more influence on the output characters and hence obtain better output sample encoding. Specifically, this layer computes the additive attention a_e_o_s of a single embedded output character e_o_s with respect to the encoding of all the input characters h_i_1..n as shown in the equations <ref> and <ref>. For this, we pass the output from the Input Encoder to the Attention Output Encoder which first computes the attention weights α_s_1..n as shown in eq. <ref> and the corresponding attention vector a_e_o_s as shown in eq. 
<ref> for each output character O_s with respect to all input characters in the I/O pair. Here, W_a and U_a are the learnable weight matrices. W_a corresponds to the output embeddings vector e_o_s and U_a corresponds to the input encodings matrix h_i_1..n. V_a is the learnable vector. The attention output a_e_o_s is concatenated with the output embedding e_o_s to give c_o_s as shown in equation <ref> and is passed through an LSTM layer with hidden vector size 512 as shown in eq. <ref>. α_s_1..n = V_atanh(W_ae_o_s + U_ah_i_1..n), s ∈ (1..m) a_e_o_s = Σ_tα_s_t h_i_t, t ∈ (1..n), s ∈ (1..m) c_o_s= [ a_e_o_s,e_o_s], s ∈ (1..m) h_o_s = LSTM(c_o_s,h_o_s-1), s ∈ (1..m) The Attention Output Encoder outputs m different LSTM encodings h_O_1..m for each output string of length m in l I/O pairs, which further passed to the next Layer. * Concatenation Layer - For this, we concatenate the l encodings corresponding to l I/O pairs for each example. Detecting ambiguity is possible only by analyzing all the I/O pairs in a given example and not just one I/O pair. These encodings are obtained from the Attention Output Encoder in a row-wise manner as shown in equation <ref>. Here, h_1_o_s refers to the attention-encoded output of the s^th character of the Output O_1 from the first I/O pair. Similarly, h_l_o_s refers to the attention-encoded output of the s^th character of the Output O_l from the l^th I/O pair. q_s = concat(h_1_o_s, h_2_o_s, ..., h_l_o_s), s ∈ (1..m) Q = [q_1, q_2, ...., q_m-1, q_m] The output of the Concatenation Layer is a matrix Q as shown in eq. <ref>. There are a total of m different rows in the matrix corresponding to the m characters of the Outputs in an I/O pair. More specifically, each row of the matrix represents the character-level concatenation of the output encodings from l different examples. This matrix is then passed into the next layer. * Convolution Neural Network and Pooling Layers - Convolution Neural Networks (CNNs) are used for finding local dependencies in features. In our architecture, CNNs help us to capture the dependencies between adjacent characters and subsequent encoded Outputs of the I/O pairs. The input to the CNN layer is the matrix Q for each example that we obtain from the Concatenation Layer. In this layer, we apply 2-dimensional convolution operations with 512 output channels where each channel contains a kernel of dimension (2, l*512) on the input from the concatenation layer. We then applying MaxPooling on the outputs of the CNNs across each channel to obtain a single vector r of size 512 dimensions for the I/O pairs in an example as seen in eq. <ref>. This 512 size vector is then passed into the next Layer. r = MaxPool2D(Conv2D(Q)) * Classification Layer - Classification Layer is a fully-connected dense layer with 2 neurons corresponding to either the positive or negative class for each ambiguous property classification to give the classification logits u. This is shown in equation <ref> where W_f and b_f are the weight matrix and the bias vector respectively. u = W_f r + b_f Classification logits from the Classification Layer are then passed through the Softmax Layer. * Softmax Layer - This layer applies the softmax activation function on the classification logits to obtain a probability distribution p over the prediction classes (ambiguous properties) as shown in equation <ref>. Here, z is used for indexing a single class among the positive and the negative classes. 
p = exp(u_z)/Σ_z exp(u_z), z ∈ (0, 1) §.§.§ Loss Calculation The proposed multitask learning framework uses Cross-Entropy loss between the original and predicted labels as the objective function for all the five task-specific modules. Equation <ref> denotes the loss from the k^th task-specific module. We use k to index the task-specific modules. p_k is the predicted probability distribution for the k^th task-specific module. y_k is the original probability distribution for the k^th task-specific module. We obtain the final loss L by taking a weighted sum of the individual losses L_k of each of the task-specific modules as shown in equation <ref>. Here w_k is the weight corresponding to the kth Loss L_k. L_k = -Σ[y_klogp_k + (1-y_k)log(1-p_k)] L = w_1*L_1 + w_2*L_2 + w_3*L_3 + w_4*L_4 + w_5*L_5 § RESULTS AND DISCUSSIONS §.§ Dataset Creation We created a dataset corresponding to the five different ambiguous properties discussed in Section <ref>. We have written different regexes satisfying each ambiguous property based on a fixed Domain Specific Language (DSL). For each ambiguity property, the regexes generate several examples, and each example consists of 3 I/O pairs. We consider uppercase English characters, lowercase English characters, digits from 0 to 9, and all printable special characters. We generate a total of 100002 individual samples, grouped in an example of 3 samples, to finally produce 33334 examples per ambiguous property. In the next few subsections, we describe the procedure of generating the dataset for each ambiguous property. Table <ref> shows examples corresponding to each property. §.§.§ Similar Length Ambiguity For each output substring in an example, we chose a length from a range of 2-9 characters. We limit the output substrings to a maximum of 4 for each sample. Each output substring will contain a mixture of lowercase, uppercase English alphabets, and digits from 0-9. We add random strings on the front and the back of each output substring to construct the input string. Similarly, we do this for other output substrings, and finally, combine the I/O substrings to make it a single I/O pair. We repeat the above process by fixing the output substrings size across the samples in a single example and combine those I/O pairs to make a single example. In our case, we use a set of three I/O pairs in a single example. We illustrate the process of creating I/O pairs through the following example. In the first step, we first assume an output substring of length three for sample-1 is “abc", for sample-2 is “klp" and for sample-3 is “12j". In the second step, we add random I/O strings before and after the first output substring for sample-1 “dfg1#abc#2311", sample-2 “era#klp#hj1", and sample-3 “h2ral#12j#klj23jk". In the third step, we create a new output substring that can follow this similar length property or not. We then repeat the second step, for example, let us assume that the second output substring is of varied length, let's say “hjuk", “puefhkj", and “jf16hsk". Now, either we append this directly to the input with some delimiter or first add some other random string before or after this string. In this case, we append this directly using delimiter “@", so final input strings become “dfg1#abc#2311@hjuk", “era#klp#hj1@puefhkj" and “h2ral#12j#klj23jk@jf16hsk". We can combine the output substrings using any character or directly. 
In this example, we combine them directly, which leads to the following output samples corresponding to the input samples: "abchjuk", "klppuefhkj", and "12jjf16hsk". We can repeat the same process by generating more output substrings for an example. §.§.§ Exact Position Placement Ambiguity The process of example generation for this ambiguity remains almost the same as for the "Similar Length Ambiguity" property. The only change is that instead of fixing the output substring length across samples, we fix the output substring's position in the input string. §.§.§ Exact Match Ambiguity In this case, the process differs with respect to the output substring value. The output substring value across the I/O pairs within the same example remains the same. This property inherently also satisfies the Similar Length Ambiguity. §.§.§ Similar in Token Type Ambiguity In this case, the process differs with respect to the output substring type. That is, the output substring's token type across the I/O pairs within the same example remains the same. In our work, we define two token types, viz. alphabets and numerals. More specifically, the two categories of similar token types are when the output strings contain either only uppercase and lowercase alphabets or only digits from 0-9. §.§.§ Repeating Characters Ambiguity In this case, the output substring exists (or repeats itself) at multiple positions in the input. §.§ Ablation Studies We compare the results of two major variations of the proposed framework: (a) two different loss functions, Cross-Entropy and Focal Loss, and (b) the importance of each layer, assessed by removing it from the framework. We consider the model in Figure <ref> as the main model. This model is referred to as Our in the results table. We carry out various ablation studies of the proposed model by removing individual components to ascertain the role played by each component in the model. These models are discussed below. §.§.§ Our_No_CNN: In this setup, we remove the CNN and the MaxPool layers from the proposed model architecture and only pass the concatenated output encodings to the classification layer. §.§.§ Our_No_AM: In this setup, we remove the Attention Mechanism from the proposed model. We retain the same output encoder but set the attention weight of each output character over all input characters equal to 1 while calculating the attention vector. §.§.§ Our_GRU: In this setup, we replace all LSTM layers and cells with GRU <cit.> cells in the proposed architecture. We retain the same overall architecture and keep the GRU hidden size equal to 512. §.§ Discussions §.§.§ Quantitative Results In Table <ref>, we compare the results of the proposed framework with two different loss functions, Cross-Entropy and Focal Loss. We also provide a quantitative analysis highlighting the importance of each layer: we remove the layer from the task-specific modules and then report the resulting performance. We show the property-wise performance in Table <ref>. From the result table, we can see that overall Cross-Entropy performs better than Focal Loss. The model was trained with 26,667 examples corresponding to each ambiguous property for 100 epochs with a batch size of 5. We set the weight of each of the five task-specific losses to 1. We report the results on the test set of 6,667 examples.
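For concreteness, the training objective described above can be sketched as follows. This is a minimal PyTorch-style sketch and not the exact implementation: the task names, the multitask_loss helper, and the random tensors standing in for the task-specific module outputs are illustrative assumptions. The per-task cross-entropy losses are combined as a weighted sum, with all weights set to 1 as in our experiments.

```python
import torch
import torch.nn as nn

# The five ambiguity-detection tasks handled by the task-specific modules.
TASKS = ["similar_length", "exact_position", "exact_match",
         "similar_token_type", "repeating_characters"]

def multitask_loss(task_logits, task_labels, weights=None):
    """Weighted sum of per-task cross-entropy losses (all weights w_k = 1 by default).

    nn.CrossEntropyLoss applies the softmax internally, so it takes the (batch, 2)
    classification logits produced by each task-specific module.
    """
    if weights is None:
        weights = {t: 1.0 for t in TASKS}
    ce = nn.CrossEntropyLoss()
    total = 0.0
    for t in TASKS:
        total = total + weights[t] * ce(task_logits[t], task_labels[t])
    return total

# Illustrative usage with random tensors standing in for the module outputs.
logits = {t: torch.randn(5, 2, requires_grad=True) for t in TASKS}   # batch size 5
labels = {t: torch.randint(0, 2, (5,)) for t in TASKS}               # binary labels
loss = multitask_loss(logits, labels)
loss.backward()
```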
The main model, denoted by Our, performs better than the other variations of the proposed framework when using the same loss metric. We also observe that removing the attention layer from the main model decreases performance by 10-20% in most cases, which highlights the need for an attention layer. A similar pattern is observed when we remove the CNN layer from the main model. In some cases, performance dropped to around 50%. Removing the CNN layer also degrades the model more than removing the attention layer, which shows that the CNN part of the architecture plays an important role in ambiguity detection. We also see a significant drop in performance in most cases if we replace the LSTM units with GRU units. The likely reason is that LSTM units capture context better than GRU units when a sufficient number of training samples is available <cit.>, which is the case for our model. Hence, this analysis shows the importance of the different layers in our proposed framework. Combining all these layers makes the system perform almost perfectly on the test set, which shows that these ambiguities can be learned easily if the architecture is designed to capture context, interrelationships, and the attention of the output on the input. In some cases, we observe that other variations also give perfect results, which indicates that for those properties a simpler network can also generalize to unseen test data. §.§.§ Saliency Maps To better understand the predictions of the proposed model, we use integrated gradients <cit.> based saliency on the inputs of the examples for visualization. We use three properties (similar length, exact match, and repeating characters) to illustrate the predictions of the learned model, as shown in Figure <ref>. For each of these properties, we use one example (three I/O samples) to visualize the saliency maps. We also use a single output substring for ease of visualization, since the maps become harder to interpret with multiple output substrings. The first row in Figure <ref> shows the saliency maps corresponding to the Similar Length Ambiguity property for the I/O pair {"input": ["niti gup", "klop kio", "xyz abc"], "output": ["gup", "kio", "abc"]}. From Figure <ref> (a), we can see that in all the inputs, more importance (shown by lighter colors with high values) is given to the characters that mark the beginning and the end of the part of the string ("gup", "kio", and "abc") which belongs to the output. That is, a higher saliency score is associated with the hyphen and the @end symbol, which mark the beginning and the end of the output string. Hence, we can conclude that the model is able to learn the Similar Length Ambiguity property. The second row illustrates the saliency maps for the Exact Match Ambiguity property for the I/O pair {"input": ["niti abc123", "klop abc123", "xyz abc123"], "output": ["abc123", "abc123", "abc123"]}. Here, it can be seen that, on average, more importance is given to the part of the input which contains the output as compared to the one which does not. That is, the characters corresponding to abc123 have higher saliency values as compared to the other parts, like niti, klop, and xyz, in the three inputs respectively.
Hence, we can conclude that the model recognizes the output strings clearly and therefore classifies them correctly. The third row shows the saliency maps for the Repeating Characters Ambiguity for the I/O pair {"input": ["M%qSFA8qb%We %qSFA8qb%", "1bN%i6Op4%YK%i6Op4%", "Yp%83cGK3%yRv%83cGK3%"], "output": ["qSFA8qb", "i6Op4", "83cGK3"]}. It can be noticed that the characters in the output string have, on average, higher saliency values at their second repetition in the input than at their first occurrence. This shows that the model recognizes the repeated characters well and hence classifies them correctly. We observed similar patterns for the other ambiguities. §.§.§ Case Study: Impact of detecting multiple intents and correcting them before building PBE systems In this section, we discuss how the presence of ambiguity in input and output annotations can affect the output of widely used tools like PROSE <cit.> and Microsoft Excel. Table <ref> shows the different ambiguities detected by the proposed system on 6 examples, and also shows whether existing PBE systems are able to learn the correct intent using those sets of I/O pairs. For each example, the user provides three I/O samples to convey the desired intent. However, as we can see from the detected-ambiguities column, each of these examples has some kind of ambiguity or multi-intent issue. The effect is reflected in the mismatch between the PROSE/Excel output columns and the ground-truth (GT) output column. This shows the need for a framework that helps figure out the multi-intent quality issues in the annotations before generating a program through any PBE system. In the first example in Table <ref>, the system detects "Similar in Token Type Ambiguity", because the substring (only one substring exists in this case) has the same token type across the outputs. This can lead to a multiple-intent issue: does the user want to extract everything after "_" irrespective of the token/data type, or is the user only interested in the specific numeric content in this case? The same multi-intent confusion is reflected in the outputs of two different PBE systems on the input "B_DS2345": (a) the PROSE output is "2345", meaning the PROSE framework learns to extract the numeric content after "_", and (b) the Excel output is "DS2345", meaning Excel learns to extract all the content after "_". It is therefore helpful if the user first analyzes the detected ambiguity; if that ambiguity is relevant to the user's actual intent, the user can accordingly either provide new samples or change the existing samples. For the first example, the user's intent is to extract everything after "_", and the detected ambiguity is Similar in Token Type. The user can therefore modify a sample or add one new sample in which the extracted output string also has non-numeric characters. With this additional I/O sample (highlighted in bold), provided by the user after analyzing the detected ambiguity, both PROSE and Excel are able to learn the correct intent. This is reflected in the output columns, i.e., the values of these columns are the same as the GT column (see Table <ref>). Similarly, if we analyze the fifth example in Table <ref>, the system detects multiple ambiguities. Exact Position, Similar Length, and Similar in Token Type ambiguities exist for both output substrings (Mohan/Abhil/Johny and Mr.). Exact Match Ambiguity exists only for the "Mr." substring in the output.
For the first output substring (Mohan/Abhil/Johny), the user is fine with Exact Position and Similar in Token Type Ambiguity. However, the user wants to add a new example to remove the Similar Length Ambiguity. Similarly, for the second output substring, the user is fine with all the detected ambiguities except Exact Position Placement Ambiguity, because the user's goal is not to extract this information from the input string, the user wants to add that as a constant string in the output. So, after analyzing these properties, the user can provide new samples which will remove these ambiguities to learn the correct intent. Also, we can see from the table that due to these ambiguities both PROSE and EXCEL system learn the intent wrongly. However, after analyzing the ambiguities, the user provided the new sample as shown in Table <ref>. This new sample helps the system to learn the correct intent, which can be seen through the correct output on the test data. Similarly, by providing new sample as shown in Table <ref> for other examples, the user will be able to resolve the multi-intent quality issue and also be able to learn the correct intent through existing PBE frameworks. This shows the effectiveness of our proposed framework to detect ambiguity in PBE systems specifically in the string transformation domain. § RELATED WORK Task-specific string transformation can be achieved via both program synthesis and induction models. Induction-based approaches obviate the need for a DSL since they are trained to generate required output directly from the input string and used in tasks like array sorting <cit.>, long binary multiplication <cit.>, etc. However, induction models are not feasible for the string transformation domain as they require to be re-trained for each task and have lower generalization accuracy on unseen samples than synthesis models <cit.>. In literature, both neural-guided-based and symbolic-based approaches have been widely used for program synthesis. Several neural-guided approaches have been proposed in the last few years for program synthesis <cit.>. A sequential encoder-decoder network to infer transformation programs that are robust to noise present in input-output strings, where the hand-engineered symbolic systems fail terribly is proposed in <cit.>. A different variant of an encoder-decoder network where input-output string encoders are not cascaded but work in parallel to infer program sequences is proposed in <cit.>. In <cit.>, a novel neural architecture consisting of a R3NN module that synthesizes a program by incrementally expanding partial programs is used. These networks can be trained end-to-end and do not require any deductive algorithm for searching the hypotheses space. However, they do not guarantee that inferred programs are consistent with the observed set of input-output pairs and also, training on synthetically generated datasets results in poor generalizability on real-world tasks. Symbolic Program Synthesis approaches operate by dividing required transformation tasks into sub-tasks and searching the hypothesis space for regex-based string expressions to solve each of them. However, smart search and ranking strategies to efficiently navigate the huge hypothesis search space require significant engineering effort and domain knowledge. 
One of the earliest attempts to solve the problem of program synthesis pioneered the Flash-Fill algorithm, designed to infer a specification-satisfying string transformation program in the form of Abstract Syntax Trees (ASTs) <cit.>. The PROSE system from <cit.> employs several hand-crafted heuristics to design ranking functions for deductive search. Systems like PROSE perform well on tasks similar to previously encountered ones, but face a generalizability issue when exposed to new, unseen tasks. This is also demonstrated in Table <ref>, where the system infers one intent that is satisfied by the seen examples but fails on new unseen test data. Since PBE systems for string transformations rely on input and output annotations, it is necessary to provide non-ambiguous input and output samples to them. No existing work in the literature addresses finding the ambiguities or multiple-intent quality issues in input and output annotations and providing that information to the user, so that the user can inspect the detected ambiguities and accordingly modify existing samples or provide new ones. Such a system helps capture the user's intent more clearly and makes the overall system generalizable on unseen data. Hence, in this paper we focused on finding the multi-intent quality issues in input-output annotations in order to learn the correct intent. § CONCLUSION This paper aims to solve the problem of detecting ambiguity in the user-provided I/O annotations for PBE systems, which leads to the generation of wrong-intent programs. To the best of our knowledge, our proposed framework is the first to address this issue at the input and output annotation level. To this end, we propose an extensible multi-task attention-based DNN to find the multiple intents in the I/O samples. We also define a set of generic properties that help in detecting the multiple intents in the annotations. We have carried out a quantitative analysis of different variations of the proposed model architecture to show the impact of the proposed system's modules. We have also illustrated the effectiveness of the proposed model through saliency maps and by using the outputs of an existing PBE system. A natural extension of our work is to use the detected ambiguity properties to automatically generate new input and output samples and to improve the program search space.
http://arxiv.org/abs/2307.05592v1
20230710180717
Functional PCA and Deep Neural Networks-based Bayesian Inverse Uncertainty Quantification with Transient Experimental Data
[ "Ziyu Xie", "Mahmoud Yaseen", "Xu Wu" ]
stat.ML
[ "stat.ML", "cs.LG" ]
Ziyu Xie, Mahmoud Yaseen, and Xu Wu (corresponding author, [email protected]), Department of Nuclear Engineering, North Carolina State University, Burlington Engineering Laboratories, 2500 Stinson Drive, Raleigh, NC 27695. Inverse UQ is the process of inversely quantifying the model input uncertainties based on experimental data. This work focuses on developing an inverse UQ process for time-dependent responses, using dimensionality reduction by functional principal component analysis (PCA) and deep neural network (DNN)-based surrogate models. The demonstration is based on the inverse UQ of TRACE physical model parameters using the FEBA transient experimental data. The measurement data is the time-dependent peak cladding temperature (PCT). Since the quantity-of-interest (QoI) is time-dependent and corresponds to an infinite-dimensional response, PCA is used to reduce the QoI dimension while preserving the transient profile of the PCT, in order to make the inverse UQ process more efficient. However, conventional PCA applied directly to the PCT time series profiles can hardly represent the data precisely, due to the sudden temperature drop at the time of quenching. As a result, a functional alignment method is used to separate the phase and amplitude information of the transient PCT profiles before dimensionality reduction. DNNs are then trained using PC scores from functional PCA to build surrogate models of TRACE, in order to reduce the computational cost in Markov Chain Monte Carlo sampling. Bayesian neural networks are used to estimate the uncertainties of DNN surrogate model predictions. In this study, we compared four different inverse UQ processes with different dimensionality reduction methods and surrogate models. The proposed approach shows an improvement in reducing the dimension of the TRACE transient simulations, and the forward propagation of the inverse UQ results has a better agreement with the experimental data. Keywords: Bayesian inverse UQ, functional alignment, functional PCA, neural networks. § INTRODUCTION Uncertainty Quantification (UQ) is the process of quantifying the uncertainties in the quantities-of-interest (QoIs) by propagating the uncertainties in input parameters through a computer model. In the field of nuclear engineering, most UQ research has focused on forward UQ (FUQ), which involves propagating input uncertainties through computational models to quantify uncertainties in QoIs. However, in FUQ, the input parameter uncertainties are often user-defined or based on subjective expert opinion, which lacks mathematical rigor and can introduce inaccuracies. To address this issue, inverse UQ (IUQ) has been developed in order to quantify input uncertainties based on experimental data. IUQ research in nuclear engineering primarily relies on statistical analysis, and the developed methods can be categorized into three groups <cit.>: frequentist (deterministic) <cit.> <cit.> <cit.> <cit.> <cit.> <cit.>, Bayesian (probabilistic) <cit.> <cit.> <cit.> <cit.> <cit.> <cit.> <cit.>, and empirical (design-of-experiments) <cit.> <cit.> <cit.> <cit.>. These methods compare computational simulations with experimental data to estimate the uncertainties of the model input parameters. Frequentist IUQ gives the most likely input parameters that can reproduce the experimental data. Bayesian IUQ quantifies the uncertainties of the input parameters by reducing the disagreement between simulation and experimental data.
Empirical IUQ seeks a range of input values based on which the model predictions can envelop the measurement data. See <cit.> for a more detailed review and comparison of these approaches. In addition, there has been a growing interest in IUQ research over the past decade in the nuclear engineering area. For example, multiple international activities have been undertaken to develop and evaluate the effectiveness of IUQ methods. In fact, many of the IUQ methods mentioned above are developed and/or improved within these international projects. Notable among these is the Post-BEMUSE Reflood Models Input Uncertainty Methods (PREMIUM) <cit.> benchmark, which focuses on core reflood problems and employs Flooding Experiments with Blocked Arrays (FEBA) tests to quantify and validate input uncertainties in system thermal-hydraulics (TH) models. The OECD/NEA has also performed two follow-up projects: the Systematic Approach for Input Uncertainty Quantification Methodology (SAPIUM) <cit.> and Application Tests for Realization of Inverse Uncertainty Quantification and Validation Methodologies in thermal-hydraulics (ATRIUM) <cit.> (ongoing). These projects aim to develop a systematic approach for quantifying and validating the uncertainty of physical models in system TH codes. In this paper, we will focus on the Bayesian IUQ method. Several improvements will be developed and implemented based on our previous work on the modular Bayesian approach <cit.> <cit.>. In Bayesian IUQ, Markov Chain Monte Carlo (MCMC) methods <cit.> are usually utilized to explore the posterior distributions of input parameters. MCMC generates samples that follow a probability density proportional to the parameter posterior distribution. However, a typical MCMC algorithm often requires more than 10,000 samples to reach a converged solution. This can be computationally expensive, especially for nuclear Thermal-Hydraulic (TH) system codes. To address this challenge, surrogate models can be employed to significantly reduce the computational cost. Surrogate models give an approximation of the relation of the input and output of the original computer model (also called full model), and they require only a limited number of full model runs for the training process. Some machine learning (ML) methods such as Gaussian process (GP) and deep neural network (DNN) have been widely used as surrogate models. The application of surrogate models to replace the original computational models introduces an additional source of uncertainty, referred to as the code or interpolation uncertainty <cit.> <cit.> in literature. Conventional DNN-based surrogate models give deterministic predictions of the QoIs for given inputs. Consequently, when used as surrogates, DNNs do not provide estimation of the code/interpolation uncertainty directly. To capture the approximation uncertainty introduced by using DNN-based surrogate models, in this study we will implement the Bayesian inference method for UQ of DNNs. Specifically, Bayesian neural networks (BNNs) are trained as surrogate models of the full model. A BNN is a neural network with distributions over parameters. In BNNs, a prior distribution is specified upon the parameters (weights, bias) of neural networks and then, given the training data, the posterior distributions over the parameters are computed, which are used to quantify predictive uncertainty. 
Our previous work <cit.> benchmarked three methods, namely, Monte Carlo dropout, deep ensembles, and BNN to estimate the prediction/approximation uncertainties of DNNs. In another study <cit.>, these methods were applied to time series data derived from TRACE simulations of the FEBA experiments. In this work, the quantified DNN prediction uncertainties, which are essentially the code/interpolation uncertainties when using DNN as surrogate models, will be incorporated into the Bayesian IUQ process. When performing IUQ for transient problems, the responses typically exhibit time-dependence, resulting in high-dimensional and highly-correlated data. Such high dimensionality and correlation can lead to challenges for surrogate modeling techniques such as GP <cit.> and DNN <cit.>. To overcome this challenge, dimensionality reduction methods such as principal component analysis (PCA) are often employed, and they have shown successful applications in nuclear engineering. In a study by Wu et al. <cit.>, the dimensionality of a time-dependent fission gas release model was reduced using PCA. The experimental data was transferred into the Principal Component (PC) subspace within a Bayesian IUQ framework. Similarly, Roma et al. <cit.> also utilized PCA in an IUQ study. It's worth noting that PCA has been employed in other areas, such as global sensitivity analysis <cit.>. However, conventional PCA may not accurately represent time series data when the transient profiles contain important phase and magnitude information that need to be preserved simultaneously. The standard PCA technique is applied to centered data, which may smooth out the phase and magnitude information of the dataset. To address this limitation, a functional PCA method has been developed, specifically designed to handle time series data and preserve both phase and amplitude information <cit.> <cit.>. By separating the phase and amplitude information of transient data, the functional PCA method overcomes the challenges posed by conventional PCA, enabling more accurate representation and preservation of the essential features in time series data. This functional PCA approach has shown successful applications in the field of TH system code predictions <cit.>, demonstrating significant improvements in dimensionality reduction performance. This work focus on developing a Bayesian IUQ process for time-dependent QoIs with a demonstration example using the FEBA experimental data and the TRACE computer model. We implemented and compared four different Bayesian IUQ processes: (1) conventional PCA with GP surrogate model as a reference solution, as it is one of the most widely used approach in literature for transient data <cit.> <cit.>, (2) conventional PCA with DNN surrogate model without code uncertainty, (3) functional PCA with DNN surrogate model without code uncertainty, and (4) functional PCA with DNN surrogate model with code uncertainty through implementation of BNN. Previously, relevant work has been preformed using FEBA experimental data in Bayesian IUQ <cit.>, but without considering the surrogate model uncertainty. 
The contribution and novelty in this work can be summarized as: (i) implementation of functional PCA for time-dependent QoI that contains important phase and magnitude information to be preserved during dimensionality reduction, (ii) surrogate modelling with DNN, while accounting for the code/interpolation uncertainties through BNN, solved with variational inference, (3) a comprehensive and systematic investigation of four Bayesian IUQ processes to study the influence of GP/DNN as surrogate models, conventional vs. functional PCAs, as well as the code/interpolation uncertainties introduced by surrogate models. A complete IUQ study usually includes sensitivity analysis to select the most influential calibrated parameters <cit.>, as well as FUQ and validation to test the IUQ results. In this study, we leveraged the sensitivity analysis study in our previous work <cit.>. It has been found that functional PCA improves the dimensionality reduction by a better reconstruction quality using only a few PCs. Using the functional PCA and DNN-based surrogate models while accounting for the code/interpolation uncertainty leads to the best Bayesian IUQ results. Forward propagation of the IUQ results and validation using experimental data not seen in IUQ have shown that the proposed approach has the best agreement with experimental data when compared with the other IUQ methods. The rest of the paper is arranged as follows. Section <ref> gives an introduction to the FEBA experiment and the TRACE computer model. Section <ref> introduces the PCA method with functional alignment, the method used for UQ of DNN model and the Bayesian IUQ methods. Section <ref> presents the results for functional PCA, surrogate modeling, various IUQ methods, forward propagation of the IUQ results and validation. Section <ref> concludes the paper and discusses the future work. § PROBLEM DEFINITION In the 1980s, the Karlsruhe Institute of Technology carried out a series of experiments known as FEBA <cit.> <cit.> to improve the understanding of heat transfer during reflooding. The experiment facility consisted of a full height 5 × 5 bundle of pressurized water reactor rod simulators, with a heater that provided a cosine power profile over the height of the rod, which is shown in Figure <ref> (a). The length of the rod was 4 m, with a heated length of 3.9 m, and the cladding temperature was measured at eight different elevations. This paper focuses on FEBA test series 1 test number 216, which is the baseline test with no flow blockage and undisturbed bundle geometry containing all grid spacers. The experimental conditions are: water flooding velocity of 3.8 cm/s, system pressure of 4.1 bars, feedwater temperatures of 48°C in the first 30s and 37°C at the end respectively, and power starting at 200 kW and decay heat transient corresponding 120% of ANS Standard about 40 seconds after reactor shutdown <cit.>. The reason for selecting this test is that it was well studied in the PREMIUM benchmark <cit.>. The TRACE (v5.0p5) <cit.> system TH code is used to simulate the experiments based on the given initial and boundary conditions in FEBA test 216. Figure <ref> shows the model built for TRACE simulation in this work and a typical peak cladding temperature (PCT) time series profile from TRACE simulation. For FEBA test 216, the time-dependent PCTs were measured at 8 different axial positions over the bundle. 
In this specific problem, we choose the experimental data at axial position z = 2225 mm for IUQ and data at other 2 axial positions for validation (z = 1135 mm and z = 3315 mm). The QoI is the whole transient PCT profile, which contains major phase and magnitude information including the maximum PCT (T_max), time to reach the maximum PCT (t_max), and the time of quenching (t_quench). In the TRACE model of the FEBA experiment, 36 uncertain physical model parameters in UQ section of TRACE system code <cit.> were initially considered. However, not all of these parameters are significant to the QoIs. To reduce the input dimension by identifying the significant physical model parameters, our previous research <cit.> performed a global sensitivity analysis study, resulting in the selection of four physical model parameters that are significant to the QoIs. These parameters are multiplicative factors that can be perturbed in the TRACE input deck, and their nominal values are 1.0, as shown in Table <ref>. These four physical model parameters will be considered as calibration parameters in the IUQ study. Uninformative uniform distributions were chosen for the priors in the range of [0,5]. The objective of IUQ is to determine the posterior distributions of these calibration parameters based on the chosen experimental data, such that the agreement between TRACE simulation and the FEBA experimental data can be improved. Furthermore, the quantified posterior uncertainties in these physical model parameters are expected to result in better TRACE prediction of experimental tests whose data is not used in the IUQ process. § METHODOLOGIES §.§ Principal Component Analysis Since the QoIs in this problem are time-dependent, which corresponds to an infinite-dimension response, one may pick the PCT values at many time points to adequately represent the time series evolution. This, however, will result in high-dimensional outputs that are also highly correlated. Because a surrogate model for TRACE has to be used in order to reduce the computational cost in MCMC sampling, it is impractical and computationally expensive to create separate surrogate models for all the outputs. To address this challenge, PCA, an unsupervised ML method, is usually employed to reduce the dimensionality of high-dimensional correlated data. PCA is a statistical procedure that uses an orthogonal transformation to convert possibly correlated data into a set of linearly uncorrelated variables. The resulting PCs are orthogonal to each other and the corresponding PC scores are treated as values of new variables, whose number is much smaller than the number of original variables. Furthermore, the limited number of selected PCs can still preserve the majority of the original data variance after dimensionality reduction. Once TRACE is used to simulate the time-dependent PCT profile, p = 1000 points are chosen evenly from the PCT profile. Note that a series of numerical tests has shown that such a number of points is sufficient, as it gives the same PCA results with cases when much larger values for p are used. Next, N = 500 samples are generated from prior distribution of input parameters by Latin-hypercube sampling (LHS), which is listed in Table <ref>. This results in a p× N data matrix 𝐀, in which the rows represent the high-dimensional correlated outputs and the columns represent different samples. To transform the original data matrix 𝐀 into an uncorrelated set of variables, we seek to find a p× p linear transform matrix 𝐏. 
The linear transformation 𝐏𝐀=𝐁 will result in a new p× N data matrix 𝐁, which contains the samples of the transformed uncorrelated variables. A typical 2-dimension PCA process is shown in Figure <ref>. To find the matrix 𝐏, we use the following steps: * Center the original data matrix 𝐀 by defining the row means as a column vector 𝐮. The centered data matrix 𝐀_centered is obtained by subtracting 𝐮 form each column of 𝐀. * Find the singular value decomposition (SVD) of 𝐀_centered: 𝐀_centered = 𝐔Λ𝐕^⊤ Where 𝐔 is a p× p orthogonal matrix , 𝐕 is a N× N orthogonal matrix, and Λ is a p× N diagonal matrix with non-negative real numbers on the diagonal. The diagonal entries of Λ are called the singular values of 𝐀_centered and are arranged in descending order. * Choose 𝐏 = 𝐔^⊤, then we have: 𝐏𝐀_centered = 𝐔^⊤𝐀_centered = Λ𝐕^⊤ = 𝐁 In this case, it can be proven that the new variables in the new data matrix 𝐁 are uncorrelated because its covariance matrix is diagonal. The matrix 𝐏 provides a linear transformation from the original data basis to the PC basis. The rows of 𝐏 are the PCs. The columns of 𝐁 contain the samples of the transformed variables, also called PC scores. * Determine the reduced dimension of the PC subspace p^* which could be much smaller than p based on the total variance explained by the PC subspace, using the diagonal entries in Λ. Usually, the variances explained by the PCs decrease rapidly, only a few PCs can explain 95% to 99% of the total variance. Using a small value of p^*, define a p^*× p transformation matrix 𝐏^* 𝐏^*𝐀_centered = 𝐁^* where 𝐁^* is a new data matrix with low-dimensional uncorrelated variables. * To reconstruct the original PCT time series profile based on a sample 𝐛^* in the PC subspace, we use the following relation: 𝐚_centered = (𝐏^*)^⊤𝐛^* Then the mean vector 𝐮 computed in step 1 will be added to 𝐚_centered, to obtain the original data series profile. Through this PCA process, we can reduce the dimension of the QoIs from p = 1000 (or even more, depending on how many points are picked from the transient curve) to less than 10. If the selected PCs are used as QoIs in IUQ process, the experimental data also need to be transferred into the PC subspace. The uncertainty of experimental data also needs to be transformed in a similar way using the following relation: Σ^*_data = 𝐏^*Σ_data(𝐏^*)^⊤ where Σ_data is a p× p matrix that includes the uncertainty of experimental data. It can be a full matrix if the correlations between the high-dimensional correlated responses are known. However, such information is usually not available so one may assume Σ_data is a diagonal matrix. From equation (<ref>) we can find that the new variance Σ^*_data in the PC subspace is usually a p^*× p^* full matrix with non-zero off-diagonal entries. This new data uncertainty matrix needs to be considered in the Bayesian IUQ process. §.§ Functional PCA In the conventional PCA method described above, the original data matrix is centered in the first step, and the mean vector has to be added back in order to reconstruct the data. As shown in Figure <ref>, each TRACE simulated PCT profile has its own phase and magnitude information, t_max, t_quench, and T_max. Using the mean vector will “smooth out” such important information. As a result, the conventional PCA method may not be able to recover such phase and magnitude information accurately using only first few PCs, even though they explain more than 99% of the total variance. 
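As a concrete illustration of the steps above, the following is a minimal numpy sketch of the SVD-based reduction and reconstruction. The function names and the synthetic data matrix are illustrative assumptions; in this work the matrix A would hold the 500 TRACE-simulated PCT profiles sampled at p = 1000 time points.

```python
import numpy as np

def pca_fit(A, var_target=0.99):
    """Conventional PCA of a (p x N) data matrix A via SVD, following steps 1-4 above."""
    u_mean = A.mean(axis=1, keepdims=True)                        # step 1: row means
    A_centered = A - u_mean
    U, S, Vt = np.linalg.svd(A_centered, full_matrices=False)     # step 2: SVD
    var_explained = np.cumsum(S**2) / np.sum(S**2)
    p_star = int(np.searchsorted(var_explained, var_target)) + 1  # step 4: truncation
    P_star = U[:, :p_star].T                                      # (p* x p) transformation
    B_star = P_star @ A_centered                                  # (p* x N) PC scores
    return u_mean, P_star, B_star

def pca_reconstruct(u_mean, P_star, b_star):
    """Step 6: reconstruct one profile from its PC scores b_star."""
    return P_star.T @ b_star + u_mean.ravel()

# Illustrative usage with a synthetic matrix (p = 1000 time points, N = 500 runs).
A = np.random.rand(1000, 500)
u_mean, P_star, B_star = pca_fit(A, var_target=0.95)
a_rec = pca_reconstruct(u_mean, P_star, B_star[:, 0])   # reconstruct the first sample

# The measurement covariance is mapped to the PC subspace in the same way:
# Sigma_star = P_star @ Sigma_data @ P_star.T
```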
There may be non-negligible fluctuations in the reconstructed PCT profiles near the quenching time, which is shown in Figure <ref>. To solve this problem, functional alignment <cit.> <cit.> is used to separate the phase and magnitude information of the original data matrix before dimensionality reduction. The combination of functional alignment and conventional PCA will be referred to as functional PCA (fPCA) in the following. Functional alignment aims in aligning the “landmark” points, which are t_max and t_quench in this problem, of the whole dataset to the same points. A composite function f(t) = f(γ(t)) is used to adjust the original function, where γ(t) is called the warping function, and f(t) is the warped data. The set of all warping functions γ(t) (Γ) will have the following property: Γ = {γ : [0,t]→ [0,t] |γ(0) = 0, γ(t) = t, γ is a monotonically increasing function} The main problem is to find the warping functions γ(t) that can align all the functions at the landmark points. Many methods have been developed for determining γ(t) through minimizing the cost function inf _γ∈Γ ||f_1(t) - f_2(γ(t))|| <cit.> <cit.>. Here, we will introduce the square root slope function (SRSF) method <cit.> <cit.>, which uses the square root slope to represent the original function. The SRSF of the original function f(t) is defined in the following form: q(t) = sign(f(t))√(|f(t)|) If f(t) is a continuous function, then the SRSF q(t) is square-integrable. The function f(t) can be calculated using the integral f(t) = f(0)+∫_0^t q(s)| q(s)| ds, since q(s)| q(s)| = ḟ(s). If we warp a function f by γ, the SRSF of f(γ(t)) is given by: q(t) = q(γ(t))√(γ(t)) A new cost function is defined based on the norm of two SRSFs. The warping function γ(t) is determined by minimizing this cost function. D_y(f_1, f_2)=inf _γ∈Γq_1-(q_2(γ)) √(γ̇) After the separation of amplitude and phase information for all of the samples, the original samples are transferred into a series of warped data with aligned landmarks points and warping function includes phase information. Figure <ref> shows an example of functional alignment of a series of functions. Afterwards, conventional PCA is applied to all warped data f(t) and warping functions γ(t) for dimensionality reduction. The data of f(t) and γ(t) could be represented by the first few PCs. To reconstruct the original function f(t) using the limited number of PCs, warped data f(t) and warping function γ(t) are first reconstructed based on the related PCs based on inverse PCA. Finally, phase and amplitude reconstructed functions are combined through f(t) = f(γ^-1(t)) to generate the original function before functional alignment. Since the warping function should be monotonically increasing, a smoothing function is applied to the PCA reconstructed γ(t) functions to avoid non-monotonic issues calculating the inverse function γ^-1(t). Note that previously curve registration and alignment for FEBA benchmark has been applied in an earlier work <cit.>. The major focus of this work is building DNN-based surrogate models for the PC scores after functional alignment and its application in IUQ and FUQ. Figure <ref> shows the procedure of fPCA application. In this framework, surrogate models like DNN are used to represent the PC scores from phase and amplitude information, respectively. When new samples are given, the predictions of DNN surrogate models go through an “inverse fPCA” process, as discussed above, to reconstruct the original time series data. 
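The fPCA pipeline described above can be outlined with the following minimal sketch. The helper names are illustrative assumptions; the search for the optimal warping functions, solved in practice by dynamic programming (for example in the fdasrsf package), is only indicated as a placeholder, and the PCA step reuses the conventional PCA sketch from the previous subsection.

```python
import numpy as np

def srsf(f, t):
    """Square root slope function q(t) = sign(f'(t)) * sqrt(|f'(t)|)."""
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

def warp(f, t, gamma):
    """Evaluate the composite (warped) function f(gamma(t)) by interpolation."""
    return np.interp(gamma, t, f)

def inverse_warping(gamma, t):
    """Numerical inverse gamma^{-1}(t) of a monotonically increasing warping."""
    return np.interp(t, gamma, t)

def reconstruct_profile(f_warped, gamma, t):
    """Combine the amplitude and phase parts back: f(t) = f_warped(gamma^{-1}(t))."""
    return np.interp(inverse_warping(gamma, t), t, f_warped)

# Outline of the fPCA pipeline for an ensemble of PCT profiles (p x N):
#  1. Pick a template (e.g., a mean SRSF) and, for each profile f_i, find the
#     warping gamma_i minimizing || q_ref - srsf(warp(f_i, t, gamma), t) ||;
#     this optimization is done by dynamic programming and is not reproduced here.
#  2. The aligned profiles warp(f_i, t, gamma_i) carry the amplitude information,
#     while the warping functions gamma_i carry the phase information.
#  3. Apply the conventional PCA sketch above to the two datasets separately.
#  4. For a new sample, reconstruct f_warped and gamma from their PC scores,
#     smooth gamma to keep it monotonic, and call reconstruct_profile.
```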
§.§ Bayesian Neural Networks In a standard neural network structure, the learnable parameters (weights and biases) are initialized randomly with deterministic values. During the prediction stage, one would anticipate a deterministic output for a given input since the weights and biases have fixed values after training. In contrast, a BNN <cit.> <cit.> is a neural network in which the learnable parameters follow random distributions. To train a BNN, prior distributions are assigned to the neural network parameters, and then, based on the training data, posterior distributions of the parameters are computed by updating the prior distributions during training process. Figure <ref> compares a standard neural network with a BNN. Following training, the BNN is evaluated at the same input several times, each time with its parameters sampled from the posterior distributions, resulting in different values for the prediction that can be used to obtain the predictive uncertainties. The inference of the posterior distributions is challenging since most DNNs nowadays have large number of parameters. To address this issue, various methods have been developed for Bayesian inference of neural networks, including sampling-based methods such as MCMC <cit.> and optimization-based methods like variational inference <cit.> <cit.>. Variational methods are advantageous because they converge faster, making them more suitable for large neural networks. In this study, we used variational inference to train the BNN. Note that we have treated the bias parameters as deterministic, as neural network predictions are less sensitive to these parameters than to weights. This is because the bias term is added to the product of weights and the activation from the previous hidden layer, and thus the impact of weights on DNN predictions is more significant than that of bias. A probabilistic model is assumed for the BNN, in which the weights are learned by using Maximum Likelihood Estimation (MLE). The posterior weights (𝐰) are computed during training based on Bayes' rule for a given training dataset (𝒟): P (𝐰 | 𝒟) = P (𝒟 | 𝐰) · P (𝐰)/P (𝒟) where P (𝐰) is the prior distribution for 𝐰 and it is assumed to be certain non-informative distribution, P (𝒟 | 𝐰) is the likelihood function, and P (𝐰 | 𝒟) is the posterior distribution for 𝐰. Prior and posterior represent our knowledge of 𝐰 before and after observing 𝒟, respectively. P (𝒟) does not contain 𝐰 so it is usually treated as a normalizing constant. It is sometimes referred to as the evidence term. When making predictions at a test data 𝐱^*, the predictive distribution of the output 𝐲^* is given by: P (𝐲^* | 𝐱^*) = 𝔼_P (𝐰 | 𝒟)[ P (𝐲^* | 𝐱^*, 𝐰) ] where the expectation operator 𝔼_P (𝐰 | 𝒟) means we need to integrate over P (𝐰 | 𝒟). The term P (𝐲^* | 𝐱^*, 𝐰) represents the probability of the prediction at a test point 𝐱^* and the posteriors of the weights. Each possible configuration of the weights, weighted according to the posterior distribution P (𝐰 | 𝒟), makes a prediction about 𝐲^* given 𝐱^*. This is why taking an expectation under the posterior distribution on weights is equivalent to using an ensemble of an infinite number of neural networks. Unfortunately, such expectation operation is intractable for neural networks of any practical size, due to a large number of parameters as well as the difficulty to perform exact integration. This is the main motivation to use a variational approximation for P (𝐰 | 𝒟). 
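In practice, the expectation above is approximated by Monte Carlo: the trained BNN is evaluated repeatedly at the same input, each time with weights drawn from their posterior, and the sample mean and standard deviation serve as the prediction and its uncertainty. The following is a minimal sketch under the assumption that the bnn model re-samples its weights on every forward pass, as variational-inference libraries typically provide; it is not the exact implementation used in this work.

```python
import torch

def bnn_predict(bnn, x, n_samples=200):
    """Monte Carlo approximation of the posterior predictive distribution.

    Each forward pass of `bnn` is assumed to draw a fresh weight sample from the
    posterior, so the stack of predictions approximates P(y* | x*); the mean is
    used as the surrogate prediction and the standard deviation as its
    code/interpolation uncertainty.
    """
    with torch.no_grad():
        preds = torch.stack([bnn(x) for _ in range(n_samples)], dim=0)
    return preds.mean(dim=0), preds.std(dim=0)
```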
Variational inference methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and ML. It is used to approximate complex posterior probabilities that are difficult to evaluate directly as an alternative strategy to MCMC sampling. An alternative variational distribution is proposed to approximate P (𝐰 | 𝒟), it consists of a distribution set whose parameters are optimized using the Kullback-Leibler divergence. For more mathematical and implementation details, interested readers are recommended to look at <cit.> <cit.> <cit.>. §.§ Bayesian Framework for IUQ FUQ requires knowledge of computer model input uncertainties to generate the uncertainty of simulation outputs. These inputs uncertainties are often determined by “expert opinion” or “user self-evaluation”. However, such determination lacks mathematical rigor and may be subjective, which may lead to misleading FUQ results. IUQ is a method to inversely quantify the input uncertainties based on the given experiment data while keeping the simulation results consistent with the experiment data. A modular Bayesian framework for IUQ has been developed <cit.> previously and it will be used in this work. In the following we will provide a brief introduction to Bayesian IUQ. Consider a computer model 𝐲^M( 𝐱, θ), where 𝐲^M is the model response, 𝐱 is the vector of design variables, and θ is the vector of calibration parameters. The differences of design and calibration variables have been discussed in our previous work <cit.>. Given the design variable 𝐱, the reality 𝐲^R( 𝐱) can be learned by (1) running model simulation, which involves model uncertainty δ(𝐱), and (2) performing experiments, which involves measurement uncertainty ϵ. These terms can be combined in the so-called “model updating equation” <cit.>: 𝐲^E (𝐱) = 𝐲^M( 𝐱, θ^*) + δ(𝐱) + ϵ where δ(𝐱) is the model uncertainty/discrepancy, due to missing/incomplete physics and numerical approximation errors during the modeling process. θ^* is the “true" but unknown values of θ. ϵ∼𝒩( 0, Σ_exp) represents the measurement/experimental uncertainty which is considered as a normal distribution. The model discrepancy term δ(𝐱) is dependent on 𝐱 which stands for design variables such as initial conditions or boundary conditions. Since only one experimental test is considered in this transient problem, we do not have enough cases of 𝐱 to learn δ(𝐱). Therefore, the model discrepancy is not considered in this study. Based on the assumption that the experimental uncertainty is Gaussian, ϵ = 𝐲^E (𝐱) - 𝐲^M( 𝐱, θ^*) follows a multi-dimensional normal distribution. As a result, the posterior distribution p ( θ^* | 𝐲^E, 𝐲^M) can be written as: p ( θ^* | 𝐲^E, 𝐲^M) ∝ p ( θ^*) · p ( 𝐲^E, 𝐲^M | θ^*) ∝ p ( θ^*) ·1/√(|Σ|)·exp[ - 1/2[ 𝐲^E - 𝐲^M]^⊤Σ^-1[ 𝐲^E - 𝐲^M] ] where p ( θ^*) is the prior distribution that can be provided by user evaluation or expert opinion. p ( 𝐲^E, 𝐲^M | θ^*) is the likelihood function. Prior and posterior distributions represent our knowledge of θ before and after observation of measurement data, respectively. Σ is the covariance of the likelihood which consists of two parts: Σ = Σ_exp + Σ_code where Σ_exp is the experimental uncertainty due to measurement error, and Σ_code is the code/interpolation uncertainty due to the use of surrogate models to reduce the computational cost. The term Σ_code = 0 if the computer model is used directly in the IUQ process, instead of the surrogate models. 
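To make the sampling step concrete, the following minimal sketch shows the unnormalized log-posterior evaluated at every MCMC step in the PC subspace. The surrogate callable is an assumed interface returning the predicted PC scores and the diagonal of Sigma_code (zeros for a deterministic surrogate); the uniform prior bounds of [0, 5] follow the setup described earlier.

```python
import numpy as np

def log_posterior(theta, y_exp_pc, Sigma_exp_pc, surrogate, prior_lo=0.0, prior_hi=5.0):
    """Unnormalized log p(theta | y^E, y^M) used by the MCMC random walk.

    `surrogate(theta)` is assumed to return the surrogate-predicted PC scores y^M
    and the diagonal of Sigma_code (zeros when a deterministic surrogate is used).
    """
    theta = np.asarray(theta)
    if np.any(theta < prior_lo) or np.any(theta > prior_hi):
        return -np.inf                                   # uniform prior on [0, 5]
    y_model_pc, sigma_code_diag = surrogate(theta)
    Sigma = Sigma_exp_pc + np.diag(sigma_code_diag)      # Sigma = Sigma_exp + Sigma_code
    r = y_exp_pc - y_model_pc                            # residual in the PC subspace
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (logdet + r @ np.linalg.solve(Sigma, r))
```

Only this function needs to be evaluated during the random walk, which is why replacing TRACE with the surrogates makes chains of tens of thousands of samples affordable.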
Note that a component for model uncertainty/discrepancy should be included in Σ_exp if possible. As discussed above, due to the very limited amount of data, it is not considered in this work. To calculate the posterior distribution, an adaptive MCMC algorithm <cit.> is used to generate samples following the probability densities of posterior distributions. To reduce the computational cost of MCMC sampling, surrogate models built with DNN are used to represent the simulation of computer models. In this project, we will compare GP and DNN as surrogate models to determine which approach lead to better IUQ results. Note that when we say “DNN-based surrogate models” with consideration of the code uncertainty Σ_code, we essentially mean BNN. BNN is a special implementation of DNN that accounts for the prediction uncertainty. § RESULTS AND DISCUSSIONS In this paper, we compare four different methods for IUQ with different dimensionality reduction and surrogate modeling approaches. Table <ref> lists the details of these methods. The conventional PCA process follows Section <ref>, while the fPCA process follows Section <ref>. In Method 1, the combination of conventional PCA and GP serves as a reference solution. GP has a unique feature compared to other ML methods, which is that the mean square error (MSE), also called the variance of the prediction is directly available. Therefore, Σ_code can be easily included in the IUQ process. Compared to DNN, GP is mainly used for problems with low-dimensional features and smooth responses. The combination of fPCA and GP has been explored in <cit.>. Since the main focus of this work is to demonstrate the applicability and benefits of fPCA with DNN while accounting for Σ_code, we will not repeat this approach. Methods 2 and 3 use conventional DNNs as surrogate models, while Method 4 uses BNNs. This section is arranged as follows. Section <ref> introduces the functional alignment results of TRACE simulation samples and gives a comparison of conventional PCA with fPCA. Section <ref> presents the results for validation of surrogate modeling and the UQ process for BNN models. Section <ref> presents the IUQ results, including the posterior distributions of the calibration parameters and their mean values and standard deviations. Section <ref> presents the validation results for the IUQ results, including FUQ for experimental tests at 3 different axial positions and comparison with model simulations based on the prior distributions and the experimental data. §.§ Results for fPCA To train the fast-running and accurate surrogate models for TRACE, 500 random samples are generated based on the prior distributions of 4 calibration parameters using LHS. Next, the conventional and functional PCA methods are performed to obtain the PC scores for the samples, which will be used as the training data for the surrogate models. For conventional PCA, we apply PCA to the PCT profiles directly. For fPCA, functional alignment is applied first to the PCT profiles before the PCA process. During fPCA, the original TRACE simulation data was separated into warped data which includes amplitude information, and warping function which contains phase information. The results are shown in Figure <ref>, the time to reach maximum PCT (t_max), and the time of quenching (t_quench) for all the 500 TRACE simulated PCT profiles are aligned to the same positions, respectively. After functional alignment, we have two series of datasets, f(t) for warped data and γ(t) for warping functions. 
Both datasets have 1000 time steps, which form two 1000 × 500 matrices, whereas the original dataset has only one 1000 × 500 data matrix. PCA is then applied to both datasets. Figure <ref> shows the total variance explained by the first 10 PC scores for PCA of the warped data, the warping function, and the original TRACE simulation data without functional alignment. The fPCA process shows a significant improvement compared to the conventional PCA because we need fewer PCs to account for the 99% of the total variance. The first 2 PCs for warped data and the first 4 PCs for warping functions are chosen as the new QoIs by fPCA, which can explain over 99% of the total variance. For conventional PCA, we choose the first 4 PCs as QoIs, which explain 95% of the total variance. The first 10 PCs would have to be chosen to account for 99% of the total variance. Figure <ref> shows the comparisons of TRACE simulation samples and reconstructed PCT profiles from PC scores with and without functional alignment. The reconstruction process means obtaining the original PCT profiles from the PC scores, as explained in Section <ref>. For the PCA method without functional alignment, we used 10 PCs, which explains the same variance as the PCs used in our fPCA study. In Figure <ref>, the reconstructed PCT profiles based on 6 PCs from fPCA show a good agreement with the TRACE simulation results, while the reconstructed PCT profiles by conventional PCA show oscillations as expected, especially near the quenching time when the PCT has a sudden drop. §.§ Results for surrogate modeling The QoIs, surrogate models and validation results of the four IUQ methods in Table <ref> are summarized in Table <ref>. Before training the surrogate models, all the PC scores are standardized to zero mean and unit variance and separated into three groups, 70% for training, 15% for validation and 15% for testing. All the surrogate models take the four calibration parameters in Table <ref> as inputs. For Method 1, one multi-dimensional GP model was trained for 4 PCs as outputs. For methods 2-4, separate DNN/BNN models were used to represent each PC. Neural network models can certainly represent multi-dimensional responses. However, it was found that the accuracy was not as good as training separate DNNs/BNNs for each PC as response. One possible reason is that the PCs scores are essentially samples for transformed uncorrelated variables. When they are used to train a single DNN/BNN, all the layers/neurons before the output layer are shared, while only the weights from the last hidden layer to the output layer are different. This can cause the DNN/BNN to have a less satisfactory performance. We have also performed hyperparameter tuning with grid search to find optimized neural network architectures, learning rates, etc. Because the amount of training data is small for this simple problem, the DNN models only have 3 hidden layers with 10, 20, and 10 hidden neurons, and 1 output layer with one neuron to represent the PC. Figure <ref>, <ref>, <ref> show the validation results for methods 1-3, using the testing dataset (note that the validation dataset has been used for hyperparameter tuning). Most of the surrogate models show a good prediction accuracy, with a R^2 (the predictivity coefficient) value larger than 0.95. However, the GP/DNN model for the fourth PC of conventional PCA (Figures <ref> and <ref>) and the DNN model for the fourth PC of the warping function (Figure <ref>) do not perform as well. 
Nevertheless, these higher-order PCs are not as important as the preceding ones because of their much smaller contribution to the total variance, as shown in Figure <ref>. Figure <ref> shows the results for the BNNs. In the training process based on variational inference, the weight parameters are assumed to follow Gaussian distributions, whose means and variances are learned. Once a BNN is trained, it can be evaluated multiple times at the same input, each time with different samples of the weight parameters. The resulting predictions can be collected as samples of the response, from which the mean values and variances (uncertainties) can be computed. To quantify the uncertainty of the BNN predictions, we perform 200 predictions for each sample, with different network parameters drawn from the posterior distributions of the BNN parameters. Figure <ref> presents the mean values and one-standard-deviation bands (std in the figure) compared to the test samples. It can be seen that, for the majority of the test samples, either the BNN mean values are close to the true PC scores, or the true PC scores lie within one standard deviation of the BNN predictions. One exception is the fourth PC of the warping functions, just like for Method 3 in Figure <ref>. As discussed above, this is not expected to cause an issue in the further analysis. The uncertainties of the BNN predictions will be included in the Bayesian IUQ process as Σ_code. In Method 1, the variance from the GP is directly available without further computation. However, this is not true for the BNN surrogate models in Method 4, because one needs to run the BNN many times in order to obtain the prediction uncertainties shown in Figure <ref>. This would significantly slow down the MCMC sampling process and would defeat our purpose of using surrogate models. Figure <ref> shows the relationship between the BNN predictions and the corresponding standard deviations for the testing samples, which is approximately linear in most cases. Based on this observation, we have made a simplification by fitting the standard deviations as linear functions of the BNN predictions. During MCMC sampling, the BNNs are evaluated only once at every random-walk step, and the uncertainties of the surrogate models (in terms of standard deviations) are evaluated from these linear relations. Note that the test samples for the second PC of the warped data appear to form two clusters instead of a linear relationship. In this case, we simply take the centroids of the two clusters and determine the standard deviation based on which cluster the BNN prediction is closer to. §.§ Results for IUQ For each IUQ method, 25,000 MCMC samples were generated to explore the posterior distributions of the calibration parameters. The MCMC sampling process takes about 1-2 hours using the surrogate models; it would otherwise take a few thousand hours using the TRACE system code. The first 5,000 samples were discarded as burn-in, since the MCMC chains have not converged at the beginning, as shown in Figure <ref>. Afterwards, we kept every 20th remaining sample (“thinning") to reduce the auto-correlation among the MCMC samples. The remaining 1,000 posterior samples were used to investigate the posterior distributions of the calibration parameters. Table <ref> and Figures <ref>, <ref>, and <ref> present the posterior distributions of the four calibration parameters obtained with the four IUQ methods. For Method 1, the posterior distributions based on the GP model show larger differences from those of the other methods.
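The sampling step just described can be illustrated with the sketch below. It uses a plain random-walk Metropolis algorithm rather than the adaptive MCMC scheme actually employed, and the functions surrogate_mean, surrogate_std, and log_prior are placeholders for the trained surrogates, the linearized BNN uncertainty, and the prior of the calibration parameters.

import numpy as np

def log_posterior(theta, y_exp, cov_exp):
    """Gaussian log-likelihood in PC space plus the log-prior."""
    mean = surrogate_mean(theta)                      # placeholder: surrogate prediction of the PC scores
    sigma_code = np.diag(surrogate_std(theta) ** 2)   # placeholder: linearized BNN std (Method 4)
    cov = cov_exp + sigma_code                        # Sigma_exp + Sigma_code
    resid = y_exp - mean
    return -0.5 * resid @ np.linalg.solve(cov, resid) + log_prior(theta)  # placeholder prior

def random_walk_metropolis(theta0, y_exp, cov_exp,
                           n_samples=25_000, step=0.05, burn_in=5_000, thin=20):
    theta = np.asarray(theta0, dtype=float)
    lp = log_posterior(theta, y_exp, cov_exp)
    chain = []
    for _ in range(n_samples):
        proposal = theta + step * np.random.randn(theta.size)   # random-walk proposal
        lp_prop = log_posterior(proposal, y_exp, cov_exp)
        if np.log(np.random.rand()) < lp_prop - lp:              # Metropolis accept/reject
            theta, lp = proposal, lp_prop
        chain.append(theta.copy())
    return np.asarray(chain)[burn_in::thin]                      # discard burn-in, then thin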
A potential reason for the larger differences observed with Method 1 could be that the prediction accuracy of the DNN-based surrogate models used in Methods 2-4 is better. For parameter , which is the most sensitive parameter among the four (see reference <cit.> for more details), the mean values obtained with the different methods are similar, which leads to similar PCT profiles (as shown in Section <ref>). The posterior results of all four methods demonstrate a significant reduction of uncertainty with respect to the prior distributions. Compared with Methods 2 and 3, the results from Method 4 have a larger uncertainty, since the code uncertainty from the BNN models is considered. One advantage of the Bayesian IUQ method is that it can identify the correlations between the calibration parameters through the random walk of the MCMC samples, even though the prior distributions are assumed to be independent. The marginal distributions and pair-wise joint distributions of the parameters are shown in Figures <ref> and <ref>. For example, for all four IUQ methods, and have a strong negative correlation. As a result, when generating new samples from the posterior distributions, the correlations between the calibration parameters should be taken into account. There are some differences in the posterior marginal/joint distributions obtained with the different IUQ methods; this is expected, since inverse problems are usually ill-posed and admit many different solutions. To validate the IUQ results, we will determine whether the posterior distributions make the TRACE simulations more consistent with the FEBA experimental data, not only for the test case that has been used in IUQ, but also for test cases unseen during IUQ. §.§ Results for FUQ and validation To determine which IUQ method produces the best results, we propagated the quantified posterior distributions of the calibration parameters through the TRACE model to obtain the updated prediction uncertainties in the PCT profiles. This step can take advantage of the GP/DNN surrogate models that were trained during the IUQ process, which reduces the computational cost of the FUQ process. We generated 1000 random samples from the joint posterior distributions of each IUQ method, and then used the corresponding surrogate models to generate the PC scores and, subsequently, the reconstructed PCT profiles. The results of the FUQ process based on both the posterior and the prior distributions at the axial position z = 2225 mm are shown in Figure <ref>, together with the FEBA experimental data. Note that this is only a proof-of-concept of Bayesian IUQ rather than a proper “validation", because the same data have already been used in the IUQ. It can be observed that: (1) the 95% confidence intervals based on the posteriors are much smaller than those based on the prior, due to the reduction of uncertainty by IUQ, and (2) compared with Methods 1/2, which use conventional PCA, the FUQ results of Methods 3/4 with fPCA are in better agreement with the experimental data, especially around t_max. After considering the code uncertainty through the BNN surrogate models, the FUQ results of Method 4 show a larger 95% confidence interval than Method 3, which covers most of the experimental data. To perform a more rigorous validation of the IUQ results, new experimental data that were not seen during IUQ should be used. We therefore performed FUQ and validation at two other axial positions (z = 1135 mm and 3315 mm) of FEBA test 216.
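For the axial position used in the IUQ, where the trained surrogates remain valid, the FUQ step described above amounts to the following sketch; posterior_samples, surrogate_mean, and reconstruct_pct are placeholders for the thinned posterior samples, the surrogate prediction of the PC scores, and the inverse (f)PCA reconstruction.

import numpy as np

# posterior_samples: (1000, 4) array of calibration parameters drawn from the joint posterior.
profiles = []
for theta in posterior_samples:
    pcs = surrogate_mean(theta)            # placeholder: predicted PC scores
    profiles.append(reconstruct_pct(pcs))  # placeholder: reconstructed PCT profile from the PCs
profiles = np.asarray(profiles)            # shape (1000, n_time)

mean_pct = profiles.mean(axis=0)
lower, upper = np.percentile(profiles, [2.5, 97.5], axis=0)   # 95% band to compare with the FEBA data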
Since the surrogate models are not applicable to these other axial positions, we need to run TRACE directly for the FUQ samples. We generated 300 random samples from the joint posterior distributions of the four IUQ methods and ran the TRACE model to obtain the PCT profiles. The FUQ and validation results at the two axial positions are shown in Figures <ref> and <ref>, respectively. For all of the FUQ results, the mean values based on the posteriors show a better agreement with the experimental data. The other observations are similar to those for Figure <ref>. Methods 1/2 produce results with a larger disagreement with the data before and around t_max, while Methods 3/4 produce results with a slightly larger disagreement with the data around t_quench. Overall, the FUQ results of Method 4 have the largest posterior uncertainty range and the best coverage of the experimental data. This demonstrates that the combination of fPCA and DNN-based surrogate models, while accounting for the code uncertainty, improves the Bayesian IUQ process for this transient dataset. § CONCLUSIONS This paper proposed a Bayesian inverse Uncertainty Quantification (IUQ) process for time-dependent responses, comparing four methods with different dimensionality reduction processes and surrogate models. We proposed a framework for Bayesian IUQ that combines functional principal component analysis (PCA) and deep neural network (DNN)-based surrogate models while accounting for the code/interpolation uncertainty. Functional PCA separates the phase and amplitude information of the time-series data before dimensionality reduction, which shows an improved performance over the conventional method. The use of DNN-based surrogate models has also proven to be very effective in representing the PC scores, and it significantly reduces the computational cost of Markov Chain Monte Carlo (MCMC) sampling. We also considered the code uncertainty of the surrogate models in Bayesian IUQ by adopting Bayesian neural networks (BNNs). Since the sampling-based UQ process for BNNs would increase the computational cost of the IUQ process, we estimate the BNN uncertainty with a linear regression model, exploiting the clear linear relationship between the BNN predictions and their uncertainties. The proposed approach has been applied to the peak cladding temperature in the FEBA benchmark. Forward Uncertainty Quantification (FUQ) and validation of the proposed IUQ method have demonstrated that the code simulations based on the posterior distributions are in improved agreement with the experimental data, while the uncertainty ranges envelop the majority of the experimental data. The primary limitation of this framework is that the model uncertainty is not considered in this IUQ study, since only one experimental test is considered. In future work, we will include the model discrepancy term that originates from missing or inaccurate physics in the system code, in order to design a more comprehensive IUQ process, and we will seek a mathematical representation of this discrepancy for the FEBA transient data. In addition, an IUQ method based on hierarchical Bayesian modeling can be applied, since data at different axial positions can be considered within such a model.
http://arxiv.org/abs/2307.05269v1
20230711140213
Stochastic equations and cities
[ "Marc Barthelemy" ]
physics.soc-ph
[ "physics.soc-ph", "cond-mat.dis-nn", "cond-mat.stat-mech" ]
[email protected] Université Paris-Saclay, CEA, CNRS, Institut de Physique Théorique, 91191, Gif-sur-Yvette, France Centre d'Analyse et de Mathématique Sociales, (CNRS/EHESS) 54, Boulevard Raspail, 75006 Paris, France Stochastic equations constitute a major ingredient in many branches of science, from physics to biology and engineering. Not surprisingly, they appear in many quantitative studies of complex systems. In particular, this type of equation is useful for understanding the dynamics of urban population. Empirically, the population of cities follows a seemingly universal law - called Zipf's law - which was discovered about a century ago and states that when sorted in decreasing order, the population of a city varies as the inverse of its rank. Recent data however showed that this law is only approximate and in some cases not even verified. In addition, the ranks of cities follow a turbulent dynamics: some cities rise while other fall and disappear. Both these aspects - Zipf's law (and deviations around it), and the turbulent dynamics of ranks - need to be explained by the same theoretical framework and it is natural to look for the equation that governs the evolution of urban populations. We will review here the main theoretical attempts based on stochastic equations to describe these empirical facts. We start with the simple Gibrat model that introduces random growth rates, and we will then discuss the Gabaix model that adds friction for allowing the existence of a stationary distribution. Concerning the dynamics of ranks, we will discuss a phenomenological stochastic equation that describes rank variations in many systems - including cities - and displays a noise-induced transition. We then illustrate the importance of exchanges between the constituents of the system with the diffusion with noise equation. We will explicit this in the case of cities where a stochastic equation for populations can be derived from first principles and confirms the crucial importance of inter-urban migrations shocks for explaining the statistics and the dynamics of the population of cities. Stochastic equations and cities Marc Barthelemy August 12, 2023 =============================== § INTRODUCTION Even if there is no universally accepted definition of complex systems, cities can certainly be considered as one of their most prominent example <cit.>. Cities are made of a large number of different constituents interacting on multiple spatial and temporal scales, leading to the emergence of multiple collective behaviors such as traffic jams, gentrification, etc. Obviously, there are many aspects in cities and a single`unified' model describing all of them seems out of reach for the moment. There are however simple quantities that describe the importance of a city and help monitoring their evolution. Among these quantities such as the surface area, the GDP, energy consumption, etc., the population appears as fundamental. Knowing this quantity about a city gives informs us about its structure <cit.>, and when combined with its location about its importance in the country <cit.>. The population of a city depends on the definition of its frontiers. Administrative boundaries are certainly now outdated and it is now more accurate to speak about urban areas. 
Standard definitions now involve the identification of an urban center and connect this center to the surrounding areas that exchange a large enough number of commuters with it (the so-called Metropolitan Statistical Areas in the US and the Larger Urban Zones - LUZ - in Europe). Another interesting definition relies on percolation and defines the city as the giant component of built areas <cit.>. It is important to be aware of this problem, but we won't focus on it here, and we will mostly consider data for the population of urban areas defined in standard ways. The population of cities displays a very large range of variation, from small towns with about 10^4 inhabitants to megacities with a population of about 10^7 individuals. This broadness rules out any type of optimum argument in which the concentration of individuals would lead to some cost-benefit equilibrium implying a unique typical size of cities. This sort of optimum discussion was very common in economics <cit.>, but seems to be in contradiction with empirical facts. Indeed, the statistics of urban populations follows a universal law - called Zipf's law <cit.> - which was discovered almost a century ago by the physicist Auerbach <cit.>. Auerbach sorted the population P of German cities in decreasing order (the rank r=1 then corresponds to the largest city, Berlin at that time), and noticed that the product P× r was roughly constant. Equivalently, Zipf's law for cities (a law extended by Zipf to many other systems) states that the population P of a city varies approximately as the inverse of its rank r. This empirical discovery led to almost a century of theoretical studies (see <cit.> for a recent review), mostly in economics <cit.>, in geography <cit.> and in physics <cit.>. The recent availability of data for different countries and time periods has however brought new facts. First, the validity of Zipf's law has been challenged <cit.>. In many cases, the Zipf plot does not display a simple 1/r behavior <cit.>, and the distribution of city sizes can appear to be much more complex than a power law, with candidate functions such as the lognormal or the stretched exponential <cit.>. Second, the ranks of cities follow a turbulent dynamics with sometimes very large fluctuations <cit.>: some cities rise while others fall and disappear. Both these aspects - Zipf's law and the turbulent dynamics - are obviously connected and need to be explained within the same theoretical framework. We will describe here the various attempts to explain these empirical facts, all of which are based on stochastic equations governing the time evolution of urban populations. A stochastic differential equation (SDE) is a differential equation where one or more terms or coefficients are random (usually white noise but, as we will see, the noise can be more complex). These random terms can be constants or functions, and their statistical properties are generally supposed to be given. As a result, the solution is also a stochastic process and can be described by its distribution or by its first moments <cit.>. These ubiquitous equations have been discussed at length in the mathematical literature and are naturally found in physics, but also in many other fields ranging from ecology to finance. The random terms in these equations usually describe the effect of the interaction of many elements for which it is impossible to describe all the details <cit.>.
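As a minimal numerical illustration of how such an equation is handled in practice (the specific equation and all parameter values below are purely illustrative and not taken from the urban models discussed later), a multiplicative-noise SDE dP = mP dt + sP dW can be integrated with the Euler-Maruyama scheme:

import numpy as np

# Euler-Maruyama integration of dP = m*P dt + s*P dW for an ensemble of walkers.
rng = np.random.default_rng(0)
m, s, dt, n_steps, n_walkers = 0.02, 0.1, 1.0, 200, 10_000
P = np.ones(n_walkers)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_walkers)   # Wiener increments
    P += m * P * dt + s * P * dW

# The solution is itself a random variable: summarize it by its first moments.
print("mean:", P.mean(), "std:", P.std())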
The broad and interdisciplinary field of complex systems essentially concerns the study of individual entities that interact and produce collective dynamics. It is therefore not a surprise that stochastic equations appear in this context, as we will verify for cities. In the first part of this article, we describe the most salient empirical features of urban populations: the population distribution and Zipf's law, and the dynamics of city ranks. We will then discuss recent empirical results that demonstrate the existence of important deviations from Zipf's law, which was thought to be universal. In a second part, we introduce the historical models for understanding Zipf's law: the Gibrat model of random growth and its limits, and then Gabaix' model, which is equivalent to a random walk with a reflecting barrier and which allows us to understand certain aspects of the population distribution. We note here that other models were proposed for explaining the emergence of a power law or other functions, ranging from simple stochastic processes to economic models with a large number of parameters. In <cit.>, the authors distinguish models with a constant number of cities and redistribution between these cities from other models that propose mechanisms for city growth and a varying number of cities (<cit.> and references therein). Here, we focus on the Gabaix model, as it is so far commonly considered the natural explanation for Zipf's law. As a digression, we will describe a general phenomenological stochastic equation that allows us to understand rank fluctuations in various systems and which displays an interesting noise-induced transition. In the last part of this paper, we will see that these models (Gibrat, Gabaix) are not able to explain the turbulent dynamics of cities or the observed deviations from Zipf's law. This will lead us to review the diffusion-with-noise equation discussed in another context (the condensation of wealth in a simple model of economy). In the city context, this suggests the critical importance of interurban migration flows for understanding the population statistics. Indeed, as we will show in the last part of the paper, by starting from first principles we can derive an equation, similar to the diffusion-with-noise equation, that describes the growth of cities. The derivation combines empirical arguments and the generalized central limit theorem. This stochastic equation with multiplicative noises predicts a complex shape for the distribution of city populations and shows that, owing to finite-time effects, Zipf's law does not hold in general, implying a more complex organization of cities. It also predicts the existence of multiple temporal variations in the city hierarchy, in agreement with observations. We will then end this paper with a general discussion and review some of the current challenges in this field. § EMPIRICAL STUDIES §.§ Rank-size plot and population distribution The first observation of a statistical regularity in the urban population distribution was made in 1913 by Felix Auerbach <cit.>, a German physicist interested in interdisciplinary studies. Auerbach first ranked the cities of Germany in decreasing order of population (rank 1 is the largest city, Berlin at that time, and the last rank N corresponds to the smallest city in the study). A given city then has a rank r and a population P, and Auerbach noticed that the product of these quantities is approximately constant, r× P≈const., where the constant for Germany was at that time ≈ 50.
This first observation was generalized by Zipf <cit.>, who ranked many items (such as the number of times a word appears in a book) and proposed to plot the score of an item versus its rank. This rank plot - now also called a Zipf plot - became a useful tool for the analysis of many complex systems. The relation Eq. <ref> means that the rank r scales as 1/P. More generally, we observe a power law P∼1/r^ν where ν was initially found to be approximately equal to 1 by Auerbach and Zipf <cit.>, but was found in subsequent studies to take various values <cit.> (see Fig. <ref> for an example of a Zipf plot with ν≠ 1). This Zipf relation Eq. <ref> is anything but trivial and has many consequences that have been discussed many times in the literature (see for example <cit.> and references therein). It implies that, for a given system (a country for example), if we know the population of a city (for example the largest one P_1), we can then determine the populations of all the others using P(r)=P_1/r^ν. For ν=1, the second largest city then has half the population of the largest one, etc. In addition, if we denote by ρ(P) the distribution of the population, the rank is approximately given by r(P)∼ N∫_P^P_maxρ(P)dP Indeed, for P=P_min we obtain r(P)=N, and for P close to P_max the integral is of order 1/N, so that r(P_max)≈ 1. From this expression, we then obtain for P_max≫ P that the distribution is also a power law ρ(P)∼1/P^μ with μ=1+1/ν. For the value ν≈ 1 initially observed by Auerbach, this relation implies that the population distribution has a fat tail decaying as ρ(P)∼ 1/P^2. Zipf's law also allows us to estimate how the size of the largest city or the number of cities in a country varies with the total population. Indeed, if we denote by N the number of cities and by W the total urban population in the country, we have the following relation W=∑_i=1^NP_i The exponent μ in Eq. (<ref>) is around 2, and precise values are either below or above this value. When the exponent is μ>2, the dominant behavior of Eq. (<ref>) is given by the usual central limit theorem, W∼ N, and the number of cities is thus proportional to W. When the exponent is smaller than 2, we have a sum of broadly distributed variables with a divergent average; the sum is then dominated by the largest term and behaves as (see for example <cit.>) W∼ N^1/(μ-1) which implies that the number of cities scales as N∼ W^μ-1. We thus see that in both cases the number of cities varies as N∼ W^ζ where ζ is either exactly or numerically close to one. Most of the empirical measurements used the generalized form Eq. <ref> and determined the so-called Zipf exponent ν. The original works of Auerbach and Zipf discussed essentially the case ν=1. Many measurements have been made since then, for various countries and different epochs, and most of these results were compiled in the study <cit.>. The resulting distribution of the Zipf exponent ν is shown in Fig. <ref>. We see in this figure that even if the value ν=1 is indeed the most probable, there is a non-negligible dispersion around this value. In addition, it must be noted that the Zipf plot is not always a clear power law and can display an important curvature (in log-log scale), signaling the emergence of more complex functions than just a simple power law. All these empirical results need to be explained, and this will be the goal of the theoretical analysis presented below.
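In practice, the exponent ν is estimated directly from such a rank-size plot. A minimal sketch of this measurement is given below; the population array and the choice of fitting range are ours, and, as discussed later in this paper, the fitted value depends on that choice.

import numpy as np

# populations: 1-D array of city populations for one country (assumed given).
P = np.sort(populations)[::-1]        # decreasing order: rank 1 = largest city
r = np.arange(1, len(P) + 1)

# Least-squares fit of log P = a - nu * log r over a chosen range of ranks.
r_min, r_max = 1, 200                 # fitting range (a choice that affects the estimate)
slope, a = np.polyfit(np.log(r[r_min - 1:r_max]), np.log(P[r_min - 1:r_max]), 1)
nu = -slope
print("estimated Zipf exponent nu:", nu)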
§.§ Dynamics of ranks: rises and falls of cities In addition to the existence of an apparently stationary distribution, the ranks of cities follow a turbulent dynamics: some cities rise while others fall and disappear. The dynamics of populations and of their ranks in the country can be visualized with the help of the `rank clocks' proposed in <cit.>. In these rank clocks, the radius is proportional to the rank (rank 1 has the largest radius) and the angle is proportional to time. We can then follow the trajectory of a city over time and see how its rank evolves. We show an example (Fig. <ref>) of such a rank clock for a few cities in France. In <cit.>, the geographer M. Batty showed that the micro-dynamics of cities is very turbulent and displays rises and falls of entire cities at many times and on many scales. As we will see in the next sections, this puts important constraints on possible models. The dynamics cannot be smooth but must somehow explain the appearance of shocks that can significantly modify the ranks of certain cities. § EXPLAINING ZIPF'S LAW WITH STOCHASTIC EQUATIONS Most of the first studies on Zipf's law focused on the case ν=1, where the rank r and the population P are inversely proportional, r∼ 1/P. We will here discuss one of the most important historical models, proposed by Gabaix <cit.>, which is able to explain this value of the Zipf exponent. We will first discuss the simplest stochastic model for describing the random growth of a city: the Gibrat model. §.§ The Gibrat model (proportionate growth) Proposed originally by Gibrat in 1931 <cit.> for explaining the growth of firms, this model assumes a random growth where the rate is a random variable, independent of the size of the firm. Transposed to the population P_i of city i, this reads P_i(t+1)=η_i(t)P_i(t) where the growth rates η_i(t) are positive, independent random variables, all distributed according to the same distribution f(η). We assume here that time is discretized and that t typically represents a year. If this relation is always valid, by taking the logarithm we obtain ln P_i(t)=ln P_i(0)+∑_τ=0^t-1lnη_i(τ) and, according to the central limit theorem (see for example <cit.>), the variable ln P is Gaussian, leading to a lognormal distribution whose variance var(ln P)∼ var(lnη) t diverges at large times. In this case, there is no steady state, and the resulting lognormal distribution is in disagreement with empirical observations. In order to reconcile random growth with the empirical observation of Zipf, it is necessary to introduce some sort of friction that ensures the existence of a steady state. This is the main point of the Gabaix model discussed in the next section. §.§ The Gabaix model (random walk with a reflecting barrier) In order to understand Zipf's law, Gabaix <cit.> proposed a variant of the Gibrat model based on the idea that small cities cannot shrink to zero. In other words, the process considered by Gabaix is a random walk with a lower reflecting barrier, which can, under some conditions, produce a power-law distribution of populations. There are two essential assumptions here. The first ingredient is the existence of a perfectly reflecting barrier which prevents cities from becoming too small. The second ingredient is that ⟨lnη⟩<0 (the brackets denote here and in the following the average over the distribution of η). Indeed, if this is not the case, the walk will escape to infinity and we recover the central limit theorem and a lognormal distribution for P.
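A few lines of simulation illustrate this last point: under pure Gibrat growth, with no barrier, var(ln P) grows linearly with time and no stationary size distribution appears. All parameter values below are illustrative.

import numpy as np

# Gibrat growth: P_i(t+1) = eta_i(t) * P_i(t) with i.i.d. positive growth rates.
rng = np.random.default_rng(1)
n_cities, n_years = 5_000, 200
logP = np.zeros(n_cities)                     # ln P_i(0) = 0
variances = []
for t in range(n_years):
    logP += rng.normal(0.01, 0.05, n_cities)  # ln(eta) ~ N(0.01, 0.05^2), illustrative values
    variances.append(np.var(logP))

# var(ln P) grows linearly with t: the lognormal keeps broadening, with no steady state.
print(variances[49], variances[99], variances[199])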
If, on the other hand, the walker is pushed towards zero (⟨lnη⟩<0), it will bounce on the reflecting barrier and create an effective flow towards large values. This interplay between a drift towards the barrier and the reflection at the barrier is what gives rise to a power law. This problem of a multiplicative noise with a reflecting barrier was solved by Levy and Solomon <cit.> and discussed in more detail by Sornette and Cont <cit.>. We first introduce w=P/⟨ P⟩, where ⟨ P⟩ is the average population. If we assume that dln⟨ P⟩/dt is negligible compared to the average growth rate, we obtain for w an equation of the form w(t+1)=η (t)w(t) where the average of η has been redefined accordingly. We change variables to x = ln w and ℓ =lnη and assume that the reflecting wall is at x_0 = ln w_0. In this process, the position of the walker at time t + 1 is given by (we also assume here that time is discretized) x(t+1)=max (x_0,x(t)+ℓ) where ℓ is the value of the increment (distributed according to π(ℓ)). This equation means that x(t+1) = x_0 if the jump brings the walker below the barrier, and that x(t+1) = x(t) + ℓ is the new position if it stays above the barrier. The walker will then be at x_0 with probability P(x=x_0)=∫_-∞^x_0dx∫_-∞^+∞P(x-ℓ)π(ℓ)dℓ where P(x-ℓ) is the probability to be at position x-ℓ and π(ℓ) is the probability to jump a distance ℓ. For x>x_0, we have P(x)=∫_-∞^+∞P(x-ℓ)π(ℓ)dℓ This is a Wiener-Hopf equation and solving it can be mathematically involved. Here, we proceed in a simplified way and assume an exponential ansatz e^-γ x (with γ>0) for the solution. Plugging this form into Eq. <ref> leads to the equation that γ must satisfy 1=∫e^γℓπ(ℓ)dℓ The stationary solution is then P(x)∝ e^-γ x which implies that the distribution of w is given by a power law of the form P(w)∼C/w^1+γ for w>w_0. This result was discussed by Sornette and Cont <cit.>, who proposed another derivation of it. In particular, they proposed the following intuitive explanation for the appearance of a power law: in the presence of a reflecting barrier, which prevents the random walk from escaping towards very small values, there is a continuous flow of particles that can sample the large positive values of x, leading to a broad tail for P(w). If we now write the two conditions ∫ P(w)dw=1 and ⟨ w⟩ =1, we obtain the relation γ = 1/(1-w_0) which shows that if w_0 is small enough, we have γ≈ 1. This framework thus justifies the Zipf law (and equivalently a power-law distribution for the population) with an exponent ν=1/γ≈ 1. The price to pay is the introduction of a minimal population and the condition ⟨lnη⟩ <0. These assumptions are a bit difficult to justify. In addition, as we will see in the part about the dynamics, even if the Gabaix model is able to explain some of the empirical features of the population distribution, it is not able to explain dynamical features such as large rank variations. We note here that this model is equivalent to the process discussed by Kesten <cit.>, which reads dw/dt=η(t)w(t)+ϵ(t) where ϵ(t)>0 is a positive noise. §.§ Digression: A phenomenological stochastic equation for the ranking dynamics The problem of the rank dynamics of cities is in fact more general, as we can rank approximately anything, and as such it constitutes an interesting problem with many applications. Indeed, we can consider many examples, such as the number of times individual words are used in published journals during a year, the daily market capitalization of companies, the number of diagnoses of a particular disease, or the number of annual citations each paper received in the Physical Review corpus, etc.
<cit.>. The dynamics of ranking was discussed phenomenologically in <cit.> and we will follow this analysis even if it was not designed specifically for cities. In the ranking problem, we have in general a list of N items with some score X_i(t) which determines their ranking (for cities X is their population). The highest score corresponds then to the first rank X_i=max{X_j}⇒ r_i=1 and the smallest score to the last rank r=N. The total number of times and the total score can vary and it is natural to consider the normalized score <cit.> x_i(t)=X_i(t)/∑_jX_j(t) In general, the distribution of x follow a broad law spanning many orders of magnitude. In some case, we observe a clear power law behavior, while for other there is cut-off for large x <cit.>. The dynamics can also be different with cases presenting a stationary law while others display time variations. Despite these differences, there are some `universal' features (see Fig. <ref>). For example, the dispersion of the change Δ x as a function of x follows the behavior σ_Δ x|x∼ x^β where β is in general smaller than 1. In order to discuss the dynamics of ranking, the authors of <cit.> assumed that the score of an item i follows a simplified Langevin equation of the form (already discussed in <cit.>) dx_i/dt=f(x_i)+g(x_i)ξ_i(t)-ϕ(t)x_i where f(x) represents the deterministic process that governs the score and captures many attributes such as the impact of research papers, the utility of a word, etc. The multiplicative noise term of the form g(x_i)ξ_i(t) captures the inherent randomness in the system. The noise ξ_i(t) is assumed to be gaussian with ⟨ξ_i(t)⟩=0 and ⟨ξ(t)ξ(t')⟩=δ(t-t')) and the noise amplitude g(x_i) can in general depend on the score x_i. Finally, the last term ϕ(t)x_i ensures that the scores are normalized ∑_ix_i(t)=1. If we sum over i Eq. <ref>, we find ∑_iẋ_i =0 =∑_if(x_i)+∑_ig(x_i)ξ_i(t)-ϕ(t)∑_ix_i which implies that ϕ(t) =∑_i[f(x_i)+g(x_i)ξ(t)] =ϕ_0+η(t) where the constant is ϕ_0=∑_if(x_i) and the global noise term is η(t)=∑_ig(x_i)ξ_i(t). The equation (<ref>) is obviously too general and the empirical data suggest some simplifications. First, the drift term f(x) can be written as f(x_i)=A_ix_i^α where it is assumed that the exponent 0<α<1 (in order to get a positive solution) is the same for all i. The prefactor A_i can be interpreted as the fitness of item i and captures its ability to increase its share x_i (the larger A_i and the larger the growth rate ẋ_i). The authors of <cit.> also assume that the noise amplitude behaves as g(x_i)=Bx_i^β which is suggested by the empirical measurements on the variance shown in Fig. <ref>(c,d) (indeed Eq. <ref> implies that σ^2_Δ x_i|x_i≈ g(x_i)^2Δ t). Empirically, the exponent β seems to be comparable for all systems and the amplitude B displays significative differences and varies from B≈ 10^-3 to 10^-1 <cit.>. The coefficient B is a measure of the noise magnitude and we expect that the stability of the system will be affected by its value. Indeed, if we denote by P(x_i,t|A_i), the probability that an item i has score x_i at time t given its fitness A_i, its temporal evolution is governed by the Fokker-Planck equation (in the Ito convention) ∂ P/∂ t=-∂/∂ x_i[(A_ix_i^α-ϕ(t)x_i)P]+1/2∂^2/∂ x_i^2(B^2x_i^2βP) This equation cannot be solved in the general case, but if we neglect fluctuations of ϕ(t)≈ϕ_0, the time independent steady-state solution P_0(x_i|A_i) of Eq. 
<ref> reads (up to a normalization constant) P_0(x_i|A_i)∝ x_i^-2βe^2A_i/B^2x_i^δ/δ[1-(x_i/x_c)^1/γ] where δ=1+α-2β, γ=1/(1-α), and x_c=(A_i/ϕ_0) ( δ+1/γ/δ)^γ The most probable value x_i^* of x_i satisfies dP/dx_i=0, which implies from Eq. <ref> F(x_i) = Ax_i^α-ϕ_0x_i-B^2β x_i^2β-1=0 For B=0, we obtain x_i^*=0 or x_i^*=(A_i/ϕ_0)^γ Writing ∑_ix_i^*=1 then gives ϕ_0=(∑_iA_i^γ)^1/γ. The stability of the solution is given by the sign of F'(x_i), and in this case it is easy to show that the non-zero solution is stable for A_i>0 and α<1. When there is noise (B≠ 0), it shifts the deterministic solution x_i^* to a new value. If the noise is not too strong (B<B_c), there is a non-zero stable solution, while the solution x_i=0 is unstable. In this low-noise case, the score of an item i will be localized around a value given by the interplay between its fitness and the noise. At B=B_c, the non-zero solution disappears, and for B>B_c the distribution behaves as P(x_i|A_i)∼ x_i^-2β and x_i has large fluctuations. In this case, x_i can take values very different from x_i^* and can vary over many orders of magnitude. This last discussion concerns the value of the score x_i, but knowing this value is not enough to determine the rank of item i. The rank indeed depends on the values of the other items and, as such, is a collective measure: the rank of i depends on the score x_i and also on the scores of all the other items j. An item can be `score-stable', with small fluctuations around x_i^*, but these fluctuations can still be large enough that items with similar x^* swap ranks. Score and rank stability can occur only in the small-fluctuation regime B<B_c, and the rank stability condition reads ⟨ x⟩_r-⟨ x⟩_r+1>σ_r where σ_r denotes the fluctuations of the score of an item at rank r and ⟨ x⟩_r(r+1) the average score value at rank r (r+1). This condition predicts a second critical value B_r of the noise coefficient. The resulting stability diagram is shown in Fig. <ref>. The top part (B>B_c) corresponds to the unstable region where the score is broadly distributed and neither score nor rank stability is possible. In the region B<B_c, we observe score stability, but as discussed above this is not enough to ensure rank stability. In the region B_r<B<B_c the score is stable but not the rank: each item has a score that fluctuates around its steady state x^*, determined by its intrinsic fitness. For lower noise, B<B_r, both scores and ranks are stable. These predictions are verified by the temporal profiles of ranks shown in Fig. <ref>(b) for each case. This simple model thus allows us to discuss the variety of behaviors resulting from the interplay between the intrinsic fitness of an item and the noise existing in the system. In the context of cities, we could well imagine that the fitnesses A_i can be very different, leading to various behaviors occurring simultaneously <cit.>. For large cities, the fitness could be very large and most of these cities would then be in the rank-stable region. For smaller cities with a smaller fitness, depending on the noise, we could be in different regions of the phase diagram of Fig. <ref>(a). Of course, this model is not justified from basic principles and should probably not be trusted in its details, but it has the virtue of connecting the fluctuations of scores and ranks. In this respect, it is a very useful guide for understanding the rank dynamics.
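To make the discussion concrete, the sketch below integrates a discretized version of Eq. <ref> for a small set of items and records their ranks over time; for small B the rank trajectories are essentially frozen, while for larger B they become turbulent. The discretization, the parameter values, and the exact way ϕ(t) is enforced are our own choices.

import numpy as np

# Euler (Ito) discretization of dx_i = (A_i x_i^alpha - phi x_i) dt + B x_i^beta dW_i,
# with the normalization sum_i x_i = 1 re-imposed at every step.
rng = np.random.default_rng(2)
N, alpha, beta, B, dt, n_steps = 100, 0.5, 0.6, 0.05, 0.01, 20_000
A = rng.uniform(0.5, 1.5, N)                 # intrinsic fitnesses (illustrative)
x = np.full(N, 1.0 / N)
rank_history = []
for _ in range(n_steps):
    drift = A * x**alpha
    noise = B * x**beta * rng.normal(0.0, np.sqrt(dt), N)
    x += (drift - drift.sum() * x) * dt + noise   # the -phi_0*x term keeps the sum close to 1
    x = np.clip(x, 1e-12, None)
    x /= x.sum()                                  # exact normalization (absorbs the noise part of phi)
    rank_history.append(np.argsort(np.argsort(-x)) + 1)   # rank 1 = largest score

# Increasing B moves the system from stable ranks to turbulent rank dynamics.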
There is however an important ingredient that this phenomenological model misses: the number of items is not fixed: for cities, new cities can emerge, others can disappear and there is a total net flux of new element in the ranking list. Indeed, ranking lists have typically a fixed size N_0 such as the top 100 universities or the Fortune 500 companies etc. The different items can then enter of leave the list at any time of the observation period (denoted here by t=0,1,…,T-2). For given values of N_0 and T it is then possible to introduce two measures of the flux <cit.>. In order to understand the dynamics of ranks in various systems, Iniguez et al. <cit.> proposed a simplified model that implements only two mechanisms: (i) random displacements of elements across the list, and (ii) random replacements of elements by new ones. A more precise discussion of this problem and modeling in the context of cities is missing and would be extremely interesting. More generally, the relation between scores and ranks needs still to be elucidated for general random dynamics. § INTEGRATING INTERURBAN MIGRATION As we saw in the empirical section, the Zipf exponent is not always equal to one, and the rank plot is not always a nice power law. These numerous and important deviations suggest that the Gabaix model proposed to explain this ν=1 behavior might miss some important ingredient. Here, we will follow another line, where we won't try to explain Zipf's law but rather try to construct the evolution equation for urban population starting from first principles. §.§ Diffusion with noise: The Bouchaud-Mezard model The `regularization' of the Gibrat model proposed by Gabaix relies on the assumption of a minimum size of cities which introduces some kind of friction and allows the system to converge to a steady state. We however know from history that cities can shrink, and can even disappear. The assumptions used in the Gabaix model are therefore difficult to defend. A different, simple way to regularize this behavior is to introduce exchanges between the different constituents of the system (which would then correspond to migration between different cities). In <cit.>, Bouchaud and Mezard wrote a stochastic dynamical equation in the case of the wealth W_i(t) of an agent i at time t that takes into account trading between individuals. The evolution equation they consider is the following <cit.> dW_i/dt=η_i(t)W_i(t)+∑_jI_jiW_j(t)-I_ijW_i(t) where the first term describes the spontaneous variation of wealth (due to investment for example), and the other term involving the matrix I_ij describes the amount of wealth that an agent j spends buying to agent i. In the context of cities, the first term is the natural demographic growth, while the second term represents inter-urban migrations (see Fig. <ref>). The random variables η_i(t) are assumed to be identically independent gaussian variables with the same mean m and variance given by 2σ^2. The flow (per unit time) between agents i and j is denoted by J_ij=I_ijW_j and for a general form of these couplings the solution of the equation <ref> is unknown. We can however discuss the simple case of the complete graph where all units are exchanging with all others at the same rate taken equal to I_ij=I/N where N is the number of agents (this scaling ensures a well-behaved large N limit). The equation for W_i becomes <cit.> dW_i/dt=I(W-W_i)+η_i(t)W_i where W(t)=∑_iW_i(t)/N. We see that the first term act as a homogenizing force towards the average W. 
All agents feel the same environment which shows the mean-field nature of this simplified case. Formally we can treat this equation as an equation for W_i subjected to a source W(t) and by integrating formally we obtain W_i(t)=W_i(0)e^∫_0^t(η_i(τ)-I)dτ+ I∫_0^tdτW(τ)e^∫_τ^t(η_i(τ')-I)dτ' The average quantity W is still unknown at this point and if we sum this equation over i we still can not solve it. However, if we assume that in the large N limit, the quantity W is self-averaging W≈_N→∞⟨W⟩ where the brackets ⟨·⟩ denote the average over the variable η, we have W(t)≃W(0)e^(m+σ^2-I)t+ I∫_0^tdτW(τ)e^(m+σ^2-I)(t-τ) (we assumed here that the initial conditions are independent from the η's). This equation can be easily solved (by Laplace transform for example) and we obtain W(t)=W(0)e^(m+σ^2)t In order to observe a stationary distribution we normalize W_i by W(t) and construct the variables w_i=W_i(t)/W(t). These quantities obey the following Langevin-type equation dw_i/dt=f(w_i)+g(w_i)δη_i(t) where f(w)=I(1-w)-σ^2w and g(w)=w (and δη=η-m). In order to write the Fokker-Planck equation for the distribution ρ(w), we have to give a prescription about the multiplicative noise. For this we have specify the correlations in the product and there are basically two main prescriptions, the Ito and Stratonovich ones. Bouchaud and Mezard used the Stratonovich prescription and obtained the Fokker-Planck equation for the distribution ρ(w) under the form ∂ρ/∂ t=-∂/∂ w[fρ]+σ^2∂/∂ w[g∂/∂ wgρ] The equilibrium distribution which satisfies ∂ρ/∂ t=0 leads to ρ(w)=1/Ne^-μ-1/w/w^1+μ where N is a normalization constant and where the exponent is given by μ=1+I/σ^2 This result implies a number of consequences. First, when I is small (and non zero), we observe that this regularization changes the lognormal distribution to a power law (at large w) with exponent between 1 and 2 (for I/σ^2 small). This distribution has a finite average ⟨ w⟩=1 but an infinite variance, signaling large fluctuations. In the case where we consider cities (instead of the wealth W_i we then have the population P_i of city i), this regularization thus provides a simple explanation for the diversity of exponents observed in various countries (see Fig. <ref> and <cit.>). The result is similar to what is obtained with the Gabaix model but with an exponent whose value depends on migration between cities. The diffusion model discussed here shows that the origin of the Zipf law lies in the interplay between internal random growth and exchanges between different cities (we note that this idea that interactions between individual is crucial for understanding Zipf's law was already suggested in <cit.>). An important consequence is that increasing mobility should actually increase μ and therefore reduces the heterogeneity of the city size distribution. Few words of caution are however needed here: first the relation Eq. (<ref>) should be tested empirically, and second, the important assumption that I_ij is constant is by no means obvious. §.§ From first principles to the growth equation of cities The Bouchaud-Mezard equation <ref> tells us that the distribution results from the interplay between random growth and exchanges between the units of the system. This leads to the idea that inter-urban migration plays a crucial role in shaping the population distribution. Instead of trying to explain Zipf's law, we then proceed differently and start from first principles <cit.>. More specifically, we can enumerate all the possible sources of population variations. 
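Before turning to that enumeration, the mean-field prediction μ=1+I/σ^2 can be checked numerically with a few lines of code; the discretization and all parameter values below are illustrative, and the tail exponent would in practice be measured on the simulated normalized sizes w.

import numpy as np

# Mean-field Bouchaud-Mezard dynamics: dW_i = I*(Wbar - W_i) dt + W_i dB_i,
# where dB_i is Gaussian with mean m*dt and variance 2*sigma^2*dt.
rng = np.random.default_rng(3)
N, I, m, sigma, dt, n_steps = 20_000, 0.05, 0.0, 0.2, 0.01, 20_000
W = np.ones(N)
for _ in range(n_steps):
    eta = rng.normal(m * dt, sigma * np.sqrt(2.0 * dt), N)
    W += I * (W.mean() - W) * dt + W * eta
w = W / W.mean()                      # normalized sizes

# The tail of the distribution of w should decay as w^(-1-mu) with mu = 1 + I/sigma^2.
print("predicted mu:", 1.0 + I / sigma**2)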
The dynamics of a system of cities (such as a country) can be decomposed into a sum of various terms, that we write under the following form ∂_t P_i=F_demo+F_internat+F_interurb This expresses the fact that the variation of the population has three main sources: the first one F_demo is the demographic growth which captures both births and deaths and that we can write under the form F_demo=η_i P_i where η_i is the demographic growth rate of city i. The second term F_internat describes international migrations (individuals coming from other countries to the city i). This term is usually small and we will integrate it in the growth rate η_i. The last term F_interurb describes interurban migrations, ie. individuals moving from a city to another and can be written as F_interub=∑_j∈ N(i)[J_j→ i-J_i→ j] where N(i) the set of neighbors of city i (ie. cities that exchange a non-zero number of inhabitants), and where J_j→ i is the migration rate from city j to city i. The time evolution of the population size P_i can then be written as ∂_t P_i=η_i P_i+∑_j∈ N(i)[J_j→ i-J_i→ j] We note that if there is an exact balance of migration flows J_i→ j=J_j→ i, the equation becomes equivalent to Gibrat’s model discussed above and which predicts a lognormal distribution of populations. We started from general considerations (as it amounts to write the balance of births, deaths and migrations), and ended up with this general stochastic equation <ref> which is the diffusion with noise equation discussed in the previous section. This equation is very general, and we have to specify the various terms. In particular the main difficulty is to find a reasonable simplified expression for the migration terms. The idea is then to characterize these different terms empirically. In <cit.>, we focused on four recent datasets of migrations (USA for 2012-2017, France for 2003-2008, UK for 2012-2016 and Canada for 2012-2016) and in <cit.>, measures were redone on other datasets confirming these results. We found that the quantity η_i is an uncorrelated random variable distributed according to a Gaussian with average r (the average growth rate) and variance σ (see Fig. <ref>). We also found in these datasets that for France and the US, the number of neighbors exchanging a non-zero number of individuals is N(i)∼ P_i^γ where γ≈ 0.4-0.5. For the UK and Canadian datasets (that are smaller), all pairs of cities exchange individuals between each other and we get γ=0 (see Fig. <ref>). The important and difficult term is the interurban migration sum. The migration flow J_i→ j depends a priori at least on the populations P_i and P_j, and the distance d_ij between cities i and j. Using the standard so-called `gravitational' model of the form <cit.> J_i→ j=KP_i^μ P_j^ν/d_ij^σ we showed that the dominant contribution to the migration flows comes from the populations and that the distance is a second-order effect. This is actually reasonable as distance is not a strong barrier when it comes to move from a location to a new one. This suggests that we can write the migration term under the following form <cit.> J_i→ j=I_0 P_i^ν ' P_j^ν x_ij where the random variables x_ij have an average equal to 1 and encode all the noise and multiple effects, including distance. The data also shows that we have ν'=ν and that on average there is a sort of detailed balance with J_i→ j=J_j→ i, but also and crucially, that there are fluctuations that can be large. 
More precisely, if we denote by X_ij=[J_i→ j-J_j→ i]/(I_0 P_i^ν) these random variables are distributed according to a broad law with heavy tails that decay as power law with exponent α <2. The sum in the second term of the r.h.s. of Eq. <ref> can then be rewritten as ∑_j ∈ N(i)J_j→ i-J_i→ j=I_0P_i^ν∑_j∈ N(i)X_ij This expression is a sum of random variables and its distribution can be found using central limit theorems <cit.>. There is however a subtlety here: the random variables are broadly distributed, and have an infinite second moment. The usual central limit theorem cannot be applied in this case and the limiting distribution is not the usual gaussian law. Instead, we have to use the generalized version of the central limit theorem <cit.> (and assuming that the correlations between the variables X_ij are negligible), that shows that the random variable ζ_i=1/N(i)^1/α∑_j∈ N(i) X_ij follows (for a large enough N(i)) a Lévy stable law L_α of parameter α and scale parameter s. Putting all these elements together, we are led to the conclusion that the growth of systems of cities is governed by a stochastic differential equation of a new type with two independent noises: ∂_t P_i=η_i P_i+DP_i^βζ_i where D=sI_0, β=ν+γ/α and where η_i is a gaussian noise of mean the average growth rate r and dispersion σ. Empirically, we observe that β<1 for all cases. Eq. <ref> is the stochastic equation of cities that governs the dynamics of large urban populations and which is our main result here. In this equation both noises η and ζ are uncorrelated, multiplicative and Ito’s convention <cit.> seems here to be the more appropriate as population sizes at time t are computed independently from inter-urban migrations terms at time t+dt. The central limit theorem together with the broadness of inter-urban migration flow allowed us to show that many details in Eq. <ref> are in fact irrelevant and that the dynamics can be described by the more universal Eq. <ref>. Starting from the exact equation <ref> is therefore not only doomed to failure in general, but is also not useful. The importance of migrations was already sensed in <cit.>, but the authors derived a stochastic differential equation with multiplicative Gaussian noise, which we show here to be incorrect: we have indeed a first multiplicative noise term but also crucially another term that is a multiplicative Lévy noise with zero average. This is a major and non-trivial theoretical shift that was missed in all previous studies on urban growth and which has many capital implications, both in understanding stationary and dynamic properties of cities. §.§.§ Distribution The equation (<ref>) governs the evolution of urban population and analyzing it at large times gives indications about the stationary distribution of cities. In order to discuss analytical properties of this equation, we assume that Gaussian fluctuations are negligible compared to the Lévy noise and write η_i≈ r. The corresponding Fokker-Planck equation (with Ito’s convention) can be solved approximately in the limit of large populations using the formalism of Fractional Order Derivatives and Fox functions <cit.>. We then obtain the general distribution at time t that can be expanded in powers of P as <cit.>: ρ(P,t)=∑_k=1^∞C_k(1/a(t)P)^αβ+α(1-β)k where C_k is a complicated prefactor independent from time and P and where a(t) decreases exponentially at large times. Eq. 
<ref> shows that there is a scaling law of the form ρ(P,t)=1/P F(P/P(t)) with a scaling function F that depends on the country only. We confirmed this scaling form for France (the only country for which we had sufficient data) and the resulting collapse is shown in Fig. <ref>. This good collapse confirms the validity of the expansion Eq. <ref>. This expansion Eq. <ref> allows us to discuss the validity of Zipf's law in general. Indeed, we observe that the population distribution is dominated at large P by the order k=1 and converges towards a power law with exponent α≠ 1. The speed of convergence towards this power-law can be estimated with the ratio λ(P,t) of the second over the first term of the expansion Eq. <ref> and reads λ (P,t)=D^α/r(P(t)/P)^α (1-β) where P(t) is the average city size. If λ (P)≥ 1, the power law regime with exponent α is not dominant (and cannot be directly observed). Estimates of α and β for the four datasets show that finite-time effects are very important in general and that a power-law regime is only reached for unrealistically large city sizes. For most cases, there is therefore no reason to observe a clear power law with a universal exponent, and a fortiori a universal Zipf's law of the form P∼ 1/r^ν. We can however explain why empirically such a Zipf's law for cities was apparently measured in many cases. If we perform a power law fit directly on the solution Eq. <ref>, we obtain an `effective' exponent value that depends on many details (such as the range over which the fit is made). We show on Fig. <ref> an example of such a fit over a range [S_min,S_max]. The possibility to have a good power law fit might explain why an apparent Zipf's law was observed for such a long time, and also why it is not universal. The effective exponent depends on many details and this might explain the large diversity of exponents collected in <cit.>. §.§.§ Dynamics When we observe these historical developments about Zipf's law, it is natural to think that it is not enough for an equation to explain the population distribution for demonstrating its validity. Further tests are needed, and in particular such an equation as Eq. <ref>, if correct, should also give information about the dynamics of cities over time. This can be done by following the populations and ranks of the system’s cities at different times with the help of rank clocks introduced in <cit.>. In this work, it was proven that the micro-dynamics of cities is very turbulent with many rises and falls of entire cities that cannot result from Gabaix’s model. In order to compare various models, we show in Fig. <ref> the empirical rank clock for France (from 1876 to 2015) and for numerical results obtained with Gabaix’s model and Eq. <ref>. We see that in Gabaix’s model Fig. <ref>(middle), city ranks are on average stable and not turbulent: the rank trajectories are concentric and the rank of a city oscillates around its average position. In the real dynamics (left), cities can emerge or die. Very fast changes in rank order can occur, leading to a much more turbulent behavior. In the model described by Eq. <ref>, the large fluctuations of Lévy’s noise are able to statistically reproduce such fast surges and swoops of cities (Fig. <ref>(right)). We can quantity this a bit more precisely <cit.>, For example, we compare the average shift per time d=1/NT∑_t∑_i=1^N r_i(t)-r_i(t-1) over T years and for N cities in the three cases, or study the statistical fluctuations of the rank. 
For all these measures, we find that Lévy fluctuations are much more able to reproduce the turbulent properties of the dynamics of cities through time. Indeed, the fast births or deaths of cities due for example to wars, discoveries of new resources, incentive settlement policies, etc. are statistically explained by broadly-distributed migrations and incompatible with a Gaussian noise. We can also compare with the empirical data the predictions of the different models for the time needed to make the largest rank jump (see Fig. <ref>). We observe that for France, there is typically a duration of order 80 years to make a very large rank jump. We confirm here that Gabaix’s model is unable to reproduce these very large fluctuations and that equation <ref> agrees very well with the data. § DISCUSSION AND PERSPECTIVES We reviewed different approaches based on stochastics differential equations for describing the evolution of urban populations (a more extensive discussion can be found in <cit.>). This problem has a long history and showcases some of the danger and pitfalls of stylized facts but also the importance of extensive datasets. Indeed, after the empirical observations of Auerbach and Zipf, most of the theoretical approaches focused on the problem how to explain a Zipf exponent equal to one. Gibrat's model couldn't explain the appearance of a stationary distribution, implying the necessity of introducing some friction. This is what the Gabaix model did with a simple mechanisms that produces indeed a Zipf exponent equal to one (or equivalently a power law distribution for the population with exponent equal to 2). Other models were then proposed and specifically designed to produce a Zipf exponent equal to one. However the recent availability of massive datasets, for many countries and many time periods showed that the Zipf exponent is not universal and could be very different from one country to another. This is what actually forced us to reconsider this problem but with another perspective: instead of trying to explain the Zipf result, we tried to construct the evolution equation starting from basic principles. This led to a stochastic equation of growth for cities that is empirically sound and challenges Zipf's law and current models of urban growth. This equation <ref> of a new type with two sources of noise predicts an asymptotic power-law regime, but this stationary regime is not reached in general and finite-time effects cannot be discarded. This implies that in general, the Zipf plot is not a power law and a power law fit gives an effective exponent that depends on many details of the system and can vary a lot from a case to another. In other words, Zipf's law for cities does not hold in general. In addition, and in contrast with most existing models, this equation is also able to statistically reproduce the turbulent micro-dynamics of cities with fast births and deaths. In the derivation of this equation, we showed that microscopic details of interurban migrations are irrelevant and the growth equation obtained is universal. A crucial point in this reasoning is that although we have on average some sort of detailed balance that would lead to a Gaussian multiplicative growth process, it is the existence of non-universal and broadly distributed fluctuations of the microscopic migration flows between cities that govern the statistics of city populations. 
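As a simple numerical illustration of the growth equation Eq. <ref> (this is our own crude, discrete-time integration with illustrative parameter values, including an arbitrary floor that keeps populations positive, and not the procedure used to produce the results discussed above):

import numpy as np
from scipy.stats import levy_stable

# One step per year of  P_i(t+1) = P_i(t)*(1 + eta_i) + D * P_i^beta * zeta_i,
# with eta_i Gaussian (demography) and zeta_i Levy-stable with index alpha < 2 (migration shocks).
n_cities, n_years = 2_000, 150
r, sigma, D, beta, alpha = 0.01, 0.03, 0.5, 0.35, 1.4
P = np.random.lognormal(10.0, 1.0, n_cities)
for _ in range(n_years):
    eta = np.random.normal(r, sigma, n_cities)
    zeta = levy_stable.rvs(alpha, 0.0, size=n_cities)             # symmetric heavy-tailed noise
    P = np.maximum(P * (1.0 + eta) + D * P**beta * zeta, 1.0)     # floor: populations stay positive

Rank trajectories generated in this way display the occasional sudden rises and collapses of individual cities that a purely Gaussian multiplicative noise cannot reproduce.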
This result exhibits an interesting connection between the behavior of complex systems and non-equilibrium statistical physics for which microscopic currents and the violation of detailed balance seem to be the rule rather than the exception <cit.>. At a practical level, this result also highlights the critical effect of not only inter-urban migration flows (an ingredient that is generally not considered in urban planning theories), but more importantly their large fluctuations, ultimately connected to the capacity of a city to attract a large number of new citizens. From the urban science perspective, the next frontier is to describe the growth of cities, not from the population point of view only, but in spatial terms. The surface growth equation for cities is still missing and the recent availability of remote sensing data for many cities will certainly trigger interesting results in the future. There are many other questions that are left open. In particular, the spatial structure of these flows displays interesting features for the US<cit.> and it would be interesting to investigate many countries and different periods. Also, we discussed a phenomenological approach to the dynamics of ranking. This is certainly an interesting research direction. In particular, if the dynamics of scores of items are governed by a given equation, we generally don't know what it implies for the ranking dynamics. In other words, the mapping from the equation for scores to the equation for ranks is in general missing. § BIBLIOGRAPHY 99 Batty:2013 M. Batty, The new science of cities. MIT press; 2013. Barthelemy:2016 M. Barthelemy, The structure and dynamics of cities, Cambridge, UK: Cambridge University Press, 2016. Bettencourt:2021 L.M. Bettencourt, Introduction to urban science: evidence and theory of cities as complex systems. MIT Press 2021. Pumain:2004 D. Pumain, (2004) Scaling laws and urban systems. Louail:2022 T. Louail, M. Barthelemy, `A dominance tree approach to systems of cities, ' Computers, Environment and Urban Systems, Volume 97, 101856 (2022) Rozen:2011 H.D. Rozenfeld, D. Rybski, X. Gabaix, H.A. Makse, `The area and population of cities: New insights from a different perspective on cities.' American Economic Review. 2011 Aug;101(5):2205-5. Henderson:1974 J.V. Henderson, `The sizes and types of cities. ' The American Economic Review. 1974 Sep 1;64(4):640-56. Zipf G.K. Zipf, Human behavior and the principle of least effort: An introduction to human ecology. Ravenio Books, 2016. Auerbach F. Auerbach, `Das gesetz der bevölkerungskonzentration.' Petermanns Geographische Mitteilungen 59 (1913): 74-76 (in German, translation available at <https://www.vwl.uni-mannheim.de/media/Lehrstuehle/vwl/Ciccone/auerbach_1913_translated_with_introduction_March_2021.pdf>). Arshad S. Arshad, S. Hu and B. N. Ashraf, `Zipf’s law and city size distribution: A survey of the literature and future research agenda,' Physica A: Statistical Mechanics and its Applications, vol. 492, pp. 75-92, 2018. Singer H. Singer, `The `Courbe des populations'. A parallel to Pareto’s law,' Econ. J., vol. 46, p. 254–263, 1936. Krugman P. Krugman, `Confronting the mystery of urban hierarchy,' J. Japan. Int. Econ., vol. 10, p. 399–418, 1996. Ioannides Y. Ioannides and H. Overman, `Zipf’s law for cities: an empirical examination,' Reg. Sci. Urban Econ., vol. 33, p. 127–137, 2003. Eeckhout J. Eeckhout, `Gibrat’s law for (all) cities,' Am. Econ. Rev., vol. 94, p. 1429–1451, 2004. Rossi E. Rossi-Hansberg and M. 
Wright, `Urban structure and growth,' Rev. Econ. Stud., vol. 74, p. 597–624, 2007. Cordoba J.-C. Còrdoba, `On the distribution of city sizes,' J. Urban Econ., vol. 63, p. 177–197, 2008. Favaro J. M. Favaro and D. Pumain, `Gibrat Revisited: An Urban Growth Model Incorporating Spatial Interaction and Innovation Cycles,' Geographical Analysis, vol. 43, pp. 261-286, 2011. Duranton G. Duranton and D. Puga, `The growth of cities,' in Handbook of economic growth (Vol. 2), Elsevier, 2014, pp. 781-853. Gabaix X. Gabaix, `Zipf's law for cities: an explanation.' The Quarterly journal of economics 114.3 (1999): 739-767. Pumain D. Pumain and F. Moriconi-Ebrard, `City size distributions and metropolisation,' Geojournal, vol. 43, pp. 307-314, 1997. Coro B. Corominas-Murtra and R. Solé, `Universality of Zipf’s law,' Phys. Rev. E, vol. 82, p. 011102, 2010. Benguigui L. Benguigui and E. Blumenfeld-Lieberthal, `A dynamic model for city size distribution beyond Zipf's law,' Physica A: Statistical Mechanics and its Applications, Vols. 384(2),., pp. 613-627, 2007. Blank A. Blank and S. Solomon, `Power laws in cities population, financial markets and internet sites (scaling in systems with a variable number of components),' Physica A: Statistical Mechanics and its Applications, Vols. 287(1-2), pp. 279-288, 2000. Zanette D.H. Zanette and S. C. Manrubia, `Role of intermittency in urban development: a model of large-scale city formation,' Physical Review Letters, vol. 79, p. 523, 1997. Marsili Marsili, Matteo, and Yi-Cheng Zhang. `Interacting individuals leading to Zipf's law.' Physical review letters 80.12 (1998): 2741. Soo K. T. Soo, `Zipf's Law for cities: a cross-country investigation,' Regional science and urban Economics, vol. 35, pp. 239-263, 2005. Gan L. Gan, D. Li and S. Song, `Is the Zipf law spurious in explaining city-size distributions?,' Economics Letters, vol. 92, pp. 256-262, 2006. Benguigui2 L. Benguigui and E. Blumenfeld-Lieberthal, `The end of a paradigm: is Zipf’s law universal?,' Journal of geographical systems, pp. 13(1), 87-100, 2011. Cottineau C. Cottineau, `MetaZipf. A dynamic meta-analysis of city size distributions.' PloS one 12.8 (2017): e0183919. Sornette1 J. Laherrere, D.Sornette, `Stretched exponential distributions in nature and economy:fat tails with characteristic scales', The European Physical Journal B-Condensed Matter and Complex Systems, 2, 525-539, 1998. Batty:2006 M. Batty, `Rank clocks,' Nature, vol. 444, p. 592–596, 2006. VanKampen N.G. Van Kampen, N. G. (1976). Stochastic differential equations. Physics reports, 24(3), 171-228. Bernt B. Øksendal, `Stochastic differential equations.' Stochastic differential equations. Springer, Berlin, Heidelberg, 2003. 65-84. Bouchaud:1990 J.-P. Bouchaud, A. Georges, `Anomalous diffusion in disordered media: statistical mechanisms, models and physical applications.' Physics reports 195.4-5 (1990): 127-293. Gibrat R. Gibrat, `Les inégalits économiques.' Sirey (1931). Fisher H. Fischer, A history of the central limit theorem: from classical to modern probability theory. New York: Springer, 2011. Kesten H. Kesten, `Random difference equations and renewal theory for products of random matrices.' Acta Mathematica 131 (1973): 207-248. Levy M. Levy, S. Solomon, `Power laws are logarithmic Boltzmann laws.' International Journal of Modern Physics C 7.04 (1996): 595-601. Sornette D. Sornette, R. Cont. `Convergent multiplicative processes repelled from zero: power laws and truncated power laws.' Journal de Physique I 7.3 (1997): 431-444. Blumm N. 
Blumm, G. Ghoshal, Z. Forró, M. Schich, G. Bianconi, J.-P. Bouchaud, A.-L. Barabási, `Dynamics of ranking processes in complex systems.' Physical review letters. 2012 Sep 17;109(12):128701. Iniguez G. Iñiguez, C. Pineda, C. Gershenson, A.-L. Barabási, `Dynamics of ranking. ' Nature communications. 2022 Mar 28;13(1):1-7. BouchaudMezard J.-P. Bouchaud, M. Mézard, `Wealth condensation in a simple model of economy.' Physica A: Statistical Mechanics and its Applications 282.3-4 (2000): 536-545. Verbavatz V. Verbavatz, M. Barthelemy, `The growth equation of cities.' Nature 587.7834 (2020): 397-401. Satish S.M. Reia, S.C. Rao, M. Barthelemy, S.V. Ukkusuri, `Spatial structure of city population growth.' Nature Communications 13.1 (2022): 1-10. Erlander S. Erlander and N. F. Stewart, The gravity model in transportation analysis: theory and extensions, Vsp, 1990. Gnedenko B. V. Gnedenko and A. N. Kolmogorov, Limit distributions for sums of independent random variables, Cambridge, MA (USA): Addison-Wesley, 1954. Bettencourt L. Bettencourt and D. Zünd, `Demography, Symmetry and the Emergence of Universal Patterns in Urban Systems,' Nature communications, vol. 11, pp. 1-9, 2020. VanKampen2 N.G. van Kampen, `Itô versus Stratonovich,' J. Stat. Phys., vol. 24, p. 175–187, 1981. Srokowski T. Srokowski, `Multiplicative Lévy processes: Itô versus Stratonovich interpretation,' Physical Review E, vol. 80, p. 051113, 2009. Jespersen S. Jespersen, M. R. and H. C. Fogedby, `Lévy flights in external force fields: Langevin and fractional Fokker-Planck equations and their solutions,' Physical Review E, vol. 59, p. 2736, 1999. Schertzer D. Schertzer, M. Larchevêque, J. Duan, V. Yanovsky and S. Lovejoy, `Fractional Fokker–Planck equation for nonlinear stochastic differential equations driven by non-Gaussian Lévy stable noises,' Journal of Mathematical Physics, vol. 42, pp. 200-202, 2001. Fox C. Fox, `The G and H functions as symmetrical Fourier kernels,' vol. 98(3), 1961. BV M. Barthelemy and V. Verbavatz, Statistics and dynamics of urban populations, Oxford University Press, to appear (2023) Bouchaud J.-P. Bouchaud, `Crises and collective socio-economic phenomena: simple models and challenges,' Journal of Statistical Physics, vol. 151, pp. 567-606, 2013.
http://arxiv.org/abs/2307.07539v1
20230714135611
Improved Self-Normalized Concentration in Hilbert Spaces: Sublinear Regret for GP-UCB
[ "Justin Whitehouse", "Zhiwei Steven Wu", "Aaditya Ramdas" ]
cs.LG
[ "cs.LG", "math.ST", "stat.ML", "stat.TH" ]
Improved Self-Normalized Concentration in Hilbert Spaces: Sublinear Regret for GP-UCB Justin Whitehouse, Zhiwei Steven Wu, Aaditya Ramdas August 12, 2023 ======================================================= In the kernelized bandit problem, a learner aims to sequentially compute the optimum of a function lying in a reproducing kernel Hilbert space given only noisy evaluations at sequentially chosen points. In particular, the learner aims to minimize regret, which is a measure of the suboptimality of the choices made. Arguably the most popular algorithm is the Gaussian Process Upper Confidence Bound (GP-UCB) algorithm, which involves acting based on a simple linear estimator of the unknown function. Despite its popularity, existing analyses of GP-UCB give a suboptimal regret rate, which fails to be sublinear for many commonly used kernels such as the Matérn kernel. This has led to a longstanding open question: are existing regret analyses for GP-UCB tight, or can bounds be improved by using more sophisticated analytical techniques? In this work, we resolve this open question and show that GP-UCB enjoys nearly optimal regret. In particular, our results directly imply sublinear regret rates for the Matérn kernel, improving over the state-of-the-art analyses and partially resolving a COLT open problem posed by Vakili et al. Our improvements rely on two key technical results. First, we use modern supermartingale techniques to construct a novel, self-normalized concentration inequality that greatly simplifies existing approaches. Second, we address the importance of regularizing in proportion to the smoothness of the underlying kernel k. Together, these new technical tools enable a simplified, tighter analysis of the GP-UCB algorithm. § INTRODUCTION An essential problem in areas such as econometrics <cit.>, medicine <cit.>, optimal control <cit.>, and advertising <cit.> is to optimize an unknown function given bandit feedback, in which algorithms only get to observe the outcomes for the chosen actions. Due to the bandit feedback, there is a fundamental tradeoff between exploiting what has been observed about the local behavior of the function and exploring to learn more about the function's global behavior. There has been a long line of work on bandit learning that investigates this tradeoff across different settings, including multi-armed bandits <cit.>, linear bandits <cit.>, and kernelized bandits <cit.>. In this work, we focus on the kernelized bandit framework, which can be viewed as an extension of the well-studied linear bandit setting to an infinite-dimensional reproducing kernel Hilbert space (or RKHS) (H, ⟨·, ·⟩_H). In this problem, there is some unknown function f^∗ : → of bounded norm in H, where ⊂^d is a bounded set. In each round t ∈ [T], the learner uses previous observations to select an action X_t ∈, and then observes feedback Y_t := f^∗(X_t) + ϵ_t, where ϵ_t is a zero-mean noise variable. The learner aims to minimize (with high probability) the regret at time T, which is defined as R_T := ∑_t = 1^T f^∗(x^∗) - f^∗(X_t), where x^∗ := argmax_x ∈ f^∗(x). The goal is to develop simple, efficient algorithms for the kernelized bandit problem that minimize the regret R_T. We make the following standard assumption; we also make assumptions on the underlying kernel k, which we discuss in Section <ref>. We assume that (a) there is some constant D > 0 known to the learner such that f^∗_H ≤ D and (b) for every t ≥ 1, ϵ_t is σ-subGaussian conditioned on σ(Y_1:t - 1, X_1:t). 
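As a toy illustration of this protocol, the following sketch simulates the interaction loop and the regret bookkeeping. The unknown function is taken to be a finite kernel expansion (so it lies in the RKHS of the chosen kernel); the kernel, noise level, discretized action set, and the placeholder uniform-exploration policy are all illustrative assumptions — a GP-UCB-style policy is sketched later, after the main theorem.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(x, y, ell=0.2):
    return np.exp(-0.5 * (x - y) ** 2 / ell ** 2)

# Toy f*: a finite kernel expansion, hence a member of the RKHS of `rbf`.
centers = np.array([0.15, 0.5, 0.8])
weights = np.array([0.7, -0.4, 1.0])
f_star = lambda x: sum(w * rbf(x, c) for w, c in zip(weights, centers))

grid = np.linspace(0.0, 1.0, 200)             # discretized action set
x_opt = grid[np.argmax(f_star(grid))]
sigma = 0.1                                   # sub-Gaussian (here Gaussian) noise level

def run(policy, T=100):
    history, regret = [], 0.0
    for t in range(T):
        x_t = policy(history, grid)
        y_t = f_star(x_t) + sigma * rng.standard_normal()
        history.append((x_t, y_t))
        regret += f_star(x_opt) - f_star(x_t)  # R_T accumulates the suboptimality gaps
    return regret

# Placeholder policy: uniform exploration over the grid.
print("cumulative regret:", round(float(run(lambda hist, actions: rng.choice(actions))), 3))
```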
Arguably the simplest algorithm for the kernelized bandit problem is GP-UCB (Gaussian process upper confidence bound) <cit.>. GP-UCB works by maintaining a kernel ridge regression estimator of the unknown function f^∗ alongside a confidence ellipsoid, optimistically selecting in each round the action that provides the maximal payoff over all feasible functions. Not only is GP-UCB efficiently computable thanks to the kernel trick, but it also offers strong empirical guarantees <cit.>. The only seeming deficit of GP-UCB is its regret guarantee, as existing analyses only show that, with high probability, R_T = O(γ_T√(T)), where γ_T is a kernel-dependent measure of complexity known as the maximum information gain <cit.>. In contrast, more complicated, less computationally efficient algorithms such as SupKernelUCB <cit.> have been shown to obtain regret bounds of O(√(γ_TT)), improving over the analysis of GP-UCB by a multiplicative factor of √(γ_T). This gap is stark as the bound O(γ_T√(T)) fails, in general, to be sub-linear for the practically relevant Matérn kernel, whereas O(√(γ_T T)) is sublinear for any kernel experiencing polynomial eigendecay <cit.>. This discrepancy has prompted the development of many variants of GP-UCB that, while less computationally efficient, offer better, regret guarantees in some situations <cit.>. (See a detailed discussion of these algorithms along with other related work in Appendix <ref>.) However, the following question remains an open problem in online learning <cit.>: are existing analyses of vanilla GP-UCB tight, or can an improved analysis show GP-UCB enjoys sublinear regret? §.§ Contributions In this work, we show that GP-UCB obtains almost optimal, sublinear regret for any kernel experiencing polynomial eigendecay. This, in particular, implies for the first time that GP-UCB obtains sublinear regret for the commonly used Matérn family of kernels. In more detail, our two main contributions are as follows. * In Section <ref>, we construct a novel, time-uniform confidence bound for controlling the growth of self-normalized processes in separable Hilbert spaces. As opposed to the existing bound of <cit.>, which involves employing a complicated “double mixture” argument, our bound follows directly from applying the well-studied finite-dimensional method of mixtures alongside a simple truncation argument <cit.>. These bounds are clean and show simple dependence on the regularization parameter. We believe our bounds and proofs may be applicable in (or easily adapted to) other kernelized learning problems as well. * In Section <ref>, we use our new concentration inequalities to provide an improved regret analysis for GP-UCB. By carefully choosing regularization parameters based on the smoothness of the underlying kernel, we demonstrate that GP-UCB enjoys sublinear regret of O(T^3 + β/2 + 2β) for any kernel experiencing (C, β)-polynomial eigendecay. As a special case of this result, we obtain regret bounds of O(T^ν + 2d/2ν + 2d) for the commonly used Matérn kernel with smoothness ν in dimension d. Our new analysis improves over existing state-of-the-art analysis for GP-UCB, which fails to guarantee sublinear regret in general for the Matérn kernel family <cit.>, and thus partially resolves an open problem posed by <cit.>. In sum, our results show that GP-UCB, the go-to algorithm for the kernelized bandit problem, is nearly optimal, coming close to the algorithm-independent lower bounds of <cit.>. 
Our work thus can be seen as providing theoretical justification for the strong empirical performance of GP-UCB <cit.>. Perhaps the most important message of our work is the importance of careful regularization in online learning problems. While many existing bandit works treat the regularization parameter as a small, kernel-independent constant, we are able to obtain significant improvements by carefully selecting the regularization parameter. We hope our work will encourage others to pay close attention to the selection of regularization parameters in future works. § BACKGROUND AND PROBLEM STATEMENT Notation. We briefly touch on basic definitions and notational conveniences that will be used throughout our work. If a_1, …, a_t ∈, we let a_1:t := (a_1, …, a_t)^⊤. Let (H, ⟨·, ·⟩_H) be a reproducing kernel Hilbert space associated with a kernel k : ×→. We refer to the identity operator on H as 𝕀_H. This is distinct from the identity mapping on ^d, which we will refer to as I_d. For elements f, g ∈ H, we define their outer product as fg^⊤ := f⟨ g, ·⟩_H and inner product as f^⊤ g := ⟨ f, g⟩_H. For any t ≥ 1 and sequence of points x_1, …, x_t ∈ (which will typically be understood from context), let Φ_t := (k(·, x_1), …, k(·, x_t))^⊤. We can respectively define the Gram matrix K_t : ^t →^t and covariance operator V_t : H → H as K_t := (k(x_i, x_j))_i, j ∈ [t] = Φ_tΦ_t^⊤ and V_t := ∑_s = 1^t k(·, x_s)k(·, x_s)^⊤ = Φ_t^⊤Φ_t. These two operators essentially encode the same information about the observed data points, the former being easier to work with when actually performing computations (by use of the well known kernel trick) and latter being easier to algebraically manipulate. Suppose A : H → H is a Hermitian operator of finite rank; enumerate its non-zero eigenvalues as λ_1(A), …, λ_k(A). We can define the Fredholm determinant of I + A as (I + A) := ∏_m = 1^k(1 + λ_i(A)) <cit.>. For any t ≥ 1, ρ > 0, and x_1, …, x_t ∈, one can check via a straightforward computation that (I_t + ρ^-1K_t) = (𝕀_H + ρ^-1V_t), where K_t and V_t are the Gram matrix and covariance operator defined above. We, again, will use these two quantities interchangeably in the sequel, but will typically prefer the latter in our proofs. If (H, ⟨·, ·⟩_H) is a (now general) separable Hilbert space and (φ_n)_n ≥ 1 is an orthonormal basis for H, for any N ≥ 1 we can define the orthogonal projection operator π_N : H →{φ_1, …, φ_N}⊂ H by π_N f := ∑_n = 1^N ⟨ f, φ_n⟩_H φ_n. We can correspondingly the define the projection onto the remaining basis functions to be the map π_N^⊥ : H →{φ_1, …, φ_N}^⊥ given by π_N^⊥ f := f - π_N f. Lastly, if A : H → H is a symmetric, bounded linear operator, we let λ_max(A) denote the maximal eigenvalue of A, when such a value exists. In particular, λ_max(A) will exist whenever A has a finite rank, as will typically be the case considered in this paper. Basics on RKHSs. Let ⊂^d be some domain. A kernel is a positive semidefinite map k : ×→ that is square-integrable, i.e. ∫_∫_ |k(x, y)|^2dxdy < ∞. Any kernel k has an associated reproducing kernel Hilbert space or RKHS (H, ⟨·, ·⟩_H) containing the closed span of all partial kernel evaluations k(·, x), x ∈. In particular, the inner product ⟨·, ·⟩_H on H satisfies the reproducing relationship f(x) = ⟨ f, k(·, x)⟩_H for all x ∈. A kernel k can be associated with a corresponding Hilbert-Schmidt operator, which is the Hermitian operator T_k : L^2() → L^2() given by (T_kf)(x) := ∫_ f(y)k(x, y)dy for any x ∈. 
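The determinant identity (I_t + ρ^-1K_t) = (𝕀_H + ρ^-1V_t) noted in the notation paragraph above is easy to sanity-check numerically in the finite-dimensional case, where it reduces to the Weinstein–Aronszajn (Sylvester) identity; the explicit t × d feature matrix below is only a stand-in for Φ_t, and the dimensions and ρ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Finite-dimensional stand-in: K_t = Phi Phi^T (Gram matrix, t x t) and
# V_t = Phi^T Phi (covariance operator, d x d) share the same non-zero spectrum,
# so det(I_t + K_t / rho) = det(I_d + V_t / rho).
t, d, rho = 7, 4, 0.5
Phi = rng.standard_normal((t, d))
K = Phi @ Phi.T
V = Phi.T @ Phi

lhs = np.linalg.det(np.eye(t) + K / rho)
rhs = np.linalg.det(np.eye(d) + V / rho)
print(lhs, rhs)                 # agree up to floating-point error
assert np.isclose(lhs, rhs)
```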
In short, T_k can be thought of as “smoothing out” or “mollifying” a function f according to the similarity metric induced by k. T_k plays a key role in kernelized learning through Mercer's Theorem, which gives an explicit representation for H in terms of the eigenvalues and eigenfunctions of T_k. [Mercer's Theorem] Let (H, ⟨·, ·⟩_H) be the RKHS associated with kernel k, and let (μ_n)_n ≥ 1 and (ϕ_n)_n ≥ 1 be the sequence of non-increasing eigenvalues and corresponding eigenfunctions for T_k. Let (φ_n)_n ≥ 1 be the sequence of rescaled functions φ_n := √(μ)_n ϕ_n. Then, H = {∑_n = 1^∞θ_n φ_n : ∑_n = 1^∞θ_n^2 < ∞}, and (φ_n)_n ≥ 1 forms an orthonormal basis for (H, ⟨·, ·⟩_H). We make the following assumption throughout the remainder of our work, which is standard and comes from <cit.>. [Assumption on kernel k] The kernel k : ×→ satisfies (a) |k(x, y)| ≤ L for all x, y ∈, for some constant L > 0 and (b) |ϕ_n(x)| ≤ B for all x ∈, for some B > 0. “Complexity” of RKHS's. By the eigendecay of a kernel k, we really mean the rate of decay of the sequence of eigenvalues (μ_n)_n ≥ 1. In the literature, there are two common paradigms for studying the eigendecay of k: (C_1, C_2, β)-exponential eigendecay, under which ∀ n ≥ 1, μ_n ≤ C_1exp(-C_2n^β), and (C, β)-polynomial eigendecay, under which ∀ n ≥ 1, μ_n ≤ Cn^-β. For kernels experiencing exponential eigendecay, of which the squared exponential is the most important example, GP-UCB is known to be optimal up to poly-logarithmic factors. However, for kernels experiencing polynomial eigendecay, of which the Matérn family is a common example, existing analyses of GP-UCB fail to yield sublinear regret. It is this latter case we focus on in this work. Given the above representation in Fact <ref>, it is clear that the eigendecay of the kernel k governs the “complexity” or “size” of the RKHS H. We make this notion of complexity precise by discussing maximum information gain, a sequential, kernel-dependent quantity governing concentration and hardness of learning in RKHS's <cit.>. Let t ≥ 1 and ρ > 0 be arbitrary. The maximum information gain at time t with regularization ρ is the scalar γ_t(ρ) given by γ_t(ρ) := sup_x_1, …, x_t ∈1/2log(𝕀_H + ρ^-1V_t) = sup_x_1, …, x_t ∈1/2log(I_t + ρ^-1K_t). Our presentation of maximum information gain differs from some previous works in that we encode the regularization parameter ρ into our notation. This inclusion is key for our results, as we obtain improvements by carefully selecting ρ. <cit.> bound the rate of growth of γ_t(ρ) in terms of the rate of eigendecay of the kernel k. We leverage the following fact in our main results. [Corollary 1 in <cit.>] Suppose that kernel k satisfies Assumption <ref> and experiences (C, β)-polynomial eigendecay. Then, for any t ≥ 1, we have γ_t(ρ) ≤((CB^2t/ρ)^1/βlog^-1/β(1 + Lt/ρ) + 1)log(1 + Lt/ρ). We last define the practically relevant Matérn kernel and discuss its eigendecay. The Matérn kernel with bandwidth σ > 0 and smoothness ν > 1/2 is given by k_ν, σ(x, y) := 1/Γ(ν)2^ν - 1(√(2ν)x - y_y/σ)^νB_ν(√(2ν)x-y_2/σ), where Γ is the gamma function and B_ν is the modified Bessel function of the second kind. It is known that there is some constant C > 0 that may depend on σ but not on d or ν such that k_ν, σ experiences (C, 2ν + d/d)-eigendecay <cit.>. Basics on martingale concentration: A filtration (_t)_t ≥ 0 is a sequence of σ-algebras satisfying _t ⊂_t + 1 for all t ≥ 1. 
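To get a feel for the maximum information gain defined above, the sketch below evaluates 1/2 log(I_t + ρ^-1K_t) for randomly drawn points under a Matérn kernel with ν = 3/2, written in closed form so no extra libraries are needed. Since γ_t(ρ) is a supremum over point sets, random points only give a lower bound, but they already show the slow growth in t and the effect of ρ; the one-dimensional domain, length scale, and sample sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def matern32(X, Y, length_scale=0.3):
    """Matern kernel with smoothness nu = 3/2: k(r) = (1 + sqrt(3) r / l) exp(-sqrt(3) r / l)."""
    r = np.abs(np.asarray(X)[:, None] - np.asarray(Y)[None, :])
    z = np.sqrt(3.0) * r / length_scale
    return (1.0 + z) * np.exp(-z)

def information_gain(x, rho):
    K = matern32(x, x)
    _, logdet = np.linalg.slogdet(np.eye(len(x)) + K / rho)
    return 0.5 * logdet

for t in [50, 100, 200, 400]:
    x = rng.uniform(0.0, 1.0, size=t)
    print(f"t={t:4d}   gain(rho=1) ~ {information_gain(x, 1.0):6.2f}   gain(rho=10) ~ {information_gain(x, 10.0):6.2f}")
```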
If (M_t)_t ≥ 0 is a H-valued process, we say (M_t)_t ≥ 0 is a martingale with respect to (_t)_t ≥ 0 if (a) (M_t)_t ≥ 0 is (_t)_t ≥ 0-adapted, and (b) (M_t |_t - 1) = M_t - 1 for all t ≥ 1. An -valued process is called a supermartingale if the equality in (b) is replaced with “≤”, i.e. supermartingales tend to decrease. Martingales are useful in many statistical applications due to their strong concentration of measure properties <cit.>. The follow fact can be leveraged to provide time-uniform bounds on the growth of any non-negative supermartingale. [Ville's Inequality] Let (M_t)_t ≥ 0 be a non-negative supermartingale with respect to some filtration. Suppose M_0 = 1. Then, for any δ∈ (0, 1), we have (∃ t ≥ 0 : M_t ≥1/δ) ≤δ. See <cit.> for a self-contained proof of Ville's inequality, and many applications. If is a σ-algebra, and ϵ is an -valued random variable, we say ϵ is σ-subGaussian conditioned on if, for any λ∈, we have log(e^λϵ|) ≤λ^2σ^2/2; in particular this condition implies that ϵ is mean zero. With this, we state the following result on self-normalized processes from <cit.> (based off of results in <cit.>) which is commonly leveraged to construct confidence ellipsoids in the linear bandit setting. [Theorem 1 from <cit.>] Let (_t)_t ≥ 0 be a filtration, let (X_t)_t ≥ 1 be an (_t)_t ≥ 0-predictable sequence in ^d, and let (ϵ_t)_t ≥ 1 be a real-valued (_t)_t ≥ 1-adapted sequence such that conditional on _t - 1, ϵ_t is mean zero and σ-subGaussian. Then, for any ρ > 0, the process (M_t)_t ≥ 0 given by M_t := 1/√((I_d + ρ^-1V_t))exp{1/2(ρ I_d + V_t)^-1/2S_t/σ_2^2} is a non-negative supermartingale with respect to (_t)_t ≥ 0, where S_t := ∑_s = 1^t ϵ_s X_s and V_t := ∑_s = 1^t X_sX_s^⊤. Consequently, by Fact <ref>, for any confidence δ∈ (0, 1), the following holds: with probability at least 1 - δ, simultaneously for all t ≥ 1, we have (V_t + ρ I_d)^-1/2S_t_2 ≤σ√(2log(1/δ√((I_d +ρ^-1V_t)))). Note the simple dependence on the regularization parameter ρ > 0 in the above bound. While the regularization parameter ρ doesn't prove important in regret analysis for linear bandits (where ρ is treated as constant), the choice for ρ will be critical in our setting. Our main result in the following section will extend Fact <ref> to the setting of Hilbert spaces essentially verbatim. § IMPROVED SELF-NORMALIZED CONCENTRATION IN HILLBERT SPACES We now discuss the first of our main contributions: a novel time-uniform, self-normalized concentration inequality for processes taking value in any separable Hilbert space. We use this bound in the sequel to construct simpler, more flexible confidence ellipsoids than currently exist for GP-UCB. The idea behind our contributions is straightforward — by using the existing, finite-dimensional bounds of <cit.> along with a careful limiting argument, we can show and identical result holds in infinite-dimensional Hilbert spaces. We start by presenting our new concentration inequality below. Let (_t)_t ≥ 0 be a filtration, (f_t)_t ≥ 1 be an (_t)_t ≥ 0-predictable sequence in a separable Hilbert space H such that f_t_H < ∞ a.s. for all t ≥ 0, and (ϵ_t)_t ≥ 1 be an (_t)_t ≥ 1-adapted sequence in such that conditioned on _t - 1, ϵ_t is mean zero and σ-subGaussian. Defining S_t := ∑_s = 1^t ϵ_s f_s and V_t := ∑_s = 1^t f_s f_s^⊤, we have that for any ρ > 0, the process (M_t)_t ≥ 0 defined by M_t := 1/√((𝕀_H + ρ^-1V_t))exp{1/2(ρ𝕀_H + V_t)^-1/2S_t/σ_H^2} is a nonnegative supermartingale with respect to (_t)_t ≥ 0. 
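Ville's inequality and the mixture supermartingale above can be checked together by simulation in the finite-dimensional case: the frequency with which M_t ever crosses 1/δ should not exceed δ. The dimensions, horizon, and number of repetitions below are arbitrary choices, and Gaussian noise stands in for a general σ-subGaussian sequence.

```python
import numpy as np

rng = np.random.default_rng(4)

d, T, rho, sigma, delta = 3, 200, 1.0, 1.0, 0.1

def ever_crosses():
    """Simulate one run and report whether M_t >= 1/delta at some t <= T."""
    S, V = np.zeros(d), np.zeros((d, d))
    for _ in range(T):
        x = rng.standard_normal(d)
        eps = sigma * rng.standard_normal()
        S += eps * x
        V += np.outer(x, x)
        quad = S @ np.linalg.solve(rho * np.eye(d) + V, S) / sigma**2
        _, logdet = np.linalg.slogdet(np.eye(d) + V / rho)
        log_M = 0.5 * quad - 0.5 * logdet        # log of the mixture supermartingale M_t
        if log_M >= np.log(1.0 / delta):
            return True
    return False

runs = 1000
print("crossing frequency:", sum(ever_crosses() for _ in range(runs)) / runs, "  delta =", delta)
```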
Consequently, by Fact <ref>, for any δ∈ (0, 1), with probability at least 1 - δ, simultaneously for all t ≥ 1, we have (V_t + ρ I_d)^-1/2S_t_H ≤σ√(2log(1/δ√((𝕀_H +ρ^-1V_t)))). We provide a full proof of Theorem <ref> in Appendix <ref> and provide a sketch below. We can summarize our proof in two steps. First, following from Fact <ref>, the bound in Theorem <ref> holds when we project S_t and V_t onto a finite number N of coordinates, defining a “truncated” non-negative supermartingale M_t^(N). Secondly, we can make a limiting arugment, showing M_t^(N) is “essentially” M_t for large values of N. In other words, we develop a method of mixtures for separable Hilbert spaces[A space is separable if it has a countable, dense set. Separability is key, because it means we have a countable basis, whose first N elements we project onto.]. Let (φ_n)_n ≥ 1 be an orthonormal basis for H, and, for any N ≥ 1, let π_N denote the projection operator onto H_N := {φ_1, …, φ_N}. Note that the projected process (π_N S_t)_t ≥ 1 is an H-valued martingale with respect to (_t)_t ≥ 0. Further, note that the projected variance process (π_N V_t π^⊤_N)_t ≥ 0 satisfies π_N V_t π^⊤_N = ∑_s = 1^t (π_N f_s)(π_N f_s)^⊤. Since, for any N ≥ 1, H_N is a finite-dimensional Hilbert space, it follows from Lemma <ref> that the process (M_t^(N))_t ≥ 0 given by M_t^(N) := 1/√((𝕀_H + ρ^-1π_N V_t π_N^⊤))exp{1/2(ρ𝕀_H +π_N V_t π_N^⊤)^-1/2π_N S_t_H^2}, is a non-negative supermartingale with respect to (_t)_t ≥ 0. One can check that, for any t ≥ 0, M_t^(N) M_t. Thus, Fatou's Lemma implies (M_t |_t - 1) = (lim inf_N →∞M_t^(N)|_t - 1) ≤lim inf_N →∞(M_t^(N)|_t - 1) ≤lim inf_N →∞M_t - 1^(N) = M_t - 1, which proves the first part of the claim. The second part of the claim follows from applying Fact <ref> to the defined non-negative supermartingale and rearranging. See Appendix <ref> for details. The following corollary specializes Theorem <ref> to the case where H is a RKHS and f_t = k(·, X_t), for all t ≥ 1. In this special case, we can put our results in terms of the familiar Gram matrix K_t, assuming the quantity is invertible. While we prefer the simplicity and elegance of working directly in the RKHS H in the sequel, the follow corollary allows us to present our bounds in a way that is computationally tractable. Let us assume the same setup as Theorem <ref>, and additionally assume that (a) (H, ⟨·, ·⟩_H) is a RKHS associated with some kernel k, and (b) there is some -valued (_t)_t ≥ 0-predictable process (X_t)_t ≥ 1 such that (f_t)_t ≥ 1 = (k(·, X_t))_t ≥ 1. Then, for any ρ > 0 and δ∈ (0, 1), we have that, with probability at least 1 - δ, simultaneously for all t ≥ 0, (V_t + ρ𝕀_H)^-1/2S_t_H ≤ σ√(2log(√(1/δ(I_t +ρ^-1K_t)))). If, in addition, the Gram matrix K_t = (k(X_i, X_j))_i, j ∈ [t] is invertible, we have the equality (I_t + ρ K_t^-1)^-1/2ϵ_1:t_2 = (ρ𝕀_H + V_t)^-1/2S_t_H . We prove Corollary <ref> in Appendix <ref>. We compare our bound to the following result from <cit.>. [Theorem 1 from <cit.>] Assume the same setup as Fact <ref>. Let η > 0 be arbitrary, and let K_t := (k(X_i, X_j))_i, j ∈ [t] be the Gram matrix corresponding to observations made by time t ≥ 1. Then, with probability at least 1 - δ, simultaneously for all t ≥ 1, we have ((K_t + η I_t)^-1 + I_t)^-1/2ϵ_1:t_2 ≤σ√(2log(1/δ√(((1 + η)I_t + K_t)))). To the best of our knowledge, Fact <ref> is the only existing result on self-normalized, time-uniform concentration in RKHS's. 
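The equality in the corollary above between the kernel form and the feature-space form can be verified numerically with an explicit feature map, under which K_t = Φ_tΦ_t^⊤ is (almost surely) invertible as long as t does not exceed the feature dimension; the dimensions and ρ below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

# Check: ||(I_t + rho K_t^{-1})^{-1/2} eps||^2 = ||(rho I + V_t)^{-1/2} S_t||^2,
# with S_t = Phi^T eps, V_t = Phi^T Phi, K_t = Phi Phi^T for an explicit Phi.
t, d, rho = 6, 9, 0.7            # d > t so that K_t is invertible almost surely
Phi = rng.standard_normal((t, d))
eps = rng.standard_normal(t)

K = Phi @ Phi.T
S = Phi.T @ eps
V = Phi.T @ Phi

lhs_sq = eps @ np.linalg.solve(np.eye(t) + rho * np.linalg.inv(K), eps)
rhs_sq = S @ np.linalg.solve(rho * np.eye(d) + V, S)
print(lhs_sq, rhs_sq)            # agree up to floating-point error
assert np.isclose(lhs_sq, rhs_sq)
```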
We parameterize the bounds in the above fact in terms of η > 0 instead of ρ > 0 to emphasize the following difference: both sides of our bounds shrink as ρ is increased, whereas both sides of the bound in Fact <ref> increase as η grows. Thus, increasing ρ in our bound should be seen as decreasing η in the bound of <cit.>. The bounds in Corollary <ref> and Fact <ref> coincide when ρ = 1 and η↓ 0 (per Lemma 1 in <cit.>), but are otherwise not equivalent for other choices of ρ and η. We believe our bounds to be more usable than those of <cit.> for several reasons. First, our bounds directly extend the method of mixtures (in particular, Fact <ref>) to potentially infinite-dimensional Hilbert spaces. This allows us to directly leverage existing analysis from <cit.> to prove our regret bounds, with only slight modifications. This is in contrast to the more cumbersome regret analysis that leverages Fact <ref>, which is not only more difficult to follow, but also obtains inferior regret guarantees. Second, we note that our bound has a simple dependence on ρ > 0. In more detail, directly as a byproduct of our simpler bounds, Theorem <ref> offers a regret bound that can readily be tuned in terms of ρ. Due to their use of a “double mixture” technique in proving Fact <ref>, <cit.> essentially wind up with a nested, doubly-regularized matrix ((K_t + η I_t)^-1 + I_t)^-1/2 with which they normalize the residuals ϵ_1:t. In particular, this more complicated normalization make it difficult to understand how varying η impacts regret guarantees, which we find to be essential for proving improved regret guarantees. Note that both sides of our bound shrink with ρ, approaching 0 as ρ approaches infinity. This is expected, as regularization serves to decrease the impact of noise (which is the term we are bounding) on the regression estimate, thus preventing overfitting. Since both sides of the bound in Fact <ref> increase when ρ grows, to make a fair comparison, we must consider the regime where ρ tends toward 0. In this case, the bound reduces to (K_t^-1 + I_t)^-1/2ϵ_1:t_2 ≤σ√(2log(1/δ√((I_t + K_t)))). Note that this is precisely equivalent to Corollary <ref> when ρ is set to 1. However, contrary to intuition, no setting of ρ causes both sides of the bound in Fact <ref> to tend towards zero. This ultimately prevents the use of appropriate regularization as a tool for minimizing regret, as in the sequel we will be selecting ρ in proportion to the time horizon T, thus causing both sides of the bound to tend towards zero. We go in greater depth in Section <ref> in showing how to appropriately regularize GP-UCB. § AN IMPROVED REGRET ANALYSIS OF GP-UCB In this section, we provide the second of our main contributions, which is an improved regret analysis for the GP-UCB algorithm. We provide a description of GP-UCB in Algorithm <ref>. While we state the algorithm directly in terms of quantities in the RKHS H, these quantities can be readily converted to those involving Gram matrices or Gaussian processes for those who prefer that perspective <cit.>. As seen in Section <ref>, by carefully extending the “method of mixtures” bounds of <cit.> and <cit.> to Hilbert spaces, we can construct self-normalized concentration inequalities that have simple dependence on the regularization parameter ρ. These simplified bounds, in conjunction with information about the eigendecay of the kernel k <cit.>, can be combined to carefully choose ρ to obtain improved regret. We now present our main result. 
We assume that (a) there is some constant D > 0 known to the learner such that f^∗_H ≤ D and (b) for every t ≥ 1, ϵ_t is σ-subGaussian conditioned on σ(Y_1:t - 1, X_1:t). With the above paragraph in mind, we now present the main result of our work. Let T > 0 be a fixed time horizon, ρ > 0 a regularization parameter, and assume Assumptions <ref> and <ref> hold. Let δ∈ (0, 1), and for t ≥ 1 define U_t := σ√(2log(1/δ√((𝕀_H + ρ^-1V_t)))) + ρ^1/2D. Then, with probability at least 1 - δ, the regret of Algorithm <ref> run with parameters ρ, (U_t)_t ≥ 1, D satisfies R_T = O(γ_T(ρ)√(T) + √(ργ_T(ρ)T)), where in the big-Oh notation above we treat δ, D, σ, B, and L as being held constant. If the kernel k experiences (C, β)-polynomial eigendecay for some C > 0 and β > 1, taking ρ = T^1/1 + β yields R_T = O(T^3 + β/2 + 2β)[The notation O suppresses multiplicative, poly-logarithmic factors in T], which is always sub-linear in T. We specialize the above theorem to the case of the Matérn kernel in the following corollary. Definition <ref> states that the Matérn kernel with smoothness ν > 1/2 in dimension d experiences (C, 2ν + d/d)-eigendecay, for some constnat C > 0. Thus, GP-UCB obtains a regret rate of R_T = O(T^ν + 2d/2ν + 2d). We note that our regret analysis is the first to show that GP-UCB attains sublinear regret for general kernels experiencing polynomial eigendecay. Of particular import is that Corollary <ref> of Theorem <ref> yields the first analysis of GP-UCB that implies sublinear regret for the Matérn kernel under general settings of ambient dimension d and smoothness ν. We note that our analysis does not obtain optimal regret, as the theoretically interesting but computationally cumbersome SupKernelUCB algorithm <cit.> obtains a slightly improved regret bound of O(T^β + 1/2β) for (C, β)-polynomial eigendecay and O(T^ν + d/2ν + d) for the Matérn kernel with smoothness ν in dimension d. We hypothesize that it may not be possible to construct self-normalized concentration inequalities under ·_H that obtain this bound, and supply a heuristic justification in the conclusion. We now provide a sketch of the proof of Theorem <ref>. The entire proof, along with full statements and proofs of the technical lemmas, can be found in Appendix <ref>. Letting, for any t ∈ [T], the “instantaneous regret” be defined as r_t := f^∗(x^∗) - f^∗(X_t), a standard argument yields that, with probability at least 1 - δ, simultaneously for all t ∈ [T], r_t ≤ 2 U_t - 1(ρ𝕀_H + V_t - 1)^-1/2k(·, X_t)_H. A further standard argument using Cauchy-Schwarz and an elliptical potential argument yields R_T = ∑_t = 1^T r_t ≤ U_T√(2Tlog(𝕀_H + ρ^-1V_T)) = (σ√(2log(1/δ√((𝕀_H + ρ^-1V_T)))) + ρ^1/2D)√(2Tlog(𝕀_H + ρ^-1V_T)) ≤(σ√(2log(1/δ)) + σ√(2γ_T(ρ)) + ρ^1/2D)√(4Tγ_T(ρ)) = O(γ_T(ρ)√(T) + √(ργ_T(ρ)T)), which proves the first part of the claim. If, additionally, k experiences (C, β)-polynomial eigendecay, we know that γ_T(ρ) = O((T/ρ)^1/β) by Fact <ref>. Setting ρ := T^1/1 + β thus yields R_T = O(γ_T(ρ)√(T) + √(ργ_T(ρ)T)) = O(T^3 + β/2 + 2β), proving the second part of the claim. § CONCLUSION In this work, we present an improved analysis for the GP-UCB algorithm in the kernelized bandit problem. We provide the first analysis showing that GP-UCB obtains sublinear regret when the underlying kernel k experiences polynomial eigendecay, which in particular implies sublinear regret rates for the practically relevant Matérn kernel. 
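For intuition, here is a minimal GP-UCB-style loop assembled from the quantities in the theorem above: the kernel ridge mean, the width (ρ𝕀_H + V_t - 1)^-1/2k(·, x)_H (computed through the standard posterior-variance formula), and the radius U_t. This is only a sketch under assumptions: the test function, kernel, horizon, and the use of a fixed ρ rather than ρ = T^1/(1+β) are illustrative, and the exact parameterization of Algorithm <ref> is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

def rbf(x, y, ell=0.2):
    return np.exp(-0.5 * (np.asarray(x)[:, None] - np.asarray(y)[None, :]) ** 2 / ell ** 2)

grid = np.linspace(0.0, 1.0, 200)
f_star = lambda x: np.sin(6.0 * x) * np.exp(-x)       # stand-in for the unknown f*
sigma, D, delta, rho, T = 0.1, 2.0, 0.05, 1.0, 60
x_opt = grid[np.argmax(f_star(grid))]

X, Y, regret = [], [], 0.0
for t in range(T):
    if not X:
        x_t = rng.choice(grid)
    else:
        K = rbf(X, X)
        Kreg = K + rho * np.eye(len(X))
        kx = rbf(X, grid)                                          # (t-1, |grid|) cross-kernel
        mean = kx.T @ np.linalg.solve(Kreg, np.array(Y))           # kernel ridge estimate
        # ||(rho I + V_{t-1})^{-1/2} k(., x)||^2 = (k(x,x) - k_x^T (K + rho I)^{-1} k_x) / rho,
        # with k(x, x) = 1 for this kernel.
        width_sq = (1.0 - np.einsum('ij,ij->j', kx, np.linalg.solve(Kreg, kx))) / rho
        _, logdet = np.linalg.slogdet(np.eye(len(X)) + K / rho)
        U = sigma * np.sqrt(2 * (0.5 * logdet + np.log(1 / delta))) + np.sqrt(rho) * D
        x_t = grid[np.argmax(mean + U * np.sqrt(np.maximum(width_sq, 0.0)))]
    Y.append(float(f_star(x_t)) + sigma * rng.standard_normal())
    X.append(float(x_t))
    regret += float(f_star(x_opt) - f_star(x_t))
print("cumulative regret after", T, "rounds:", round(regret, 3))
```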
In particular, we show GP-UCB obtains regret O(T^3 + β/2 + 2β) when k experiences (C, β)-polynomial eigendecay, and regret O(T^ν + 2d/2ν + 2d) for the Matérn kernel with smoothness ν in dimension d. Our technical contributions are twofold. First, we show how to extend self-normalized concentration inequalities for finite-dimensional, Euclidean spaces directly to the case of Hilbert spaces through carefully making a truncation argument. Second, we demonstrate the importance of regularization in the kernelized bandit problem. In particular, since the smoothness of the kernel k governs the hardness of learning, by regularizing in proportion to the rate of eigendecay of k, one can obtain significantly improved regret bounds. A shortcoming of our work is that, despite obtaining the first generally sublinear regret bounds for GP-UCB, our rates are not optimal. In particular, there are discretization-based algorithms, such as SupKernelUCB <cit.>, which obtain slightly better regret bounds of O(T^1 + β/2β) for (C, β)-polynomial eigendecay. We hypothesize that the vanilla GP-UCB algorithm, which involves constructing confidence ellipsoids directly in the RKHS H, cannot obtain this rate. The common line of reasoning <cit.> is that because the Lin-UCB (the equivalent algorithm in ^d) obtains the optimal regret rate of O(d√(T)) in the linear bandit problem setting, then GP-UCB should attain optimal regret as well. In the linear bandit setting, there is no subtlety between estimating the optimal action and unknown slope vector, as these are one and the same. In the kernel bandit setting, estimating the function and optimal action are not equivalent tasks. In particular, the former serves in essence as a nuisance parameter in estimating the latter: tight estimation of unknown function under the Hilbert space norm implies tight estimation of the optimal action, but not the other way around. Existing optimal algorithms are successful because they discretize the input domain, which has finite metric dimension <cit.>, and make no attempts to estimate the unknown function in RKHS norm. Since compact sets in RKHS's do not, in general, have finite metric dimension <cit.>, this makes estimation of the unknown function a strictly more difficult task. We leave the verification of this hypothesis as future work. § ACKNOWLEDGEMENTS AR acknowledges support from NSF DMS 1916320 and an ARL IoBT CRA grant. Research reported in this paper was sponsored in part by the DEVCOM Army Research Laboratory under Cooperative Agreement W911NF-17-2-0196 (ARL IoBT CRA). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. ZSW and JW were supported in part by the NSF CNS2120667, a CyLab 2021 grant, a Google Faculty Research Award, and a Mozilla Research Grant. JW acknowledges support from NSF GRFP grants DGE1745016 and DGE2140739. plainnat § RELATED WORK The kernelized bandit problem was first studied by <cit.>, who introduce the GP-UCB algorithm and characterize its regret in both the Bayesian and Frequentist setting. 
While the authors demonstrate that GP-UCB obtains sublinear regret in the Bayesian setting for the commonly used kernels, their bounds fail to be sublinear in general in the frequentist setting for the Matérn kernel, one of the most popular kernel choices in practice. <cit.> further study the performance of GP-UCB in the frequentist setting. In particular, by leveraging a martingale-based “double mixture” argument, the authors are able to significantly simplify the confidence bounds presented in <cit.>. Unfortunately, the arguments introduced by <cit.> did not improve regret bounds beyond logarithmic factors, and thus GP-UCB continued to fail to obtain sublinear regret for certain kernels in their work. There are many other algorithms that have been created for kernelized bandits. <cit.> introduce an algorithm specific to the Matérn kernel that obtains significantly improved regret over GP-UCB. This algorithm adaptively partitions the input domain into small hypercubes and running an instance of GP-UCB in each element of the discretized domain. <cit.> introduce an algorithm called LP-GP-UCB, which augments the GP-UCB estimator with local polynomial corrections. While in the worst case this algorithm recovers the regret bound of <cit.>, if additional information is known about the unknown function f^∗ (e.g. it is Holder continuous), it can provide improved regret guarantees. Perhaps the most important non-GP-UCB algorithm in the literature is the SupKernel algorithm introduced by <cit.>, which discretizes the input domain and successively eliminates actions from play. This algorithm is signficant because, despite its complicated nature, it obtains regret rates that match known lower bounds provided by <cit.> up to logarithmic factors. Intimately tied to the kernelized bandit problem is the information-theoretic quantity of maximum information gain <cit.>, which is a sequential, kernel-specific measure of hardness of learning. Almost all preceding algorithms provide regret bounds in terms of the max information gain. Of particular import for our paper is the work of <cit.>. In this work, the authors use a truncation argument to upper bound the maximum information gain of kernels in terms of their eigendecay. We directly employ these bounds in our improved analysis of GP-UCB. The max-information gain bounds presented in <cit.> can be coupled with the regret analysis in <cit.> to yield a regret bound of O(T^ν + 3d/2/2ν + d) in the case of the Matérn kernel with smoothness ν in dimension d. In particular, when ν≤d/2, this regret bound fails to be sublinear. In practical setting, d is viewed as large and ν is taken to be 3/2 or 5/2, making these bounds vacuous <cit.> The regret bounds in this paper are sublinear for any selection of smoothness ν > 1/2 and d ≥ 1. Moreover, a simple computation yields that our regret bounds strictly improve over (in terms of d and ν) those implied by <cit.>. Last, we touch upon the topic of self-normalized concentration, which is an integral tool for constructing confidence bounds in UCB-like algorithms. Heuristically, self-normalized aims to sequentially control the growth of processes that have been rescaled by their variance to look, roughly speaking, normally (or subGaussian) distributed. The prototypical example of self-normalized concentration in the bandit literature comes from <cit.>, wherein the authors use a well known technique called the “method of mixtures” to construct confidence ellipsoids for finite dimensional online regression estimates. 
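The exponent comparison above is easy to tabulate; the short script below evaluates, for a few (ν, d) pairs, the Matérn regret exponents quoted in this section — the earlier GP-UCB analyses, the bound in this paper, and SupKernelUCB. An exponent of at least one means the corresponding bound is vacuous (not sublinear).

```python
# Matern kernel, smoothness nu, dimension d:
#   earlier GP-UCB analyses:   (nu + 3d/2) / (2 nu + d)
#   this paper's GP-UCB bound: (nu + 2d)   / (2 nu + 2d)
#   SupKernelUCB:              (nu + d)    / (2 nu + d)
for nu in (1.5, 2.5):
    for d in (1, 2, 4, 8):
        old = (nu + 1.5 * d) / (2 * nu + d)
        new = (nu + 2 * d) / (2 * nu + 2 * d)
        sup = (nu + d) / (2 * nu + d)
        print(f"nu={nu}, d={d}:  earlier={old:.2f}  this paper={new:.2f}  SupKernelUCB={sup:.2f}")
```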
The concentration result in the aforementioned work is a specialization of results in <cit.>, which provide self-normalized concentration for a wide variety of martingale-related processes, several of which have been recently improved <cit.>. <cit.> extend the results of <cit.> to the kernel setting using a “double mixture” technique, allowing them to construct self-normalized concentration inequalities for infinite-dimensional processes. We also use the method of mixtures, but a much simpler finite-dimensional version, as we explain next. § TECHNICAL LEMMAS FOR THEOREM <REF> In this appendix, prove Theorem <ref> along with several corresponding technical lemmas. While many of the following results are intuitively true, we provide their proofs in full rigor, as there can be subtleties when working in infinite-dimensional spaces. Throughout, we assume that the subGaussian noise parameter is σ = 1. The general case can readily be recovered by considering the rescaled process (S_t/σ)_t ≥ 0. The first lemma we present is a restriction of Theorem <ref> to the case where the underlying Hilbert space (H, ⟨·, ·⟩_H) is finite dimensional, say of dimension N. In this setting, the result essentially follows immediately from Fact <ref>. All we need to do is construct a natural isometric isomorphism between the spaces H and ^N, and then argue that applying such a mapping doesn't alter the norm of the self-normalized process. Theorem <ref> holds if we additionally assume that H is finite dimensional, i.e. if there exists N ≥ 1 and orthonormal functions φ_1, …, φ_N such that H := {φ_1, …, φ_N}. Let τ : H →^N be the map that takes a function f = ∑_n = 1^Nθ_nφ_n ∈ H to its natural embedding τ f := (θ_1, …, θ_N)^⊤∈^N. Not only is the map τ an isomorphism between H and ^N, but it is also an isometry, i.e. f_H = τ f_2 for all f ∈ H. Further, τ satisfies the relation τ^⊤ = τ^-1. Define the “hatted” processes (S_t)_t ≥ 1 and (V_t)_t ≥ 1, which take values in ^N and ^N × N respectively as S_t = ∑_s = 1^t ϵ_s τ k(·, X_s) and V_t = ∑_s = 1^t (τ k(·, X_s))(τ k(·, X_s))^⊤. It is not hard to see that, by the linearity of τ, that for any t ≥ 1, we have S_t = τ S_t and V_t = τ V_t τ^⊤. We observe that (a) (V_t + ρ I_N)^-1/2 = τ(V_t + ρ𝕀_H)^-1/2τ^⊤ and (b) that the eigenvalues of V_t are exactly those of V_t. Since the processes (S_t)_t ≥ 1 and (V_t)_t ≥ 1 satisfy the assumptions of Theorem <ref>, we see that the process (M_t)_t ≥ 0 given by M_t := 1/√((I_N + ρ^-1V_t))exp{1/2(ρ I_N + V_t)^-1/2S_t_2^2} is a non-negative supermartingale with respect to (_t)_t ≥ 0. From observation (a), the fact τ is an isometry, and the fact τ^⊤ = τ^-1, it follows that (V_t + ρ I_N)^-1/2S_t_2 = τ(V_t + ρ𝕀_H)^-1/2τ^⊤τ S_t_2 = (V_t + ρ𝕀_H)^-1/2τ^-1τ S_t_H = (V_t + ρ𝕀_H)^-1/2S_t_H. Further, observation (b) implies that (I_N + ρV_t) = (𝕀_H + ρ V_t). Substituting these identities into the definition of (M_t)_t ≥ 0 yields the desired result, i.e. that M_t = 1/√((𝕀_H +ρ^-1V_t))exp{1/2(V_t + ρ I_d)^-1/2S_t_H^2}. is a non-negative supermartingale with respect to (_t)_t ≥ 0. The remainder of the result follows from applying Ville's Inequality (Fact <ref>) and rearranging. We can prove Theorem <ref> by truncating the Hilbert space H onto the first N components, applying Lemma <ref> to the “truncated” processes (π_N S_t)_t ≥ 0 and (π_N V_t π_N)_t ≥ 0 to construct a relevant, non-negative supermartingale M_t^(N), and then show that the error from truncation in this non-negative supermartingale tends towards zero as N grows large. 
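The truncation strategy just described can be illustrated numerically by letting a high- but finite-dimensional space stand in for H: with square-summable coordinates, the determinant of the projected covariance operator approaches that of the full one as the truncation level N grows. The ambient dimension, decay rate of the coordinates, and ρ below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

dim, t, rho = 400, 10, 1.0                      # 400-dim ambient space standing in for H
n = np.arange(1, dim + 1)
F = rng.standard_normal((t, dim)) / n**1.5      # rows f_s with square-summable coordinates
V = F.T @ F                                     # covariance operator V_t

def logdet(A):
    return np.linalg.slogdet(np.eye(dim) + A / rho)[1]

target = logdet(V)
for N in [5, 20, 80, 400]:
    P = np.diag((n <= N).astype(float))         # projection pi_N onto the first N coordinates
    print(f"N={N:4d}   log det(I + pi_N V pi_N / rho) = {logdet(P @ V @ P):.6f}   (full: {target:.6f})")
```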
The following two technical lemmas are useful in showing that this latter truncation tends towards zero. For any t ≥ 1, let V_t be as in the statement of Theorem <ref>, and let π_N be as in Section <ref>. Then, we have π_N V_t π_N V_t, where the above convergence holds under the operator norm on H. Fix ϵ>0, t ≥ 1, and for s ∈ [t], let us write f_s = ∑_n = 1^∞θ_n(s) φ_n. Since we have assumed f_t_H < ∞ for all t ≥ 1, there exists some N_t < ∞ such that, for all s ∈ [t], π_N_t^⊥ f_s_H^2 = ∑_n = N_t + 1^∞θ_n(s)^2 < ϵ/2t. We also have, for any s ∈ [t] and N ≥ 1, that f_s is an eigenfunction of f_sf_s^⊤π_N^⊥ = f_s⟨ f_s, π_N^⊥(·)⟩_H with corresponding (unique) eigenvalue f_sf_s^⊤π_N^⊥_op = λ_max(f_sf_s^⊤π_N^⊥) = π_N^⊥ f_s_H^2 = ∑_n = N + 1^∞θ_n(s)^2. Observe that, as an orthogonal projection operator, π_N is self-adjoint, i.e. π_N = π_N^⊤. With this information, we see that, for N ≥ N_t, we have π_N V_t π_N - V_t_op ≤∑_s = 1^tπ_N f_s f_s^⊤π_N - f_sf_s^⊤_op = ∑_s = 1^t π_N f_s f_s^⊤π_N - π_N f_s f_s^⊤ + π_N f_s f_s^⊤ - f_s f_s^⊤_op ≤∑_s = 1^t π_N f_s f_s^⊤π_N - π_N f_s f_s^⊤_op + π_N f_s f_s^⊤ - f_s f_s^⊤_op ≤∑_s = 1^t π_N_opf_s f_s^⊤π_N - f_s f_s^⊤_op + π_N f_s f_s^⊤ - f_s f_s^⊤_op = ∑_s = 1^t 2f_s f_s^⊤π_N^⊥_op = ∑_s = 1^t 2π_N^⊥ f_s_H^2 < ϵ. Since ϵ > 0 was arbitrary, we have shown the desired result. For any t ≥ 1, let V_t be as in Theorem <ref>, ρ > 0 arbitrary, and π_N as in Section <ref>. Then, we have (𝕀_H + ρ^-1π_N V_t π_N) (𝕀_H + ρ^-1V_t). We know that the mapping A ↦(𝕀_H + A) is continuous under the “trace norm” A_1 := ∑_n = 1^∞ |λ_n(A)| <cit.>. Thus, to show the desired result, it suffices to show that π_N V_t π_N - V_t_1 0. Observe that both π_N V_t π_N and V_t are operators of rank at most t, so so their difference π_N V_t π_N - V_t has rank at most 2t. Thus, we know that π_N V_t π_N - V_t_1≤ 2tπ_N V_t π_N - V_t_op 0, where the final convergence follows from Lemma <ref>. Thus, we have shown the desired result. We now tie together all of these technical (but intuitive) results in the proof of Theorem <ref> below. Let (φ_n)_n ≥ 1 be an orthonormal basis for H, and for N ≥ 1, let π_N denote the projection operator outlined in Section <ref>. Recall that π_N = π_N^⊤. Further H_N := {φ_1, …, φ_N}⊂ H is the image of H under π_N. Since (S_t)_t ≥ 0 is an H-valued martingale with respect to (_t)_t ≥ 1, it follows that the projected process (π_N S_t)_t ≥ 1 is an H_N-valued martingale with respect to (_t)_t ≥ 0. Further, note that the projected variance process (π_N V_t π^⊤_N)_t ≥ 0 satisfies π_N V_t π^⊤_N = ∑_s = 1^t (π_N f_s)(π_N f_s)^⊤. Since, for any N ≥ 1, H_N is a finite-dimensional Hilbert space, it follows from Lemma <ref> that the process (M_t^(N))_t ≥ 0 given by M_t^(N) := 1/√((𝕀_H_N + ρ^-1π_N V_t π_N^⊤))exp{1/2(ρ𝕀_H_N +π_N V_t π_N^⊤)^-1/2π_N S_t_H_N^2} = 1/√((𝕀_H + ρ^-1π_N V_t π_N^⊤))exp{1/2(ρ𝕀_H +π_N V_t π_N^⊤)^-1/2π_N S_t_H^2}, is a non-negative supermartingale with respect to (_t)_t ≥ 0. In the above 𝕀_H_N denotes the identity 𝕀_H restricted to H_N ⊂ H and denotes the determinant restricted to the subspace H_N. The equivalence of the second and third terms above is trivial. We now argue that for any t ≥ 1, lim_N →∞M_t^(N) = M_t. If we show this to be true, then we have, for any t ≥ 1 (M_t |_t - 1) = (lim inf_N →∞M_t^(N)|_t - 1) ≤lim inf_N →∞(M_t^(N)|_t - 1) ≤lim inf_N →∞M_t - 1^(N) = M_t - 1, which implies (M_t)_t ≥ 0 is a non-negative supermartingale with respect to (_t)_t ≥ 0 thus proving the result. 
In the above, the first inequality follows from Fatou's lemma for conditional expectations (see <cit.>, for instance), and the second inequality follows from the supermartingale property. Lemma <ref> tells us that (𝕀_H + ρ^-1π_N V_t π_N) (𝕀_H + ρ^-1 V_t) for all t ≥ 1, so to show the desired convergence in (<ref>), it suffices to show that (ρ𝕀_H + π_NV_tπ_N)^-1/2π_NS_t_H (ρ𝕀_H + V_t)^-1/2S_t_H for any t. Let _t := ρ𝕀_H + V_t and _t(N) := ρ𝕀_H + π_N V_t π_N in the following line of reason for simplicity. We trivially have |_t(N)^-1/2π_NS_t_H - _t^-1/2S_t_H| ≤_t(N)^-1/2π_NS_t - _t^-1/2S_t_H = _t(N)^-1/2π_NS_t - _t(N)^-1/2S_t + _t(N)^-1/2S_t - _t^-1/2S_t_H ≤_t(N)^-1/2_opπ_N^⊥ S_t_H + _t(N)^-1/2 - _t^-1/2_opS_t_H 0. as lim_N →∞π_N^⊥ f = 0 for any f ∈ H of finite norm, and Lemma <ref> tells us that V_t - π_N V_t π_N_op 0, which in turn implies that _t(N)^-1/2 - _t^-1/2_op = (ρ𝕀_H + π_NV_tπ_N)^-1/2 - (ρ𝕀_H + V_t)^-1/2_H 0. Thus, we have shown the desired result. The second part of the claim follows from a direct application of Fact <ref> and rearranging. As a final result in this appendix, we provide a proof of Corollary <ref>. This corollary allows for a more direct comparison of our results with those of <cit.>. Our proof is a simple generalization Lemma 1 in the aforementioned paper to the case of arbitrary regularization parameters. The first result is straightforward, and follows from the identity (𝕀_H + ρ^-1V_t) = (I_t + ρ^-1K_t), which we bring to attention in Section <ref>. The second result follows from the following line of reasoning. Before proceeding, recall that Φ_t := (k(·, X_1), …, k(·, X_t))^⊤, V_t = Φ_t^⊤Φ_t, K_t = Φ_tΦ_t^⊤ and S_t = ∑_s = 1^tϵ_s k(·, X_s) = Φ_t^⊤ϵ_1:t. (ρ𝕀_H + V_t)^-1S_t_H^2 = ϵ_1:t^⊤Φ_t (ρ𝕀_H + Φ_t^⊤Φ_t)^-1Φ_t^⊤ϵ_1:t = ϵ_1:t^⊤(ρ^-1/2Φ_t)(𝕀_H + (ρ^-1/2Φ_t)^⊤(ρ^-1/2Φ_t))^-1(ρ^-1/2Φ_t)^⊤ϵ_1:t = ϵ_1:t^T ρ^-1Φ_tΦ_t^⊤(I_t + ρ^-1Φ_tΦ_t^⊤)^-1ϵ_1:t = ϵ_1:t^⊤ (ρ^-1K_t)(I_t + ρ^-1K_t)^-1ϵ_1:t = ϵ_1:t^⊤(I_t + ρ K_t^-1)^-1ϵ_1:t = (I_t + ρ K_t^-1)^-1/2ϵ_1:t_2^2. In the above, the second equality comes from pulling out a multiplicative factor of ρ form the center operator inverse. The third inequality comes from the famed “push through” identity. Lastly, the second to last equality comes from observing that (a) ρ^-1K_t and (I_t + ρ^-1 K_t)^-1 are simultaneously diagonalizable matrices and (b) for scalars, we have the identity (1 + a^-1)^-1 = a(1 + a)^-1. Thus, we have shown the desired result. § TECHNICAL LEMMAS FOR THEOREM <REF> In this appendix, we provide various technical lemmas needed for the proof of Theorem <ref>. We then follow these lemmas with a full proof of Theorem <ref>, which extends the sketch provided in the main body of the paper. Most of the following technical lemmas either already exist in the literature <cit.> or are extensions of what is known in the case of finite-dimensional, linear bandits <cit.>. We nonetheless provide self-contained proofs for the sake of completeness. Let (f_t)_t ≥ 1 be the sequence of functions defined in Algorithm <ref>, and assume Assumption <ref> holds. Let δ∈ (0, 1) be an arbitrary confidence parameter. Then, with probability at least 1 - δ, simultaneously for all t ≥ 1, we have (V_t + ρ𝕀_H)^1/2(f_t - f^∗)_H ≤σ√(2log(1/δ√((𝕀_H + ρ^-1V_t)))) + ρ^1/2D, where we recall that the right hand side equals U_t. 
First, observe that we have f_t - f^∗ = (ρ𝕀_H + V_t)^-1Φ_t^⊤ Y_1:t - f^∗ = (ρ𝕀_H + V_t)^-1Φ_t^⊤ (Φ_t f^∗ + ϵ_1:t) - f^∗ = (ρ𝕀_H + V_t)^-1Φ_t^⊤ (Φ_t f^∗ + ϵ_1:t) - f^∗±ρ(ρ𝕀_H + V_t)^-1f^∗ = (ρ𝕀_H + V_t)^-1Φ_t^⊤ϵ_1:t - ρ (ρ𝕀_H + V_t)^-1f^∗. Applying the triangle inequality to the above, we have (ρ𝕀_H + V_t)^1/2(f_t - f^∗)_H ≤(ρ𝕀_H + V_t)^-1/2Φ_t^⊤ϵ_1:t_H + ρ(ρ𝕀_H + V_t)^-1/2f^∗_H ≤σ√(2log(1/δ√((𝕀_H + ρ^-1V_t)))) + ρ^1/2D. To justify the final inequality, we look at each term separately. For the first term, observe that V_t = ρ𝕀_H + ∑_s = 1^t k(·, X_t)k(·, X_t)^⊤ and S_t := Φ_t^⊤ϵ_1:t = ∑_s = 1^t ϵ_s k(·, X_s). Thus, we are in the setting of Theorem <ref>, and thus have, with probability at least 1 - δ, simultaneously for all t ≥ 0, (ρ𝕀_H + V_t)^-1/2Φ_t^⊤ϵ_1:t_H ≤σ√(2log(1/δ√((𝕀_H + ρ^-1V_t)))). For the second term, observe that (a) λ_min(ρ𝕀_H + V_t) ≥ρ and (b) by Assumption <ref>, we have f^∗_H ≤ D. Thus applying Holder's inequality, we have, deterministically ρ(ρ𝕀_H + V_t)^-1/2f^∗_H ≤ρ(ρ𝕀_H + V_t)^-1/2_opf^∗_H ≤ρ^1/2f^∗_H ≤ρ^1/2D. These together give us the desired result. The following “elliptical potential” lemma, abstractly, aims to control the the growth of the squared, self-normalized norm of the selected actions. We more or less port the argument from <cit.>, which provides an analogue in the linear stochastic bandit case. We just need to be mildly careful to work around the fact we are using Fredholm determinants. For any t ≥ 1, let V_t be the covariance operator defined in Algorithm <ref>, and let ρ > 0 be arbitrary. We have the identity (𝕀_H + ρ^-1V_t) = ∏_s = 1^t(1 + (ρ𝕀_H + V_s - 1)^-1/2k(·, X_s)_H^2). In particular, if ρ≥ 1 L, where L is the bound outlined in Assumption <ref>, we have ∑_s = 1^t (ρ𝕀_H + V_s - 1)^-1/2k(·, X_s)_H^2 ≤ 2log(𝕀_H + ρ^-1V_t). Let H_t ⊂ H be the finite-dimensional Hilbert space H_t := {k(·, X_1), …, k(·, X_t)}. Let _H_t denote the determinant restricted to H_t, i.e. the map that acts on a (symmetric) operator A : H_t → H_t by _H_t(A) := ∏_s = 1^t λ_s(A), where λ_1(A), …, λ_t(A) are the enumerated eigenvalues of A. Observe the identity (𝕀_H + ρ^-1V_t) = _H_t(𝕀_H_t + ρ^-1V_t), where we recall the determinant on the lefthand side is the Fredholm determinant, as defined in Section <ref>. Next, following the same line of reasoning as <cit.>, we have _H_t(ρ𝕀_H_t + V_t) = _H_t(ρ𝕀_H_t + V_t - 1)_H_t(𝕀_H_t + (ρ𝕀_H_t + V_t - 1)^-1/2k(·, X_t)k(·, X_t)^⊤ (ρ𝕀_H_t + V_t - 1)^-1/2) =_H_t(ρ𝕀_H_t + V_t - 1)(1 + (ρ𝕀_H_t + V_t - 1)^-1/2k(·, X_t)_H^2) = ⋯ (Iterating t - 1 more times) = _H_t(ρ𝕀_H)∏_s = 1^t(1 + (ρ𝕀_H_t + V_s - 1)^-1/2k(·, X_s)_H^2) = _H_t(ρ𝕀_H)∏_s = 1^t(1 + (ρ𝕀_H + V_s - 1)^-1/2k(·, X_s)_H^2), where the last equality comes from realizing, for all s ∈ [t], (ρ𝕀_H_t + V_s -1)^-1/2k(·, X_s)_H = (ρ𝕀_H + V_s - 1)^-1/2k(·, X_s)_H. Thus, rearranging yields _H_t(𝕀_H_t + ρ^-1V_t) = ∏_s = 1^t(1 + (ρ𝕀_H + V_s - 1)^-1/2k(·, X_s)_H^2), which yields the first part of the claim. Now, to see the second part of the claim, observe the bound x ≤ 2log(1 + x), ∀ x ∈ [0, 1]. Observing that, for all s ∈ [t], (ρ𝕀_H + V_s - 1)^-1/2k(·, X_s)_H ≤ 1 when ρ≥ 1 L, we have ∑_s = 1^t (ρ𝕀_H + V_s - 1)^-1/2k(·, X_s)_H^2 ≤ 2∑_s = 1^t log(1 + (ρ𝕀_H + V_s - 1)^-1/2k(·, X_s)_H^2) = 2log(∏_s = 1^t(1 + (ρ𝕀_H + V_s - 1)^-1/2k(·, X_s)_H^2)) = 2log(𝕀_H + ρ^-1V_t), proving the second part of the lemma. With the above lemmas, along with the concentration results provided by Theorem <ref>, we can provide a full proof for Theorem <ref>. 
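The elliptical-potential bound just established can be checked directly in a finite-dimensional instance. The sketch below reads the lemma's condition (rendered above as 'ρ≥ 1 L') as ρ ≥ max(1, L), with L bounding k(x, x) — here the squared norm of the explicit feature vectors; dimensions and horizon are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)

# Check: sum_s ||(rho I + V_{s-1})^{-1/2} x_s||^2 <= 2 log det(I + V_T / rho),
# with features satisfying ||x_s||^2 <= 1 (= L) and rho >= max(1, L).
d, T, rho = 4, 300, 1.0
X = rng.uniform(-1.0, 1.0, size=(T, d)) / np.sqrt(d)    # ensures ||x_s||^2 <= 1

V = np.zeros((d, d))
total = 0.0
for s in range(T):
    x = X[s]
    total += x @ np.linalg.solve(rho * np.eye(d) + V, x)  # uses V_{s-1}, then update
    V += np.outer(x, x)

_, logdet = np.linalg.slogdet(np.eye(d) + V / rho)
print(total, "<=", 2 * logdet)
assert total <= 2 * logdet + 1e-9
```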
We take the standard approach of (a) first bounding the instantaneous regret and then (b) applying the Cauchy-Schwarz inequality to bound the aggregation of terms. To start, for any t ∈ [T], define the "instantaneous regret" as r_t := f^∗(x^∗) - f^∗(X_t), where we recall x^∗ := max_x ∈𝒳 f^∗(x). By applying Lemma <ref>, we have with probability at least 1 - δ that r_t = f^∗(x^∗) - f^∗(X_t) ≤ f_t(X_t) - f^∗(X_t) = f_t(X_t) - f_{t-1}(X_t) + f_{t-1}(X_t) - f^∗(X_t) = ⟨f_t - f_{t-1}, k(·, X_t)⟩_H - ⟨ f_{t-1} - f^∗, k(·, X_t)⟩_H ≤ ‖(ρ𝕀_H + V_{t-1})^-1/2k(·, X_t)‖_H( ‖(ρ𝕀_H + V_{t-1})^1/2(f_t - f_{t-1})‖_H + ‖(ρ𝕀_H + V_{t-1})^1/2(f_{t-1} - f^∗)‖_H ) ≤ 2U_{t-1}‖(ρ𝕀_H + V_{t-1})^-1/2k(·, X_t)‖_H, where f_t and f_{t-1} are as in Algorithm <ref>. Note that, in the above, we apply Lemma <ref> in obtaining the first inequality (which is the "optimism in the face of uncertainty" part of the bound), and additionally in obtaining the last inequality. The second to last inequality follows from applying Cauchy-Schwarz. With the above bound, we can apply the Cauchy-Schwarz inequality again to see R_T = ∑_t = 1^T r_t ≤√(T∑_t = 1^T r_t^2) ≤ U_T√(2T∑_t = 1^T ‖(ρ𝕀_H + V_{t-1})^-1/2k(·, X_t)‖_H^2) ≤ U_T√(2Tlog det(𝕀_H + ρ^-1V_T)) = (σ√(2log(1/δ√(det(𝕀_H + ρ^-1V_T)))) + ρ^1/2D)√(2Tlog det(𝕀_H + ρ^-1V_T)) ≤(σ√(2log(1/δ)) + σ√(2γ_T(ρ)) + ρ^1/2D)√(4Tγ_T(ρ)) = σγ_T(ρ)√(8T) + D√(4ργ_T(ρ) T) + σ√(8Tlog(1/δ)) = O(γ_T(ρ)√(T) + √(ργ_T(ρ)T)). In the above, the second inequality follows from the second part of Lemma <ref>, the following equality follows from substituting in U_T, and the final inequality follows from the definition of the maximum information gain γ_T(ρ) and the fact that √(a + b)≤√(a) + √(b) for all a, b ≥ 0. The last, big-Oh bound is straightforward. With this, we have proven the first part of the theorem. Now, suppose the kernel k experiences (C, β)-polynomial eigendecay. Then, by Fact <ref>, we know that γ_T(ρ) ≤((CB^2T/ρ)^1/βlog^-1/β(1 + LT/ρ) + 1)log(1 + LT/ρ) = O((T/ρ)^1/β). We aim to set ρ = (T/ρ)^1/β, which occurs when ρ = T^{1/(1 + β)}. When this happens, we have (T/ρ)^1/β√(T) = T^{1/(1 + β) + 1/2} = T^{(3 + β)/(2 + 2β)}. Applying this, we have that R_T = O(γ_T(ρ)√(T) +√(ργ_T(ρ) T)) = O(T^{(3 + β)/(2 + 2β)}), which, in particular, is sublinear for any β > 1. Thus, we are done.
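To make the linear-algebraic steps above easy to check, the following is a small, self-contained numerical sanity check of the two identities the corollary relies on, det(𝕀_H + ρ^-1V_t) = det(I_t + ρ^-1K_t) and ‖(ρ𝕀_H + V_t)^-1/2S_t‖_H^2 = ‖(I_t + ρ K_t^-1)^-1/2ϵ_1:t‖_2^2, using a random finite-dimensional feature map in place of the RKHS. The dimensions, seed, and variable names below are illustrative choices of ours, not quantities from the paper.

```python
import numpy as np

# Numerical sanity check of the two identities used in the corollary above,
# with a random finite-dimensional feature map standing in for the RKHS:
#   det(I + rho^{-1} V_t) = det(I_t + rho^{-1} K_t)
#   ||(rho I + V_t)^{-1/2} S_t||^2 = ||(I_t + rho K_t^{-1})^{-1/2} eps_{1:t}||^2
rng = np.random.default_rng(0)
t, d, rho = 5, 8, 0.7                # t rounds, d features (d > t so K_t is invertible)
Phi = rng.normal(size=(t, d))        # row s plays the role of k(., X_s)
eps = rng.normal(size=t)             # noise vector eps_{1:t}

V = Phi.T @ Phi                      # covariance operator V_t = Phi_t^T Phi_t
K = Phi @ Phi.T                      # Gram matrix K_t = Phi_t Phi_t^T
S = Phi.T @ eps                      # S_t = Phi_t^T eps_{1:t}

det_lhs = np.linalg.det(np.eye(d) + V / rho)
det_rhs = np.linalg.det(np.eye(t) + K / rho)

norm_lhs = S @ np.linalg.solve(rho * np.eye(d) + V, S)              # ||(rho I + V)^{-1/2} S||^2
norm_rhs = eps @ np.linalg.solve(np.eye(t) + rho * np.linalg.inv(K), eps)

assert np.isclose(det_lhs, det_rhs) and np.isclose(norm_lhs, norm_rhs)
print(det_lhs, det_rhs, norm_lhs, norm_rhs)
```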
http://arxiv.org/abs/2307.03926v1
20230708075122
Enhancing Room Security and Automating Class Attendance Using ID Cards
[ "Shravan Bhat", "Nithin R", "Pranav S" ]
cs.CR
[ "cs.CR", "cs.HC", "none", "J.7" ]
Enhancing Room Security and Automating Class Attendance Using ID Cards Shravan Bhat – 171EE240, Nithin R – 171EC131, Pranav S - 171EC135 August 12, 2023 ======================================================================== With the rapid advancements in technology, automation has emerged as the future of human endeavors. From simple tasks like attendance management to complex security systems, automation has the potential to revolutionize various aspects of our lives. This research paper explores the implementation of a method aimed at enhancing room security in hostels and automating class attendance using ID cards. In this study, we propose a system that utilizes the unique identity information stored in ID cards for various security and check-in tasks. By integrating RFID (Radio-Frequency Identification) reader technology, GSM modules, Node MCU, and Arduino, we create a comprehensive solution. The RFID reader scans the ID card, extracting the relevant information and verifying the user's identity. The data is then transmitted via the GSM module to a central database, ensuring real-time monitoring and security measures. Moreover, the system also enables the automation of class attendance. By utilizing the same ID cards, students can simply tap their cards on a reader placed in the classroom. This information is recorded automatically, eliminating the need for manual attendance taking and reducing errors and time consumption. This research project highlights the practical implementation of ID card technology to enhance room security in hostels and automate class attendance processes. By leveraging the power of automation, we aim to streamline administrative tasks, improve security measures, and optimize efficiency in educational institutions and other relevant settings. ID card, RFID reader, GSM Module, Node MCU, Arduino § INTRODUCTION Security and privacy are basic needs for any human being. India's population has been increasing exponentially since the 19th century, and hence the student intake of colleges has been increasing every year. Automation would help in trivial tasks like taking attendance or making payments in a locality. Privacy and security are also an issue in many colleges, and adding layers of security to rooms and safe boxes would help prevent petty theft. The main motivation of this project is to establish an attendance system within our college campus and a cash-less payment system, and also to implement safer, key-less room locking systems in our university. § LITERATURE SURVEY §.§ Survey of the State of the Art Smart-card-based door lock systems, such as the NFC (Near Field Communication) cards used in hotel rooms, are currently available, but they are expensive and less secure; using them can be very costly as they require complex hardware. Automated attendance systems that use fingerprints as the ID are also available, but implementing them on a large scale, such as a college, is difficult and would turn out to be rather expensive. §.§ Features * An RFID card and RFID reader are included in the door lock system. The door unlocks only when an authorized card is scanned and the corresponding PIN is entered using the keypad provided. * The locking and unlocking of the door latch is implemented using servo motors, stepper motors and gears. * When a card is scanned, an alert SMS is sent to the registered phone number and an alert notification is generated in the app. If an authorized card is scanned without the user's consent, the user can shut down the system by sending a message from his phone. 
* The same RFID card can be used in classrooms as a check-in attendance system. § DETAILS OF IMPLEMENTATION §.§ Components Used * SIM900 GSM module * Arduino Uno * MFRC522 RFID reader and RFID cards * Servo motors, stepper motors and gears * 4x4 keypad * Buzzer and power adaptor * Node MCU * LEDs and resistors * I2C LCD display §.§ Working The smart ID card system is divided into three sub-systems: 1) Security System, 2) Payment System, 3) Attendance System. * Security System: The RFID reader communicates with the Arduino through the SPI protocol. The I2C LCD communicates with the Arduino through the I2C protocol. The keypad is connected to the Arduino; the 4x4 keypad has 8 connections, but the last column of the keypad is not required, since we only need numbers for the password. For powering the SIM900 module, a 5V, 2A power adaptor is used. Once the SIM900 module is powered, the power light lights up, and on pressing the power key, the status LED lights up. Then the phone is paired with the module. GSM Module: GSM is a mobile communication modem; it stands for Global System for Mobile communication (GSM). It is a widely used mobile communication system in the world. GSM is an open, digital cellular technology used for transmitting mobile voice and data services. A GSM module is used here since it can communicate with a mobile phone, and the data it receives can be processed and sent to the Arduino. I2C Protocol: I2C is a serial, two-wire interface protocol used to connect low-speed devices such as microcontrollers, I/O interfaces and other similar peripherals in embedded systems. * Payment System: The RFID reader communicates with the Node MCU through the SPI protocol. The Node MCU is connected to a web server where the data is stored. When the RFID card is scanned and the PIN is entered, the balance amount is displayed on the screen. Node MCU: This device is used instead of only an Arduino Uno because the Node MCU has a Wi-Fi module which can be connected to the web server. The ESP8266 can be controlled from the local Wi-Fi network or from the internet (after port forwarding). The ESP-01 module has GPIO pins that can be programmed to control a device or execute code over the internet. The module can be programmed using an Arduino through the serial pins (RX, TX). * Attendance system: When the ID is scanned on the RFID reader, the student name stored in the RFID card is printed on the serial monitor. It is made sure that the same ID can't be registered twice by comparing it with the already registered IDs. An external app is used to store the output from the serial monitor, and the output can be saved onto the computer. § RESULTS AND DISCUSSIONS §.§ Security System The door lock security system was successfully implemented. Only when an authorized ID card is scanned on the RFID reader and the correct password is entered on the keypad does the servo motor turn and the door unlock. Consequently, a message is sent to the owner saying that the door is unlocked. After a few seconds the door locks back, turning the servo motor to the original position. When the owner is inside the room, he/she can use a switch present inside the room to unlock the door; subsequently, after a few seconds the door locks back, turning the servo motor back to the original position. If a wrong ID card or wrong password is entered, the whole system locks down and an alarm is sounded using a buzzer. 
A message is also sent to the owner saying that there was an attempt to breach the security system. The security system fails to detect an intruder when the RFID card's ID is changed to the owner's ID. It will also fail if the owner is negligent and reveals the password to others. §.§ Payment System When an ID is scanned on the RFID reader, the value stored in the RFID card is sent via the Wi-Fi module over the internet to the database, along with the date and time taken from the internet. This stored value can be changed by the vendor or the shopkeeper to the new balance amount. The changed balance amount is then updated in the ID card through the ESP8266 Wi-Fi module. The drawback of this system is that the balance can be changed to a wrong value, giving a wrong balance. §.§ Attendance system The attendance system was successfully implemented. When a registered ID card is scanned on the RFID reader, the ID card number is sent to the database through the Node MCU Wi-Fi module. The database saves the student's name and ID number, and the list of students present can be retrieved from the database. As a fail-safe for the above method, the RFID reader reads the ID number of the card and compares it with the student register; if the ID is present, it prints the student's name onto the serial monitor (a minimal sketch of this check is given below, after the references). An external app saves the logs of the serial monitor as text. This method would fail if some other student scans the card even though the owner is not present in the class, so the scanner must be monitored while students are scanning their cards. § ACKNOWLEDGMENT With immense pleasure we present "Enhancing Room Security and Automating Class Attendance Using ID Cards" as a part of the curriculum of "Embedded Systems and Design" under the Department of Electronics and Communication Engineering, National Institute of Technology Karnataka. We wish to thank all the people who gave us unending support. We express our profound thanks to our Professor, Dr. Ramesh Kini M., and all those who have indirectly guided and helped us in the preparation of this project. § REFERENCES b1 How RFID Works, https://electronics.howstuffworks.com/gadgets/high-tech-gadgets/rfid.htm b2 Specification of the ESP8266, https://randomnerdtutorials.com/esp8266-adc-reading-analog-values-with-nodemcu/ b3 Datasheet of the Arduino Uno, https://www.farnell.com/datasheets/1682209.pdf
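As a companion to the attendance-system description above, here is a minimal, hardware-free Python sketch of the duplicate-check bookkeeping: a scanned UID is accepted only if it appears in the student register and has not been recorded in the current session. The UIDs, names, and function name are invented placeholders; in the actual system this logic runs on the Arduino/Node MCU and the serial-monitor log.

```python
# Hardware-free sketch of the attendance bookkeeping described above: a scanned
# card UID is accepted only if it is in the student register and has not already
# been recorded in this session. UIDs and names are invented placeholders.
register = {"04A1B2C3": "Student A", "04D4E5F6": "Student B", "04778899": "Student C"}
present = set()

def record_scan(uid: str) -> str:
    if uid not in register:
        return "unknown card - rejected"
    if uid in present:
        return register[uid] + " already registered"
    present.add(uid)
    return register[uid] + " marked present"

for uid in ["04A1B2C3", "04A1B2C3", "DEADBEEF"]:
    print(record_scan(uid))
```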
http://arxiv.org/abs/2307.05674v2
20230711180002
Quantum Carroll/fracton particles
[ "José Figueroa-O'Farrill", "Alfredo Pérez", "Stefan Prohazka" ]
hep-th
[ "hep-th", "cond-mat.stat-mech", "cond-mat.str-el", "gr-qc" ]
=1 compat=1.18 fillbetween calc,arrows,arrows.meta,cd,shapes.geometric,positioning,through,decorations.markings,decorations.text plain lemmaLemma proposition[lemma]Proposition theorem[lemma]Theorem corollary[lemma]Corollary *mainthmTheorem definition definition[lemma]Definition *remarkRemark ⇝ L>l<a,1]José Figueroa-O'Farrill,ORCID: https://orcid.org/0000-0002-9308-93600000-0002-9308-9360b,c,2]Alfredo PérezORCID: https://orcid.org/0000-0003-0989-99590000-0003-0989-9959a,3]and Stefan ProhazkaORCID: https://orcid.org/0000-0002-3925-39830000-0002-3925-3983[a]Maxwell Institute and School of Mathematics, The University of Edinburgh, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh EH9 3FD, Scotland, United Kingdom[b]Centro de Estudios Científicos (CECs), Avenida Arturo Prat 514, Valdivia, Chile[c]Facultad de Ingeniería, Arquitectura y Diseño, Universidad San Sebastián, sede Valdivia, General Lagos 1163, Valdivia 5110693, [email protected]@[email protected] classify and relate unitary irreducible representations (UIRs) of the Carroll and dipole groups, i.e., we define elementary quantum Carroll and fracton particles and establish a correspondence between them. Whenever possible, we express the UIRs in terms of fields on Carroll/aristotelian spacetime subject to their free field equations. We emphasise that free massive (or “electric”) Carroll and fracton quantum field theories are ultralocal field theories and highlight their peculiar and puzzling thermodynamic features. We also comment on subtle differences between massless and “magnetic” Carroll field theories and discuss the importance of Carroll and fractons symmetries for flat space holography. Quantum Carroll/fracton particles [ August 12, 2023 ================================== § INTRODUCTION In this work we classify and relate quantum Carroll and fracton particles, i.e., we classify unitary irreducible representations of the Carroll <cit.> and dipole <cit.> groups and describe them, whenever possible, as fields in Carroll spacetime or the aristotelian spacetime underlying the fracton system. This is a continuation of our earlier paper <cit.>, henceforth referred to as Part I, where we studied the classical elementary systems with Carroll and dipole (for us, “fracton”[We will not be able to give full justice to the broad field of “fractons” for which we refer the reader to <cit.> for reviews. ]) symmetries. In particular, in this paper we lift the Carroll/fracton correspondence proposed in Part I from the classical to the quantum realm. A Lie group G is said to be a symmetry of a quantum mechanical model if the underlying Hilbert space of states admits a unitary representation[This might be relaxed to only projective representations of G, but we can always restrict to honest representations by passing to a central extension of G. We shall assume that we have done this from now on. For the case of the Carroll and dipole groups there is no need to do this, since there are no nontrivial central extensions in 3+1 dimensions and above.] of G. In this sense it is typical to think of unitary representations as describing the symmetries of a quantum mechanical model, what we will somewhat loosely refer to as quantum symmetries in this paper. Indeed, some of these unitary representations may be constructed by geometrically quantising coadjoint orbits and, for some (but famously not all) Lie groups, all unitary representations arise in this way. 
In particular, if a group acts as symmetries of a given spacetime, we expect it to be realised as quantum symmetries of any natural quantum system defined on that spacetime. Conversely, it is often the case that unitary representations of a Lie group G can actually be realised on classical fields defined on a spacetime on which G acts by symmetries. In any unitary representation of a Lie group, the elements of the Lie algebra give rise to hermitian operators which may be thought of as physical observables of the corresponding quantum system and in whose spectrum we might be interested. For example, the generator of time translations gives rise to the hamiltonian, which governs the energy spectrum of the theory. Unitary representations need not be irreducible, but they typically decompose into irreducible components, which we may think of as the building blocks of the quantum symmetries of the given group. We call them the elementary quantum systems and they are the quantum counterpart of the coadjoint orbits describing the elementary classical systems. One of the earliest descriptions of elementary quantum systems is the Wigner classification <cit.> of unitary irreducible representations (UIRs, for short) of the Poincaré group, which describe the (free) particles we observe in nature, and forms a cornerstone of relativistic quantum field theory on Minkowski spacetime. Thus if we wish to study quantum mechanical models with a certain symmetry group, it is useful to first understand the elementary quantum systems of that group. This motivates the study of UIRs of the Carroll and dipole groups and allows us to propose a definition of what we mean by elementary quantum Carroll and fracton particles. We are able to treat carrollions and fractons simultaneously for the most part since free complex Carroll and fracton scalar theories and their symmetries essentially[The dipole group is a trivial central extension of the Carroll group.] coincide <cit.> (see also <cit.>). This led us in Part I to propose a correspondence, summarised in Table <ref>, between all elementary carrollions and fractons, which we show in this work to persist at the quantum level. Let us emphasise that out of these elementary ingredients one can of course build composite objects (which are then reducible), to which the above correspondence extends. The elementary dipoles should be contrasted with composite dipoles built out of two elementary monopoles, both of which have mobility, in contradistinction to isolated monopoles. They correspond to two Carroll particles of opposite energy <cit.>. Seen from this perspective it is even less surprising that composite Carroll particles with opposite mass can move <cit.> (see also <cit.>). This is a manifestation of the fact that the dipole moment for nonzero charge depends on the choice of origin (see, e.g., <cit.>). Hence a monopole, which has nonzero charge (q ≠ 0), cannot move without changing the dipole moment. As soon as the charge is zero, e.g., by adding another monopole with opposite charge, mobility in ways that do not change the total dipole moment is restored. This shows that it can be useful to think about Carroll and fracton systems from complementary perspectives. Since carrollian physics is relevant for flat space holography at null <cit.>, timelike <cit.> or spacelike <cit.> infinity it might be interesting to understand them from a fracton perspective. 
We will provide further comments concerning these interesting topics in Section <ref>, but let us highlight that some of the structure of flat space holography can already be seen by using only Carroll symmetries, e.g., there is a radiative and non-radiative branch, related to the option of having vanishing or non-vanishing mass. In addition, the equations of the non-radiative branch share similarities with the massless or magnetic carrollian field theory. Another motivation comes from the successful and wide-ranging applications of tools that fall under the banner of “scattering amplitudes”. See, e.g., <cit.> for a review. To apply this remarkable toolkit to carrollian or fractonic theories, one needs first of all an understanding of the possible quantum particles, i.e., the “in” and “out” states of scattering amplitudes. The UIRs classified in this paper provide the complete answer to this question and it could be interesting to employ this technology. The construction of UIRs of the Carroll and dipole groups employed in this paper is essentially that pioneered by Wigner. The upshot of this method is that UIRs are carried by fields in momentum space, more precisely fields defined over orbits on momentum space of the “homogeneous” subgroup: e.g., Lorentz in the case of Poincaré. These fields transform under (unitary, irreducible) representations of the little group associated to these orbits. In the Poincaré case, from which we derive most of our intuition, the orbits are the mass shells and the little groups are the subgroups of the Lorentz group which preserve a given massive, massless or tachyonic momentum. There is however one crucial difference between the Poincaré and Carroll groups: in the latter, boosts commute and hence there are other choices of abelian subgroups from which we can induce. This is reflected in the existence of automorphisms of the Carroll group <cit.> which mix momenta and boosts. Although we will induce from characters of the translation group in this work, one could equally induce from characters of the group generated by boosts and time translations and in some cases, such as the centrons (in Carroll language) or elementary dipoles (in fracton language), it would perhaps be more natural to do that and hence express the corresponding UIRs as fields on centre-of-mass (or dipole-moment) space. It is natural to wish to describe these representations in terms of classical fields defined over the relevant spacetime: Minkowski in the case of Poincaré, the eponymous spacetime in the case of Carroll or the aristotelian spacetime in the case of the fractons <cit.>. Such fields transform according to representations of the homogeneous subgroup, which is the stabiliser of a chosen origin in the spacetime and hence in passing from the momentum space description to the spacetime description, there is always a choice to be made: we need to embed the representation of the little group (the so-called “inducing representation”) into some representation of the homogeneous subgroup, a process known as “covariantisation” in the Physics literature. The embedding representation need not be unitary; although a typical consideration is that it should be as small as possible and, in any case, finite-dimensional, in order to arrive at spacetime fields with a finite number of components. 
It typically happens that the embedding representation is of larger dimension than the inducing representation and hence that the spacetime field has more degrees of freedom than the momentum space field. One way to cut down to the required number of degrees of freedom is by imposing field equations on the spacetime fields, so that the sought-after irreducible representation is carried not by all spacetime fields, but only by those which obey their field equations. There is a systematic way to arrive at the field equations, once the inducing representation has been covariantised, in terms of a group-theoretical generalisation of the Fourier transform. It is well known from the case of the Poincaré group, that this procedure is the origin of many of the familiar relativistic free field equations: Klein–Gordon, Dirac, Maxwell, Proca, linearised Einstein,... In this paper we will give similar descriptions for some of the UIRs of the Carroll and dipole groups. In particular, we will see that some of the massless low-helicity UIRs of the Carroll group are given by solutions of the three-dimensional euclidean Helmholtz, Dirac and topologically massive Maxwell equations. This is perhaps not surprising in that the massless helicity UIRs of the Carroll group are actually UIRs of the three-dimensional euclidean group, but what is perhaps novel is the interpretation of these well-known partial differential equations as irreducibility conditions for three-dimensional euclidean fields and, by extension, for massless carrollian fields (or neutral fractons). It is worth highlighting the fact that the field equations we find do not necessarily agree with the Carroll-invariant field equations in the literature (see, e.g., <cit.>). The fundamental reason for this discrepancy is our different points of departure. Our principal aim in this paper is the classification of UIRs of the Carroll and dipole groups. These representations are given in terms of fields on momentum space and that suffices for the classification. Those fields have the precise number of degrees of freedom (roughly the dimension of the inducing representation) required to describe the UIRs. To express the UIRs in terms of spacetime fields, we must make a choice of how to covariantise the inducing representation and our approach has been to choose the simplest covariantisation: roughly, the one which adds the smallest number of extra degrees of freedom. For example, for massive Carroll UIRs (equivalently, charged fracton UIRs), the inducing representation is already covariant if we demand that the boosts (equivalently, the dipole generators) act trivially. This results in no additional field equations beyond the one coming from having fixed the energy. This may result in unfamiliar/surprising field transformation laws, since such an economical choice of covariantisation is not available for the Poincaré group. Indeed, what makes this possible is that the boosts commute in the homogeneous Carroll group, but do not in the Lorentz group, where the commutator of two boosts is a rotation. By contrast, in the approach where one departs from building invariant actions for spacetime fields, it is perhaps not obvious (and indeed would have to be checked) that the resulting field equations project onto an irreducible subrepresentation of the representation carried by the off-shell spacetime fields. This paper is organised as follows. 
In the remainder of this Introduction we provide a self-contained summary of the UIRs of the Carroll (Section <ref>) and dipole (Section <ref>) groups and whenever possible we summarise their description in terms of spacetime fields. Readers who are happy to skip some of the details could then continue with Section <ref> where we discuss Carroll and fracton quantum field theories and finish with the discussion in Section <ref>, where we highlight, e.g., the relevance for flat space holography. The more detailed treatment starts in Section <ref>, where we review the basic results of Part I on the coadjoint orbits of the Carroll group, their structure and the action of automorphisms. The construction of the UIRs starts in Section <ref>, based on the method of induced representations described in some detail in Appendix <ref>. As this topic is somewhat technical, we distill from the appendix a sort of algorithm to construct the UIRs and this is briefly recapped in Section <ref>. Section <ref> outlines the method and checks that the momentum space orbits admit invariant measures. Section <ref> works out the inducing representations: the UIRs of the little groups of the momentum orbits. Two of the little groups are themselves semidirect products and require iterating the method of induced representations. In Section <ref> we put everything together and list the UIRs of the Carroll group: divided into those with nonzero energy (termed “massive” in Part I and treated in Section <ref>) and those with zero energy (termed “massless” in Part I and treated in Section <ref>). In Section <ref> we comment on a more unified description of the massless UIRs as induced representations from a larger subgroup of the Carroll group. We end the section with a conjectural correspondence between the UIRs and the (quantisable) coadjoint orbits. In Section <ref> we describe some of the UIRs found in the previous section in terms of classical fields on Carroll spacetime. This requires a continuation of the brief recap of the method of induced representations, describing the covariantisation procedure and the group-theoretical Fourier transform, and contained in Section <ref>. We then do an example of a massive carrollian field (in Section <ref>) and of a massless carrollian field with helicity (in Section <ref>), which are the only two classes of UIRs which seem to admit a description in terms of finite-component carrollian fields. We work out the resulting field equations for the cases of helicities 0, 12 and 1 and recover the three-dimensional Helmholtz, Dirac equation and topologically massive Maxwell equations, respectively. Section <ref> is devoted to fractonic particles and fields. In Section <ref> we classify the UIRs of the dipole group by observing that there is a bijective correspondence between UIRs of the Carroll group and classes of UIRs of the dipole group distinguished solely by the fracton energy. We then describe some of these UIRs in terms of fields on aristotelian spacetime. We do the example of a charged monopole in Section <ref> and of what could be termed a neutral aristotelion in Section <ref>. In Section <ref> we discuss Carroll and fracton quantum field theories in a second-quantisation language, some of their similarities, and highlight the relation to ultralocal field theories <cit.>. Additionally, we comment on the intricate thermodynamic properties of these theories and show that there is a subtle difference between free massless Carroll theories and the magnetic Carroll theory. 
In Section <ref> we provide a discussion where we emphasise interesting connections to flat space holography and other intriguing topics for further exploration. The paper ends with two appendices: Appendix <ref> contains a short review of the method of induced representations, whereas Appendix <ref> contains some formulae in special coordinates for the 3-sphere adapted to the Hopf fibration, which we make use of in our discussion of massless UIRs of the Carroll group. §.§ Summary In this section we summarise the quantum Carroll and fracton particles and their field theories. In addition to the vacuum sector, UIRs fall into two broader classes outlined in Table <ref>: * II: massive Carroll particles Ĥ= E_0 and charged monopoles Q̂= q * III-V: massless Carroll particle Ĥ=0 and neutral fractons Q̂ = 0 In each case the properties of the particles are quite distinct and many of the curious physical features of carrollian and fracton theories can be traced back to this fact. We will present hermitian operators corresponding to our symmetries. They are related to the skew-hermitian Lie algebra generators via multiplication by i. §.§.§ Quantum Carroll particles and fields We are not the first to study the UIRs of the Carroll group, the massive UIRs were already constructed by Lévy-Leblond <cit.> and the massless sector was highlighted in <cit.> (see also Appendix A in <cit.>), but this work provides the first classification. A similar feat has already been accomplished for the Poincaré <cit.> and Galilei/Bargmann groups (we refer to the review <cit.> and references therein) and this work closes the final gap for quantum symmetries based on the maximally symmetric affine spacetimes <cit.>. We will now provide a summary of quantum Carroll particles, i.e., UIRs of the Carroll group, and discuss some of their properties. Further details are presented in Section <ref>. Broadly speaking they fall into two classes, massive carrollions (Ĥ=E_0) and massless carrollions (Ĥ=0), which have very distinctive features. We parametrise the Carroll group as g=g(R,,,s) = e^s H e^· e^· R where R is a rotation, denote the Carroll boost generators, the generators of spatial translations and H the generator of time translations. The Carroll Lie algebra is given by (i,j,k=1,2,3) [J_i, J_j] = ϵ_ijk J_k [J_i, B_j] = ϵ_ijk B_k [J_i, P_j] = ϵ_ijk P_k [B_i,P_j] =δ_ij H . The quantum Carroll particles, by which we mean UIRs of the (simply-connected) Carroll group, fall into several different classes listed below. The notation _0 denotes the non-negative integers and V_s stands for the complex spin-s irreducible representation of (3) ≅(2), of dimension 2s+1. I(s) vacuum sector with 2s ∈_0 with underlying Hilbert space V_s. When s=0 this represents the vacuum, whereas for s>0 these are spinning vacua. In this representation only the rotations act nontrivially and they do so via the spin-s irreducible representation. II(s,E_0) massive spin s with 2s ∈_0 and E_0 ∈∖{0} <cit.>. This shows that the spin of massive quantum Carroll particles is indeed quantised. The underlying Hilbert space is given by square-integrable functions ψ∈ L^2(Å^3,V_s) and ∈Å^3 parametrises the hyperplane in momentum space with E =E_0 (see Figure <ref>). The unitary action of G is given by[We could have added an additional label to our wavefunctions such that the specific E=E_0 hyperplane is explicit, e.g., we could have written ψ_E_0(). To reduce clutter we will leave the energy implicit.] 
(g ·ψ)() = e^i ( E_0 s + ·)ρ(R) ψ(R^-1( + E_0)) where R ↦ρ(R) denotes the spin-s representation of (3) and the inner product (ψ_1,ψ_2) = ∫_Å^3 d^3p <ψ_1(),ψ_2()>_V_s, where <-,->_V_s is an (2)-invariant hermitian inner product on V_s. The hermitian operators that correspond to our conserved charges are in this basis given by = - i ×/ + Ŝ B̂ = -i E_0/ Ĥ = E_0 P̂ = , where Ŝ are the infinitesimal generators of the spin-s representation ρ(R). Massive spin-s carrollions can then be labeled by Ĥ = E_0 and Ŝ^2 = s (s+1) , which are multiples of the identity. For massive carrollions we can also define a position operator X̂ X̂ = 1/E_0B̂ which agrees with the intuition that the centre of mass of a massive Carroll particle is the energy multiplied by the position and satisfies the canonical commutation relations [X̂_i,P̂_j] = - i δ_ij . We may alternatively diagonalise with respect to , which is related to the above via a Fourier transform (see (<ref>)) and express the representation in the “boost basis” as (g ·ψ̃)() = e^i ( - E_0 s + ·)ρ(R) ψ̃(R^-1( - E_0)) , where the inner product is now given by (ψ̃_1,ψ̃_2) = ∫_Å^3 d^3k <ψ̃_1(),ψ̃_2()>_V_s . We provide further details for this UIR in Section <ref>. This representation can also be described using fields on Carroll spacetime, i.e., as massive Carroll field theories. These V_s-valued fields ϕ(t,) are obtained from the V_s-valued momentum space fields ψ() via a group-theoretical Fourier transform which, in this case, agrees with the classical Fourier transform: ϕ(t,) = e^-i E_0 t∫_Å^3 d^3p e^-i·ψ() . The field ϕ satisfies the obvious (and only) field equation ϕ̣/ṭ = -i E_0 ϕ. The action of the Carroll group on the spacetime fields is given by (g ·ϕ)(t,) = ρ(R) ϕ(t-s-·( -), R^-1( -)) where we want to emphasise that the fields are scalars under boosts, since these act nontrivially only on the coordinates. When we do not restrict to just one orbit and allow both energies E=± E_0 we are led to ultralocal (quantum) field theories <cit.> or “electric Carroll field” theories <cit.>, as discussed in more detail in Section <ref>. III(n,p) massless helicity n/2 with real p>0 and n ∈, so the helicity is now quantised. The underlying Hilbert space consists of complex-valued functions[More precisely, square-integrable (relative to an (2)-invariant measure) sections of the line bundle 𝒪(-n) over the complex projective line, but the description in this summary suffices.] on the complex plane, where the action of G is given by (g ·ψ)(z) = e^i ·(z)( η + ξ z| η + ξ z|)^-nψ( ηz -ξη + ξ z). In this case z is a stereographic coordinate on the sphere =p, the action of time translations and boosts is trivial and (z), given in equation (<ref>), satisfies (z) ^2=p^2. The rotation group (2) acts on z via linear fractional transformations: R = [ η ξ; - ξ η ]∈ SU(2) acts as z ↦η z + ξη- ξ z . Consequently, B̂ = Ĥ=0, however P̂ = (z) so that these representations are indeed specified by the helicity n2 and P̂^2 = p^2. The inner product on the Hilbert space is given by <ψ_1, ψ_2> := ∫_2i dz ∧ d/(1+|z|^2)^2ψ_1(z)ψ_2(z) . This UIR can also be described using spacetime fields. One possibility is to covariantise the inducing representation of (1) of weight n with boosts acting trivially into the spin-|n/2|, representation V_|n/2| of (2) as the highest (if n≥ 0) or lowest (if n≤ 0) weight vectors in V_|n/2|. 
The V_|n/2|-valued spacetime fields ϕ(t,) are given in terms of the momentum space fields ψ(z) by ϕ(t,) = ∫_2i dz ∧ d/(1 + |z|^2)^2 e^-i·(z)ρ(σ(z))ψ(z), where σ(z) ∈(2) is defined by σ(z) = 1/√(1+|z|^2)[ z -1; 1 ]. Notice that the spacetime fields do not depend on t, all massless Carroll fields fulfil ϕ̣/ṭ = 0 , so they are essentially euclidean three-dimensional fields. The action of G factors through the action of the three-dimensional euclidean group: (g·ϕ)() = ρ(R) ϕ(R^-1( - )) , where we see that boosts act trivially. The additional field equations which project to the irreducible subrepresentation can be worked out for the lowest values of the helicity. For helicity 0 we obtain the Helmholtz equation (△ + p^2) ϕ() = 0, for a scalar field, where △ is the laplacian acting on functions in three-dimensional euclidean space. For helicity 1/2, we obtain the three-dimensional euclidean Dirac equation ( + i p ) ϕ()= 0, where now ϕ() is a 2-component field taking values in the spin-1/2 representation of (2). Finally, for helicity 1 we find the topologically massive Maxwell equation of <cit.>: ∇×ϕ = p ϕ, where ϕ is now a three-dimensional vector field, which is metrically dual to the Hodge dual of the Maxwell field-strength. III'(n,k) centrons with real k>0 and n ∈. They are in many ways analogous to the massless carrollions just described, so we will be brief. The underlying Hilbert space consists again of complex-valued functions on the complex plane and the action of G is given by (g ·ψ)(z) = e^i ·(z)( η + ξ z| η + ξ z|)^-nψ( ηz -ξη + ξ z), where z is now a stereographic coordinate on the sphere =k and is (z) given in equation (<ref>) with p ↦ k. In this case the action of the time and spatial translations is trivial and the representations are uniquely specified by P̂ = Ĥ=0, B̂^2 = k^2 and n∈. The inner product on the Hilbert space is again given by (<ref>). We can also write down field theories for the centrons and they are mutatis mutandis the same as for the massless carrollions, with the interesting twist that they live naturally in “centre-of-mass space”. In this sense they are more reminiscent of internal degrees of freedom, such as spin. IV_±(n,p,k) (anti)parallel massless helicity n/2 with n ∈ and real p,k >0. The underlying Hilbert space is again given by complex-valued functions on the complex plane and the action of G is given by (g ·ψ)(z) = e^i (±kp)·(z)( η + ξ z| η + ξ z|)^-nψ( ηz -ξη + ξ z). In this case z is a stereographic coordinate on the sphere =p. By inspection we see that III(n,p) is the limit of IV_±(n,p,k) as k→ 0, which results from formally putting k=0 in the above expression for g·ψ. The action of time translations is trivial, consequently Ĥ=0. For the momentum and centre-of-mass operators we obtain P̂ = (z) and B̂ = ±kp(z), respectively. Since = ±kp, the sign tells us whether we are in the parallel (+) or antiparallel (-) cases. In summary, we can characterise these UIRs by P̂^2 = p^2 and B̂^2 = k^2 and the sign of P̂·B̂. The inner product on the Hilbert space is again given by (<ref>). V_±(p,k,θ) generic massless with real p, k>0 and θ∈ (0,π). It is interesting to note that there are no discrete quantum numbers for the generic massless particles. The underlying Hilbert space is L^2(S^3, ), which are the square-integrable functions on the round 3-sphere with values in a one-dimensional unitary representation of the nilpotent subgroup of G generated by boosts and translations. 
The unitary character of this representation is such that χ( e^s H + · e^·) = e^i ( · + ·), where = (0,0,p) and = (ksinθ,0,kcosθ). We identify S^3 with the (2) subgroup of G and we write the action of g = g(R,,,s) ∈ G on L^2(S^3,) as (g ·Ψ)(S) = e^i (· S + · S)Ψ(R^-1S), where S ∈(2). The inner product is given by <Ψ_1,Ψ_2> = ∫_S^3 dμ(S) Ψ_1(S)Ψ_2(S), with dμ(S) the volume form of a round metric on S^3, or equivalently a bi-invariant Haar measure on (2). This representation breaks up as the orthogonal direct sum of two UIRs: L^2_±(S^3,), where Ψ∈ L^2_±(S^3,) if and only if Ψ(-S) = ±Ψ(S) for all S ∈(2). The sign labels two inequivalent quantisations of the same coadjoint orbit, a phenomenon typically associated to a disconnected stabiliser, which is indeed the underlying reason here too as discussed in Section <ref>. In summary, apart from that sign, the representation is uniquely specified by Ĥ = 0 P̂^2 = p^2 B̂^2 = k^2 P̂·B̂ = p k cosθ , where P̂ = S and B̂=S. From the point of view of path integral quantisation the subtleties in the quantisation of these orbits derives from the intricate constraint structure of the orbits, which are basically the classical analog of (<ref>), as in Section 3.5 of Part I. §.§.§ Quantum fracton particles and fields In this section we summarise the UIRs of the dipole group and some of their field-theoretic realisations on aristotelian spacetime. To the best of our knowledge there have been no attempts towards a classification of unitary irreducible representations of the dipole group. The dipole Lie algebra[A better name might be monopole-dipole algebra, since the particles described by this algebra include monopoles as well as dipoles.] is given by [J_i, J_j] = ϵ_ijk J_k [J_i, P_j] = ϵ_ijk P_k [J_i, D_j] = ϵ_ijk D_k [D_i,P_j] =δ_ij Q , with an additional generator H_F which is central and most notably the exchange of center-of-mass B_i with dipole moment D_i and Carroll energy H with charge Q, as shown in Table <ref>. Let us now discuss the quantum generalisation of the correspondence between Carroll and fracton particles <cit.>. As explained in Section <ref>, UIRs of the dipole group are in bijective correspondence with the UIRs of the Carroll group, except that we extend them to a UIR of the dipole group by declaring that e^s H_F should act via the unitary character χ(e^s H_F) = e^i s E for some E∈ which is to be interpreted as the fracton energy. We therefore use the same notation, but replacing the Carroll energy E_0 with the monopole charge q and the magnitude of the centre-of-mass k with the magnitude of the dipole moment d and adding a label E: hence the UIRs of the dipole group are I(s,E), II(s,q,E), III(n,p,E), III'(n,d,E), IV_±(n,p,d,E) and V_±(p,d,θ,E), as can be seen in Table <ref>. Monopoles with charge q and spin s. For example, monopoles of charge q≠ 0 and spin s (where 2s is a non-negative integer) are given by the UIR II(s,q,E). The underlying Hilbert space is given by square-integrable functions ψ∈ L^2(Å^3,V_s) which means that they are functions on the hyperplane in momentum space ∈Å^3 with fixed charge q and energy E (see Figure <ref> with E ↦ q). They are valued in V_s, i.e., in the complex spin-s UIR of (2). The action of the dipole group is given as follows. If we let g=g(R,,,θ,s) = e^s H_F + θ Q + · e^· R denote the generic element of the dipole group, we have, for ψ a V_s-valued field, that (g ·ψ)() = e^i ( q θ + E s + ·)ρ(R) ψ(R^-1( + q)) . 
Let us emphasise that the dipole transformation acts as expected ↦ + q and ρ(R) is a manifestation of the fact that these are spin s monopoles. We could have written ψ_q,E() to emphasise that our functions are restricted to these specific charge q and energy E. The infinitesimal action of (<ref>) is given by = - i ×/ + Ŝ Q̂ = q D̂ = -i q / Ĥ = E P̂ = , which are hermitian operators with respect to the inner product (ψ_1,ψ_2) = ∫_Å^3 d^3p <ψ_1(),ψ_2()>_V_s . The UIRs can then be uniquely labeled by Q̂ = q Ĥ = E Ŝ^2 = s (s+1) , which are multiples of the identity. We can also define a position operator X̂ X̂ = 1/qD̂ which agrees with the intuition that the dipole moment is given by the charge times the position. The position operator satisfies the canonical commutation relation [X̂_i,P̂_j] = - i δ_ij . We may alternatively diagonalise with respect to , which is related to the above via a Fourier transform (see (<ref>)) and express the representation in the “dipole basis” (g ·ψ̃)() = e^i ( - E s + qθ +·)ρ(R) ψ̃(R^-1( - q)) , where the dipole moment is, as expected, shifted by the translations.[This choice of basis was also employed in Appendix D in <cit.>.] The inner product is then given by (ψ̃_1,ψ̃_2) = ∫_Å^3 d^3d <ψ̃_1(),ψ̃_2()>_V_s . We may describe these UIRs in terms of fields on the aristotelian spacetime <cit.> with coordinates (t,). The field ϕ(t,) is obtained from ψ() via a Fourier transform ϕ(t,) = e^-iE t∫_Å^3 d^3p e^-i·ψ() , and the action of the generic element g of the dipole group in equation (<ref>) is given by (g ·ϕ)(t,) = e^i q(θ + · ( - ))ρ(R) ϕ(t-s, R^-1( - )) , with ρ the spin-s representation of (2). In particular, pure charge and dipole transformations act as expected via a phase ϕ(t,) ↦ e^i q(θ + ·)ϕ(t, ) . The only field equation is ϕ̣/ṭ = -iE ϕ. Readers who are happy to skip the details of the classification of the UIRs could continue with our discussion of Carroll and fracton quantum field theories in Section <ref>. § REVIEW OF COADJOINT ORBITS OF THE CARROLL GROUP In Part I we classified the coadjoint orbits of the (3+1)-dimensional Carroll group. The Carroll Lie algebra is the ten-dimensional real Lie algebra spanned by J_i,B_i,P_i,H where i=1,2,3 subject to the following non-zero brackets: [J_i, J_j] = ϵ_ijk J_k [J_i, B_j] = ϵ_ijk B_k [J_i, P_j] = ϵ_ijk P_k [B_i,P_j] =δ_ij H , where the Levi-Civita symbol ϵ_ijk is normalised so that ϵ_123=1. The connected Carroll group G is isomorphic to a semidirect product G ≅ K ⋉ T, where K ≅(3) ⋉^3 and T ≅^4. The Lie algebra of T is spanned by P_i, H and the Lie algebra of K by J_i, B_i. The group K is isomorphic to the (connected) three-dimensional euclidean group. Elements α∈^* in the dual of the Carroll Lie algebra are parametrised by the “momenta” of classical particles; that is, α = (, , , E) where = <α,J> is the angular momentum, = <α, B> is the centre of mass, = <α, P> is the linear momentum and E = <α, H> is the energy. Coadjoint orbits belong to several classes distinguished in the first instance by the value of the Casimir elements H and W^2, which is the euclidean norm of W_i := H J_i + ϵ_ijkP_j B_k . On α = (, , , E), H(α) = E and W^2(α) = E + ×^2. Notice that since the energy is constant on each orbit, it is trivially bounded below, that being a typical physical requirement. See also Figure <ref>. In Part I, we arrived at the classification of coadjoint orbits displayed in Table <ref>.
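As a quick consistency check of the massive Carroll UIR summarised above — where the hermitian operators act on momentum-space wavefunctions as P̂_j = p_j, B̂_j = -i E_0 ∂/∂p_j and X̂ = B̂/E_0 — the following SymPy sketch re-derives the stated canonical commutation relation [X̂_i, P̂_j] = -i δ_ij, together with the fact that [B̂_i, P̂_j] is proportional to the Carroll energy Ĥ = E_0. The script and its variable names are ours; it only checks relations quoted in the summary, under the stated hermitian-operator conventions.

```python
import sympy as sp

# Momentum-space realization of the massive Carroll UIR operators quoted above:
# P_j = p_j (multiplication), B_j = -i E0 d/dp_j, X_j = B_j / E0.
p1, p2, p3, E0 = sp.symbols('p1 p2 p3 E0', real=True)
p = (p1, p2, p3)
psi = sp.Function('psi')(p1, p2, p3)   # generic test wavefunction on the E = E0 hyperplane
I = sp.I

P = [lambda f, j=j: p[j] * f for j in range(3)]
B = [lambda f, j=j: -I * E0 * sp.diff(f, p[j]) for j in range(3)]
X = [lambda f, j=j: -I * sp.diff(f, p[j]) for j in range(3)]      # X = B / E0

def comm(A, C, f):
    return sp.expand(A(C(f)) - C(A(f)))

for i in range(3):
    for j in range(3):
        delta = 1 if i == j else 0
        # canonical commutation relation [X_i, P_j] = -i delta_ij, as stated in the summary
        assert sp.simplify(comm(X[i], P[j], psi) + I * delta * psi) == 0
        # [B_i, P_j] = -i delta_ij E0, i.e. proportional to the Carroll energy H = E0
        assert sp.simplify(comm(B[i], P[j], psi) + I * delta * E0 * psi) == 0
print("commutation relations verified")
```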
http://arxiv.org/abs/2307.04981v1
20230711023946
A Multi-view Impartial Decision Network for Frontotemporal Dementia Diagnosis
[ "Guoyao Deng", "Ke Zou", "Meng Wang", "Xuedong Yuan", "Sancong Ying", "Huazhu Fu" ]
cs.CV
[ "cs.CV" ]
National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Sichuan, China College of Computer Science, Sichuan University, Sichuan, China Institute of High Performance Computing, A*STAR, Singapore A Multi-view Impartial Decision Network for Frontotemporal Dementia Diagnosis Guoyao Deng1, Ke Zou1,3, Meng Wang3, Xuedong Yuan2, Sancong Ying2 and Huazhu Fu3 August 12, 2023 ==================================================================================== Frontotemporal Dementia (FTD) diagnosis has progressed successfully using deep learning techniques. However, current FTD identification methods suffer from two limitations. Firstly, they do not exploit the potential of multi-view functional magnetic resonance imaging (fMRI) for classifying FTD. Secondly, they do not consider the reliability of the multi-view FTD diagnosis. To address these limitations, we propose a reliable multi-view impartial decision network (MID-Net) for FTD diagnosis in fMRI. Our MID-Net provides confidence for each view and generates a reliable prediction without any conflict. To achieve this, we employ multiple expert models to extract evidence from the abundant neural network information contained in fMRI images. We then introduce the Dirichlet Distribution to characterize the expert class probability distribution from an evidence level. Additionally, a novel Impartial Decision Maker (IDer) is proposed to combine the different opinions inductively to arrive at an unbiased prediction without additional computation cost. Overall, our MID-Net dynamically integrates the decisions of different experts on FTD disease, especially when dealing with multi-view high-conflict cases. Extensive experiments on a high-quality FTD fMRI dataset demonstrate that our model outperforms previous methods and provides high uncertainty for hard-to-classify examples. We believe that our approach represents a significant step toward the deployment of reliable FTD decision-making under multi-expert conditions. We will release the codes for reproduction after acceptance. § INTRODUCTION Frontotemporal Dementia (FTD) has become the second most common type of presenile dementia, causing personality changes, progressive behavioral problems, and cognitive impairment in tens of millions of the elderly. FTD includes a range of subtypes, and this heterogeneity hampers the diagnosis process. Thus, early and accurate diagnoses are of vital importance for comprehending the disease process and developing disease-modifying treatments <cit.>. For deep learning computer-aided diagnosis (CAD) based on fMRI images, since fMRI images inherently offer multiple perspectives on brain neural network activity, multi-view input can enable the network to learn abundant brain activity information. However, two major problems exist in current deep learning FTD classification: single-view classification fails to extract sufficient brain activity information, and fusion strategies in trusted multi-view classification do not properly balance conflicts. A variety of CAD methods are currently used in fMRI image diagnosis <cit.>. Traditional machine learning CAD methods require hand-crafted features and feature selection, which ignores the abundant original information hidden in fMRI images <cit.>. Deep learning CAD methods using networks like CNNs mostly focus on directly extracting information from fMRI images without feature engineering <cit.>. Sarraf et al. 
<cit.> used LeNet-5 to classify Alzheimer's Disease (AD) from healthy controls using single-view 2D images converted from fMRI data. Ramzan et al. <cit.> applied ResNet to classify AD with the same form of input data. These approaches cannot correctly estimate the confidence of their predictions because of the drawback of softmax. Moreover, the promise of multi-view classification in fMRI images has not been investigated. To this end, a reliable model should both recognize sufficient brain neural network information and provide accurate confidence. Multi-view classification is mainly divided into early fusion, late fusion, and score fusion according to the fusion strategy <cit.>. Confidence-based fusion proposed by Han et al. <cit.> is a late fusion strategy that combines decisions at the evidence level. It combines multi-view information with Dempster-Shafer theory, aiming to provide trusted predictions with confidence. Although this method did promote both classification reliability and robustness, the fatal "Zadeh's problem" remained unsolved <cit.>. When two opinions that conflict with each other are combined using DS-Combine <cit.>, the combined result becomes counter-intuitive and baseless. This reveals that DS-Combine is inapplicable to FTD diagnosis, because such results may affect the doctor's judgment and even delay the diagnosis and effective treatment of FTD <cit.>. A balanced and risk-aware method is needed. Based on the above analysis, we propose a credible Frontotemporal Dementia (FTD) multi-view classification method in this paper. Our model infers confidence for each view's prediction and properly handles the complementary and conflicting information among them, providing a more accurate and reliable diagnosis of FTD. To estimate the confidence of each view, we adopt evidential deep learning. Furthermore, we propose the Impartial Decision Maker (IDer) to solve "Zadeh's problem" in current trusted multi-view classification, avoiding high-risk and counter-intuitive predictions. This approach is a crucial step towards safe and trusted FTD diagnosis and deployment. To the best of our knowledge, we are the first to address conflict in FTD multi-view classification. Our contributions can be summarized as follows: (1) We propose a novel multi-view impartial decision method (MID-Net) for Frontotemporal Dementia diagnosis and classification based on rs-fMRI images. (2) We introduce the Impartial Decision Maker (IDer) for sample-adaptive multi-view integration, which combines multi-view information at the evidence level and forms a unified opinion without any conflict. (3) We conduct extensive experiments on the FTD dataset[The dataset has IRB certification.] to verify the performance of our MID-Net and the effectiveness of uncertainty estimation. § METHOD §.§ Overall framework & Uncertainty quantification The overall framework is shown in Fig. <ref>. In order to make full use of the brain activity information in rs-fMRI images, we first deploy backbone networks for three sectional views. After feature extraction, the models form opinions from the collected evidence through a Dirichlet distribution. At the final decision level, we utilize the IDer to complete the trusted fusion. We now delve into the details. Evidential Classifier: The drawback of the maximum-likelihood classifier has been discussed above: the point estimate of softmax produces over-confident erroneous predictions when it faces out-of-distribution samples. 
When such a classifier is deployed in CAD, it can lead to ineffective treatment and even irreparable consequences. In order to quantify the confidence behind every choice our model makes for a single view, we utilize evidential classifiers for more reliable predictions, which infer the strength of evidence to quantify belief masses and uncertainty under a Dirichlet distribution <cit.>. Dirichlet Distribution: In the theory of Subjective Logic <cit.>, the Dirichlet distribution formalizes the model's opinions by assigning a belief mass to any subset of the frame of discernment and treating the uncertainty mass as an explicit parameter. This type of uncertainty mass expresses "I do not know" over all possible states. More specifically, in a frame of K mutually exclusive singletons (e.g., class labels), the K belief masses and the uncertainty mass of each view are all non-negative and sum to one, as: u^v + ∑_k=1^K b_k^v = 1, where u^v ≥ 0 and b_k^v ≥ 0 for k = 1, 2, ⋯, K denote the overall uncertainty and the belief of the k-th class, respectively. The evidence e^v = [e_1^v, ⋯, e_K^v] induces the parameters α_k^v of the Dirichlet distribution in the theory of subjective logic, i.e., α_k^v = e_k^v + 1. The belief mass b_k^v and the uncertainty mass u^v are computed as: b_k^v = (e_k^v + 1)/S^v = α_k^v/S^v and u^v = K/S^v, where S^v = ∑_k=1^K α_k^v = ∑_k=1^K (e_k^v + 1) is referred to as the Dirichlet strength. For each view, the more evidence observed for a possible class from a sample, the greater the corresponding belief. When little evidence is gained, greater uncertainty is assigned to this view. Composing opinions via the Dirichlet distribution avoids making overconfident false predictions, and the estimated uncertainty makes the decision risk of our model visible. §.§ Impartial Decision Maker "Zadeh's problem" in FTD: As shown in Fig. <ref> (a), due to the complementary brain activity information contained in different brain regions <cit.>, the conclusions drawn from different perspectives may diverge. The current fusion strategy, DS-Combine, cannot properly fuse two divided opinions from two different perspectives and may even arrive at a counterintuitive, baseless opinion. We found that this problem of DS-Combine is caused by using Dempster's rule to fuse opinions. This drawback of Dempster's rule has been pointed out by Zadeh et al. <cit.>. Therefore, in the multi-view classification of fMRI images, a fusion strategy that can properly handle conflicts of opinion is critical. To fuse opinions with low risk and resolve highly conflicting situations, we propose the Impartial Decision Maker based on the weighted operator theory <cit.>, aiming to achieve a more balanced information fusion. For any two views, the models' opinions on the K classes, O^1 = [b_1^1, b_2^1, ..., b_K^1, u^1] and O^2 = [b_1^2, b_2^2, ..., b_K^2, u^2], are combined in the following manner: O^IDer = O^1 ⊕ O^2, where ⊕ is the combination operator. The specific formulation in our IDer is: b_k^IDer = b_k^1 b_k^2 + b_k^1 u^2 + b_k^2 u^1 + w_{b_k}^CRF, u^IDer = u^1 u^2 + w_u^CRF, where w_{b_k}^CRF and w_u^CRF are the conflict resolution factors (CRF), b_k^IDer is the combined belief of the k-th class, and u^IDer is the combined uncertainty of the two views. The CRF measures the conflict among beliefs and redistributes belief and uncertainty. w_{b_k}^CRF and w_u^CRF are calculated as: w_{b_k}^CRF = 1/2(b_k^1 + b_k^2), w_u^CRF = 1/2(u^1 + u^2). Based on the above formulations, we obtain the unified opinion O^IDer for two views. 
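To make the two-view combination above concrete, here is a minimal NumPy sketch of the IDer rule with made-up belief values for two deliberately conflicting views over four classes; the numbers, and the remark about rescaling, are our own illustration rather than part of the method description (the excerpt does not spell out a normalization step).

```python
import numpy as np

# Two single-view opinions over K = 4 FTD classes, [b_1, ..., b_K] plus uncertainty u.
# The values are made up and chosen so that view 1 and view 2 deliberately conflict.
b1, u1 = np.array([0.70, 0.05, 0.05, 0.05]), 0.15
b2, u2 = np.array([0.05, 0.70, 0.05, 0.05]), 0.15

def ider_combine(b1, u1, b2, u2):
    """Two-view IDer combination as written above: b_k = b_k^1 b_k^2 + b_k^1 u^2 + b_k^2 u^1 + w_bk,
    u = u^1 u^2 + w_u, with the conflict resolution factors being simple averages."""
    w_b = 0.5 * (b1 + b2)
    w_u = 0.5 * (u1 + u2)
    return b1 * b2 + b1 * u2 + b2 * u1 + w_b, u1 * u2 + w_u

b, u = ider_combine(b1, u1, b2, u2)
print("combined beliefs:", np.round(b, 3), "combined uncertainty:", round(u, 3))
# The excerpt does not state a normalization step; if a proper opinion with
# sum(b) + u = 1 is required downstream, b and u would be rescaled here.
```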
Thus, given v views of data, we can first collect evidence from each view and then combine the opinions of the different views by the following rule: O^IDer = O^1 ⊕ O^2 ⊕ ⋯ ⊕ O^v. After obtaining the final opinion O^IDer = [{b_k^IDer}_k=1^K, u^IDer] for all views, the combined evidence e^IDer for all possible classes is calculated according to Eq. <ref>. Finally, the probability of all categories and the uncertainty of the overall decision are inferred from the above parameters. With the method above, IDer combines evidence from different sources elegantly and in a balanced way, as shown in Fig. <ref>. Complementary information from highly conflicting opinions is reasonably taken into account. Compared to the DS-Combine evidential fusion rule <cit.>, our IDer is more suitable for multi-view fMRI image information fusion, since different view sections of the neural network can contain highly complementary information. By inferring the output from integrated opinions in a human-understandable fashion, IDer guarantees impartiality in decision-making and a traceable rationale behind our model's diagnosis, instead of leaving us questioning why a result was produced. §.§ Learning process for FTD For a given sample i, our model outputs the evidence of each class, represented as e_i. Furthermore, the corresponding parameter of the Dirichlet distribution, α_i, is equal to e_i + 1. From this parameter, we obtain the final estimate α_i/S_i of the class probabilities. As the downstream task is classification, we first use a cross-entropy loss to supervise the training process over the multinomial opinion D(p_i|α_i), where p_i denotes the class probabilities. Specifically, we adopt the adjusted cross-entropy loss, which can be written as follows: ℒ_a = ∫[ ∑_k = 1^K - y_iklog (p_ik)]1/B( α _i)∏_k = 1^K p_ik^α _ik - 1dp_i = ∑_k = 1^K y_ik( ψ( S_i) - ψ( α _ik)), where ψ( ·) denotes the digamma function and p_i is the vector of class assignment probabilities on a simplex. To guarantee that incorrect labels yield less evidence, even shrinking to 0, the KL divergence loss is introduced as below: ℒ_KL = log( Γ( ∑_j= 1^K α̃_ij)/Γ (K)∏_j= 1^K Γ( α̃_ij)) + ∑_j = 1^K ( α̃_ij - 1)[ ψ( α̃_ij) - ψ( ∑_j = 1^K α̃_ij)], where Γ( ·) is the gamma function. α̃_i = y_i + ( 1 - y_i) ⊙α _i denotes the adjusted parameters of the Dirichlet distribution, which ensure that the evidence of the ground-truth class is not penalized towards 0. Hence, the overall loss function of our proposed network is defined as follows: ℒ = ℒ_a + λℒ_KL, where λ>0 is an annealing factor that prevents the training process from collapsing at an early stage into a flat, uniform distribution output. § EXPERIMENTS In order to evaluate the proposed method, we compare it with the following methods: single-view softmax classifier (S-S), single-view evidential deep learning (S-E), multi-view softmax classifiers with score fusion (M-S+SF), and multi-view evidential classifiers with DS-Combine (M-E+DS). The maximum-likelihood (softmax) methods are supervised by the cross-entropy loss. Two backbone networks, ResNet-18 and Vision Transformer (ViT), are chosen. Data & Implementation Details We validate our method on the FTD test set. All the data are pre-processed by SPM12 [https://www.fil.ion.ucl.ac.uk/spm/] and DPABI <cit.> and normalized to MNI space. 164 cases of patients with ground truth are stratified and divided into train, validation, and test sets. The test set size is 20% of the total dataset size.
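As a concrete reference for the per-view objective defined in the learning process above (the adjusted cross-entropy plus the annealed KL term), a minimal PyTorch sketch is given below; it is our own illustration rather than the authors' released code, and the function and variable names are ours.

```python
import torch
import torch.nn.functional as F

def edl_loss(evidence, target, num_classes, lam):
    """Adjusted cross-entropy plus KL regulariser for one view.

    evidence: non-negative tensor of shape (N, K); target: integer labels of shape (N,);
    lam: the annealing factor lambda.
    """
    alpha = evidence + 1.0                                  # Dirichlet parameters
    strength = alpha.sum(dim=1, keepdim=True)               # Dirichlet strength S
    y = F.one_hot(target, num_classes).float()

    # L_a = sum_k y_k (digamma(S) - digamma(alpha_k))
    loss_a = (y * (torch.digamma(strength) - torch.digamma(alpha))).sum(dim=1)

    # KL( D(p | alpha_tilde) || D(p | 1) ) with the ground-truth evidence masked out
    alpha_t = y + (1.0 - y) * alpha
    strength_t = alpha_t.sum(dim=1, keepdim=True)
    log_k = torch.lgamma(torch.tensor(float(num_classes)))
    kl = (torch.lgamma(strength_t.squeeze(1)) - log_k - torch.lgamma(alpha_t).sum(dim=1)
          + ((alpha_t - 1.0) * (torch.digamma(alpha_t) - torch.digamma(strength_t))).sum(dim=1))

    return (loss_a + lam * kl).mean()
```

In the multi-view setting one would typically apply such a loss to the evidence of every view as well as to the fused evidence, although the excerpt above does not spell this detail out.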
The 4D fMRI data in NIfTI format are first split along the time axis into a stack of 3D volumes. We then extract horizontal, lateral, and frontal sections from the center of each volume. Each input slice is resized to 3×224×224. The data contain 4 classes, labeled as bvFTD (label 0), svPPA (label 1), healthy control (label 2), and nfvPPA (label 3). Our proposed network is implemented in PyTorch and trained on an NVIDIA GeForce RTX 3090 GPU. We adopt the Adam optimizer to optimize all parameters with an initial learning rate of 0.001. The maximum number of epochs is set to 10. Random changes of hue, saturation, and value, together with Gaussian blur, motion blur, and median blur, are used for data augmentation. All the following experiments adopt a five-fold cross-validation strategy to avoid performance gains caused by accidental factors. Uncertainty Estimation. To further illustrate the role of uncertainty estimation in our model, we visualize the classification results of several out-of-distribution samples from the public ADHD fMRI image dataset ADHD-200[<http://fcon_1000.projects.nitrc.org/indi/adhd200/>]. As reported in Fig. <ref> (a), the baseline model tends to predict very high confidence for the most likely class even if it is completely erroneous. Benefiting from the ability to say "I am not sure", our MID-Net detects the decision risk and thus outputs high uncertainty, as shown in Fig. <ref> (b). As shown in Fig. <ref> (c) and (d), higher uncertainty is generated for the low-quality view 2, and the uncertainty of the combined opinion is accordingly higher. These results suggest that our model's predictions remain credible even when encountering low-quality data. Comparison with baseline methods. As shown in Tab. <ref>, our method achieves satisfactory results. Single-view methods fail to be competitive because they lack sufficient brain activity information. With the ViT backbone, owing to the superiority of IDer, our model exceeds the accuracy of the other models by more than 3%. Comparison of decision-making capabilities on low-quality fMRI images. In fMRI pre-processing, manual calibration is often difficult and time-consuming, yet without it the image quality may be poor. We therefore test the classification ability of the models when the de-noising effect is poor or a view is polluted. As shown in Tab. <ref> and Tab. <ref>, with both backbones the performance of our method decreases only slightly, by less than half as much as that of the other methods. IDer redistributes beliefs and uncertainty and combines opinions well, eliminating most of the distractions. § CONCLUSION In this paper, we present MID-Net, a multi-view impartial decision network for FTD diagnosis with uncertainty estimation. Our approach offers a means of estimating the uncertainty of each prediction, which is crucial for providing confidence measurements in FTD diagnosis. To accomplish this, we propose the Impartial Decision Maker (IDer), which combines opinions impartially and makes inferences without incurring additional computational cost or necessitating changes to the backbone network. As a result, our model can prevent overconfident predictions and accurately estimate the risks associated with its decisions. Our extensive experiments demonstrate that our approach provides reliable and robust uncertainty estimates, which quantify the decision-making risk of the model. Furthermore, we show that our method can also identify poor-quality FTD pre-processing.
Moreover, in the diagnosis of FTD, IDer performs fair unification and reasoning over the brain activity evidence collected from different perspectives. In summary, our MID-Net competes effectively with previous approaches in terms of classification robustness and the reliability of uncertainty estimation. It provides a valuable contribution to the field of FTD diagnosis by offering a reliable and impartial means of decision-making that can accommodate evidence from multiple perspectives.
http://arxiv.org/abs/2307.04235v1
20230709174304
Optimizing an LTS-Simulation Algorithm (Technical Report)
[ "Lukáš Holík", "Jiří Šimáček" ]
cs.FL
[ "cs.FL" ]
^1 FIT BUT, Božetěchova 2, 61266 Brno, Czech Republic ^2 VERIMAG, UJF, 2. av. de Vignate, 38610 Gières, France email: [email protected] Optimizing an LTS-Simulation Algorithm (This is a version of Technical Report No. FIT-TR-2009-03 of the Faculty of Information Technology, Brno University of Technology, accompanying the work published originally as <cit.>.) Lukáš Holík^1 Jiří Šimáček^1,2 August 12, 2023 ========================================================================================================================================================================================================================== When comparing the fastest algorithm for computing the largest simulation preorder over Kripke structures with the one for labeled transition systems (LTS), there is a noticeable time and space complexity blow-up proportional to the size of the alphabet of an LTS. In this paper, we present optimizations that suppress this increase of complexity and may turn a large alphabet of an LTS into an advantage. Our experimental results show significant speed-ups and memory savings. Moreover, the optimized algorithm allows one to improve the asymptotic complexity of procedures for computing simulations over tree automata using recently proposed algorithms based on computing simulation over certain special LTSs derived from a tree automaton. § INTRODUCTION A practical limitation of automated methods dealing with LTSs—such as LTL model checking, regular model checking, etc.—is often the size of generated LTSs. One of the well-established approaches to overcome this problem is the reduction of an LTS using a suitable equivalence relation according to which the states of the LTS are collapsed. A good candidate for such a relation is simulation equivalence. It strongly preserves logics like ACTL^*, ECTL^*, and LTL <cit.>, and with respect to its reduction power and computation cost, it offers a desirable compromise among the other common candidates, such as bisimulation equivalence <cit.> and language equivalence. The currently fastest LTS-simulation algorithm (denoted as LRT—labeled RT) has been published in <cit.>. It is a straightforward modification of the fastest algorithm (denoted as RT, standing for Ranzato-Tapparo) for computing simulation over Kripke structures <cit.>, which improves the algorithm from <cit.>. The time complexity of RT amounts to O(|P_∼||δ|) and its space complexity to O(|P_∼||S|). In the case of LRT, we obtain time complexity O(|P_∼||δ| + |Σ||P_∼||S|) and space complexity O(|Σ||P_∼||S|). Here, S is the set of states, δ is the transition relation, Σ is the alphabet, and P_∼ is the partition of S according to the simulation equivalence. The space complexity blow-up of LRT is caused by indexing the data structures of RT by the symbols of the alphabet. In this paper, we propose an optimized version of LRT (denoted OLRT) that lowers the above-described blow-up. We exploit the fact that not all states of an LTS have incoming and outgoing transitions labeled by all symbols of the alphabet, which allows us to reduce the memory footprint of the data structures used during the computation. Our experiments show that the optimizations we propose lead to significant savings of space as well as of time in many practical cases. Moreover, we have achieved a promising reduction of the asymptotic complexity of algorithms for computing tree-automata simulations from <cit.> using OLRT. § PRELIMINARIES Given a binary relation ρ over a set X, we use ρ(x) to denote the set {y | (x,y)∈ρ}.
Then, for a set Y⊆ X, ρ(Y) = ⋃{ρ(y)| y∈ Y}. A partition-relation pair over X is a pair P,Rel where P ⊆ 2^X is a partition of X (we call elements of P blocks) and Rel ⊆ P × P. A partition-relation pair P,Rel induces the relation ρ = ⋃_(B,C)∈ Rel B × C. We say that P, is the coarsest partition-relation pair inducing ρ if any two x,y∈ X are in the same block of P if and only if ρ x = ρ y and ρ x = ρ y. Note that in the case when ρ is a preorder and P, is coarsest, then P is the set of equivalence classes of ρ∩ρ^-1 and is a partial order. A labeled transition system (LTS) is a tuple , where S is a finite set of states, is a finite set of labels, and for each a∈, _a⊆ S× S is an a-labeled transition relation. We use δ to denote ⋃_a∈δ_a. A simulation over T is a binary relation on S such that if (u,v)∈, then for all a∈ and u'∈_a(u), there exists v'∈_a(v) such that (u',v')∈. It can be shown that for a given LTS T and an initial preorder ⊆ S × S, there is a unique maximal simulation on T that is a subset of , and that is a preorder (see <cit.>). § THE ORIGINAL LRT ALGORITHM In this section, we describe the original version of the algorithm presented in <cit.>, which we denote as LRT (see Algortihm <ref>). The algorithm gradually refines a partition-rela­tion pair P,, which is initialized as the coarsest partition-relation pair inducing an initial preorder . After its termination, P, is the coarsest partition-relation pair inducing . The basic invariant of the algorithm is that the relation induced by P, is always a superset of . The while-loop refines the partition P and then prunes the relation in each iteration of the while-loop. The role of the sets can be explained as follows: During the initialization, every _a(B) is filled by states v such that _a(v)∩⋃(B) = ∅ (there is no a-transition leading from v “above” B wrt. ). During the computation phase, v is added into _a(B) after _a(v)∩⋃(B) becomes empty (because of pruning on line 17). Emptiness of _a(v)∩(B) is tested on line 20 using counters _a(v,B), which record the cardinality of _a(v)∩(B). From the definition of simulation, and because the relation induced by P, is always a superset of , _a(v)∩⋃(B) = ∅ implies that for all u∈_a(B), (u,v)∉ (v cannot simulate any u∈_a(B)). To reflect this, the relation is pruned each time _a(B) is processed. The code on lines 8–13 prepares the partition-relation pair and all the data structures. First, (P,_a(B)) divides every block B' into B'∩_a(B) (which cannot simulate states from _a(B) as they have empty intersection with _a((B))), and B'∖_a(B). More specifically, for a set ⊆ S, (P,) returns a finer partition P' = {B∖| B∈ P}∪{B∩| B∈ P}. After refining P by the operation, the newly created blocks of P inherit the data structures (counters and sets) from their “parents” (for a block B∈ P, its parent is the block B_∈ P_ such that B⊆ B_). is then updated on line 17 by removing the pairs (C,D) such that C∩_a(B)≠∅ and D⊆_a(B). The change of causes that for some states u∈ S and symbols b∈, _a(u)∩⋃(C) becomes empty. To propagate the change of the relation along the transition relation, u will be moved into _b(C) on line 20, which will cause new changes of the relation in the following iterations of the while-loop. If there is no nonempty set, then P, is the coarsest partition-relation pair inducing and the algorithm terminates. Correctness of LRT is stated by Theorem <ref>. 
With an LTS T=(S,,) and the coarsest partition-relation pair P_,_ inducing a preorder I⊆ S× S on the input, LRT terminates with the coarsest partition-relation pair P, inducing . § OPTIMIZATIONS OF LRT The optimization we are now going to propose reduces the number of counters and the number and the size of sets. The changes required by OLRT are indicated in Algorithm <ref> on the right hand sides of the concerned lines. We will need the following notation. For a state v∈ S, v = {a∈|_a(v)≠∅} is the set of input symbols and v = {a∈|_a(v)≠∅} is the set of output symbols of v. The output preorder is the relation = ⋂_a∈_a(S)×_a(S) (this is, (u,v)∈ if and only if u⊆ v). To make our optimization possible, we have to initialize P, by the finer partition-relation pair P_∩,_∩ (instead of P_,_), which is the coarsest partition-relation pair inducing the relation ∩. As both and are preorders, ∩ is a preorder too. As ⊆ and ⊆ (any simulation on T is a subset of ), equals the maximal simulation included in ∩. Thus, this step itself does not influence the output of the algorithm. Assuming that P, is initialized to P_∩,_∩, we can observe that for any B∈ P and a∈ chosen on line 5, the following two claims hold: If a∉ B, then skipping this iteration of the while-loop does not affect the output of the algorithm. In an iteration of the while-loop processing _a(B) with a∉ B, as there is no C∈ P with δ_a(C)∩ B ≠∅, the for-loop on line 16 stops immediately. No pair (C,D) will be removed from on line 17, no counter will be decremented, and no state will be added into a set. The only thing that can happen is that (P,) refines P. However, in this case, this refinement of P would be done anyway in other iterations of the while-loop when processing sets _b(C) with b∈ C. To see this, note that correctness of the algorithm does not depend on the order in which nonempty sets are processed. Therefore, we can postpone processing all the nonempty _a(B) sets with a∉ B to the end of the computation. Recall that processing no of these sets can cause that an empty set becomes nonempty. Thus, the algorithm terminates after processing the last of the postponed _a(B) sets. If processing some of these _a(B) with a∉ B refines P, P will contain blocks C,D such that both (C,D) and (D,C) are in (recall that when processing _a(B), no pair of blocks can be removed from on line 17). This means that the final P, will not be coarsest, which is a contradiction with Theorem <ref>. Thus, processing the postponed _a(B) sets can influence nor neither P, and therefore they do not have to be processed at all. It does not matter whether we assign _a(B) or _a(B)∖ (S∖_a(S)) to on line 6. Observe that v with a∉ v (i.e., v∈ S∖_a(S)) cannot be added into _a(B) on line 20, as this would mean that v has an a-transition leading to D. Therefore, v can get into _a(B) only during initialization on line 4 together with all states from S∖_a(S). After _a(B) is processed (and emptied) for the first time, no state from S∖_a(S) can appear there again. Thus, _a(B) contains states from S∖_a(S) only when it is processed for the first time and then it contains all of them. It can be shown that for any partition Q of a set X and any Y⊆ X, if (Q,Y) = Q, then also for any Z⊆ X with Y⊆ Z, (Q,Z) = (Q,Z∖ Y). As P refines P_∩, (P,S ∖_a(S)) = P. Therefore, as S∖_a(S) ⊆_a(B), (P,_a(B)) = (P,_a(B)∖(S∖_a(S))). 
We have shown that removing S∖_a(S) from does not influence the result of the operation in this iteration of the while-loop (note that this implies that all blocks from the new partition are included in or have empty intersection with S∖_a(S)). It remains to show that it also does not influence updating on line 17. Removing S∖_a(S) from could only cause that the blocks D such that D⊆ S∖_a(S) that were chosen on line 15 with the original value of will not be chosen with the restricted . Thus, some of the pairs (C,D) removed from with the original version of could stay in with the restricted version of . However, such a pair (C,D) cannot exist because with the original value of , if (C,D) is removed from , then a∈ C (as δ(C)∩ B≠∅) and therefore also a∈ D (as was initialized to _∩ on line 1 and (C,D)∈). Thus, D∩ (S∖_a(S))=∅, which means that (C,D) is removed from even with the restricted . Therefore, it does not matter whether S∖_a(S) is a subset of or it has an empty intersection with . As justified above, we can optimize LRT as follows. Sets _a(B) are computed only if a ∈ B and in that case we only add states q ∈_a(S) to _a(B). As a result, we can reduce the number of required counters by maintaining _a(v,B) if and only if a∈ B and a∈ v. § IMPLEMENTATION AND COMPLEXITY OF OLRT We first briefly describe the essential data structures (there are some additional data structures required by our optimizations) and then we sketch the complexity analysis. For the full details, see the technical report <cit.>. Data Structures. The input LTS is represented as a list of records about its states. The record about each state v ∈ S contains a list of nonempty _a(v) sets[We use a list rather than an array having an entry for each a ∈ in order to avoid a need to iterate over alphabet symbols for which there is no transition.], each of them encoded as a list of its members. The partition P is encoded as a doubly-linked list (DLL) of blocks. Each block is represented as a DLL of (pointers to) states of the block. Each block B contains for each a∈ a list of (pointers on) states from _a(B). Each time when any set _a(B) becomes nonempty, block B is moved to the beginning of the list of blocks. Choosing the block B on line 5 then means just scanning the head of the list of blocks. The relation is encoded as a resizable boolean matrix. Each block B∈ P and each state v∈ S contains an -indexed array containing a record B.a and v.a, respectively. The record B.a stores the information whether a∈ B (we need the test on a∈ B to take a constant time), If a∈ B, then B.a also contains a reference to the set _a(B), represented as a list of states (with a constant time addition), and a reference to an array of counters B.a. containing the counter _a(v,B) for each v∈_a(S). Note that for two different symbols a,b∈ and some v∈ S, the counter _a(v,B) has different index in the array B.a. than the counter _b(v,B) in B.b. (as the sets _a(S) and _b(S) are different). Therefore, for each v∈ S and a∈, v.a contains an index v_a under which for each B∈ P, the counter _a(v,B) can be found in the array B.a.. Using the -indexed arrays attached to symbols and blocks, every counter can be found/updated in a constant time. For every v∈ S,a∈, v.a also stores a pointer to the list containing _a(v) or null if _a(v) is empty. This allows the constant time testing whether a∈ v and the constant time searching for the _a(v) list. Complexity analysis (Sketch). 
We first point out how our optimizations influence complexity of the most costly part of the code which is the main while loop. The analysis of lines 14–16 of LRT is based on the observation that for any two B',D'∈ P_ and any a∈, it can happen at most once that a and some B with B'⊆ B are chosen on line 14 and at the same time D'⊆_a(B). In one single iteration of the while-loop, blocks C are listed by traversing all (v),v∈ B (the Ds can be enumerated during the operation). Within the whole computation, for any B'∈ P_, transitions leading to B' are traversed on line 14 at most P_ times, so the complexity of lines 14–16 of LRT is Ø(∑_a∈∑_D∈ P_∼∑_v∈ S|_a(v)|) = Ø(|P_||δ|). In the case of OLRT, the number and the content of remove sets is restricted in such a way that for a nonempty set _a(B), it holds that a∈ B and _a(B)⊆_a(S). Hence, for a fixed a, a-transition leading to a block B'∈ P_ can be traversed only |{D'∈ P_| a∈D'}| times and the complexity of lines 14–16 decreases to Ø(∑_D∈ P_∑_a∈ D|δ_a|). The analysis of lines 17–20 of LRT is based on the fact that once (C,D) appears on line 17, no (C',D') with C'⊆ C, D'⊆ D can appear there again. For a fixed (C,D), the time spent on lines 17–20 is in Ø(∑_v∈ B|(v)|) and only those blocks C,D can meet on line 17 such that C× D⊆. Thus, the overall time spent by LRT on lines 17–20 is in Ø(∑_B∈ P_∑_v∈(B)|(v)|). In OLRT, blocks C,D can meet on line 17 only if C× D⊆∩, and the complexity of lines 17–20 in OLRT decreases to Ø(∑_B∈ P_∑_v∈(∩)(B)|(v)|). Additionally, OLRT refines P_,_ to P_∩,_∩ on line 1. This can be done by successive splitting according to the sets _a(S),a∈ and after each split, breaking the relation between blocks included in _a(S) and the ones outside. This procedure takes time Ø(|||P_∩|^2). Apart from some other smaller differences, the implementation and the complexity analysis of OLRT are analogous to the implementation and the analysis of LRT <cit.>. The overall time complexity of OLRT is Ø( |||P_∩|^2 + |||S|+ |P_|^2 + ∑_B∈ P_ (∑_a∈ B (|_a(S)| + |δ_a|)+ ∑_v∈(∩)(B)|(v)|)). The space complexity of OLRT is determined by the number of counters, the contents of the sets, the size of the matrix encoding of , and the space needed for storing the B.a and v.a records (for every block B, state v and symbol a). Overall, it gives Ø(|P_|^2 + |Σ||S| + ∑_B∈ P_∑_a∈ B|δ_a^-1(S)|). Observe that the improvement of both time and space complexity of LRT is most significant for systems with large alphabets and a high diversity of sets of input and output symbols of states. Certain regular diversity of sets of input and output symbols is an inherent property of LTSs that arise when we compute simulations over tree automata. We address the impact of employing OLRT within the procedures for computing tree automata simulation in the next section. § TREE AUTOMATA SIMULATIONS In <cit.>, authors propose methods for computing tree automata simulations via translating problems of computing simulations over tree-automata to problems of computing simulations over certain LTSs. In this section, we show how replacing LRT by OLRT within these translation-based procedures decreases the overall complexity of computing tree-automata simulations. A (finite, bottom-up) tree automaton (TA) is a quadruple A = (Q, Σ,Δ, F) where Q is a finite set of states, F ⊆ Q is a set of final states, Σ is a ranked alphabet with a ranking function :Σ→, and Δ⊆ Q^*×Σ× Q is a set of transition rules such that if (q_1… q_n,f,q)∈Δ, then (f) = n. 
Finally, we denote by the smallest n ∈ such that n ≥ m for each m ∈ such that there is some (q_1… q_m,f,q) ∈Δ. We omit the definition of the semantics of TA as we will not need it, and we only refer to <cit.>. For the rest of this section, we fix a TA A = (Q,Σ,Δ,F). A downward simulation D is a binary relation on Q such that if (q,r)∈ D, then for all q n f q∈Δ, there exists r n f r∈Δ such that (q_i,r_i)∈ D for each i:1≤ i≤ n. Given a downward simulation which is a preorder called an inducing simulation, an upward simulation induced by is a binary relation on Q such that if (q,r)∈, then (i) for all q n f q'∈Δ with q_i=q,1 ≤ i ≤ n, there exists r n f r'∈Δ with r_i=r, (q',r')∈, and (q_j,r_j) ∈ for each j:1≤ j≠ i≤ n; (ii) q∈ F r∈ F. From now on, let denote the maximal downward simulation on A and the maximal upward simulation on A induced by . To define the translations from downward and upward simulation problems, we need the following notions. Given a transition t = (q_1… q_n,f,q)∈Δ, q_1… q_n is its left-hand side and t(i)∈ (Q∪{})^*×Σ× Q is an environment—the tuple which arises from t by replacing state q_i, 1 ≤ i ≤ n, at the i^th position of the left-hand side of t by the so called hole ∉Q. We use of to denote the set of all left-hand sides of A and to denote the set of all environments of A. We translate the downward simulation problem on A to the simulation problem on the LTS A^∙ = (Q^∙,^∙,{δ_a^∙| a∈^∙}) where Q^∙ = {q^∙| q∈ Q}∪{l^∙| l∈}, Σ^∙ = Σ∪{1,…, }}, and for each (q_1… q_n,f,q)∈Δ, (q^∙, q_1… q_n^∙)∈δ^∙_f and (q_1… q_n^∙, q_i^∙)∈δ^∙_i for each i:1≤ i ≤ n. The initial relation is simply ^∙ = Q^∙× Q^∙. The upward simulation problem is then translated into a simulation problem on LTS A^⊙ = (Q^⊙,Σ^⊙,{δ^⊙_a| a∈Σ^⊙}), where Q^⊙ = {q^⊙| q∈ Q}∪{e^⊙| e∈}, Σ^⊙ = Σ^∙, and for each t=(q_1… q_n,f,q)∈Δ, for each 1≤ i≤ n, (q_i^⊙,t(i)^⊙)∈δ^⊙_i and (t(i)^⊙,q^⊙)∈δ^⊙_a. The initial relation I^⊙⊆ Q^⊙× Q^⊙ contains all the pairs (q^⊙,r^⊙) such that q,r∈ Q and r∈ F q∈ F, and ((q_1… q_n,f,q)(i)^⊙,(r_1 … r_n,f,r)(i)^⊙) such that (q_j,r_j)∈ for all j:1≤ i≠ j ≤ n. Let ∼^∙ be the maximal simulation on A^∙ included in ^∙ and let ∼^⊙ be the maximal simulation on A^⊙ included in ^⊙. The following theorem shows correctness of the translations. For all q,r∈ Q, we have (q^∙,r^∙) ∈∼^∙ if and only if (q,r)∈ and (q^⊙,r^⊙) ∈∼^⊙ if and only if (q,r)∈. The states of the LTSs (A^∙ as well as A^⊙) can be classified into several classes according to the sets of input/output symbols. Particularly, Q^∙ can be classified into the classes {q^∙| q∈ Q} and for each n:1≤ n≤, {q_1… q_n^∙| q_1… q_n∈}, and Q^⊙ can be classified into {q^⊙| q∈ Q} and for each a∈Σ and i:1≤ i≤(a), {t(i)^⊙| t = (q_1… q_n,a,q)∈Δ}. This turns to a significant advantage when computing simulations on A^∙ or on A^⊙ using OLRT instead of LRT. Moreover, we now propose another small optimization, which is a specialized procedure for computing P_∩_∩ for the both of A^⊙, A^∙. It is based on the simple observation that we need only a constant time (not a time proportional to the size of the alphabet) to determine whether two left-hand sides or two environments are related by the particular (more specifically, (e_1^⊙,e_2^⊙)∈ if and only if the inner symbols of e_1 and e_2 are the same, and (q_1… q_n^∙,r_1… r_m^∙)∈ if and only if n≤ m). Complexity of the Optimized Algorithm. We only point out the main differences between application of LRT <cit.> and OLRT on the LTSs that arise from the translations described above. 
For implementation details and full complexity analysis of the OLRT versions, see the technical report <cit.>. To be able to express the complexity of running OLRT on A^∙ and A^⊙, we extend to the set such that ( q n, r n)∈ if and only if (q_i,r_i)∈ for each i:1≤ i≤ n, and we extend to the set such that ((q_1 … q_n,f,q)(i),(r_1 … r_n,f,r)(i)) ∈ m=n ∧ i=j ∧ (q,r)∈∧ (∀ k ∈{ 1, ..., n}. k≠ i ⟹(q_k,r_k)∈). For a preorder ρ over a set X, we use Xρ to denote the partition of X according to the equivalence ρ∩ρ^-1. The procedures for computing ∼^∙ and ∼^⊙ consist of (i) translating A to the particular LTS (A^∙ or A^⊙) and computing the partition-relation pair inducing the initial preorder (^∙ or ^⊙), and (ii) running a simulation algorithm (LRT or OLRT) on it. Here, we analyze the impact of replacing LRT by OLRT on the complexity of step (ii), which is the step with dominating complexity (as shown in <cit.> and also by our experiments; step (ii) is much more computationally demanding than step (i)). As shown in the technical report <cit.>, OLRT takes on A^∙ and ^∙ space Ø(Space_D) where Space_D = ( + ||)|∪ Q| + |∪ Q/|^2 + |||/||Q| + |Q/||| and time Ø(Space_D + |||Q/|^2 + |/||Δ| ). On A^⊙ and ^⊙, OLRT runs in time O(Space_U) where Space_U = ( + ||)|| + |/U|^2 + |/U||Q| + |Q/U||| and time Ø(Space_U + |||Q/U|^2 + |/U||δ|). We compare the above results with <cit.>, where LRT is used. LRT on A^∙ and ^∙ takes Ø(Space_D^old) space where Space_D^old = (|Σ|+ )|Q∪||Q∪|, and Ø(Space_D^old + |Δ||Q∪|) time. In the case of A^⊙ and I^⊙, we obtain space complexity Ø(Space_U^old) where Space_U^old = |Σ||||| and time complexity Ø(Space_U^old + |Δ|||). The biggest difference is in the space complexity (decreasing the factors Space_D^old and Space_U^old). However, the time complexity is better too, and our experiments show a significant improvement in space as well as in time. § EXPERIMENTS We implemented the original and the improved version of the algorithm in a uniform way in OCaml and experimentally compared their performance. The simulation algorithms were benchmarked using LTSs obtained from the runs of the abstract regular model checking (ARMC) (see <cit.>) on several classic examples—producer-consumer (pc), readers-writers (rw), and list reversal (lr)—and using a set of tree automata obtained from the run of the abstract regular tree model checking (ARTMC) (see <cit.>) on several operations, such as list reversal, red-black tree balancing, etc. We also used several randomly generated LTSs and tree automata. We performed the experiments on AMD Opteron 8389 2.90 GHz PC with 128 GiB of memory (however we set the memory limit to approximately 20 GiB for each process). The system was running Linux and OCaml 3.10.2. The performance of the algorithms is compared in Table <ref> (general LTSs), Table <ref> (LTSs generated while computing the downward simulation), and Table <ref> (LTSs generated while computing the upward simulation), which contain the running times ([s]) and the amount of memory ([MiB]) required to finish the computation. As seen from the results of our experiments, our optimized implementation performs substantially better than the original. On average, it improves the running time and space requirements by about one order of magnitude. As expected, we can see the biggest improvements especially in the cases, where we tested the impact of the growing size of the alphabet. 
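For readers who want an executable reference point when reproducing such comparisons on small inputs, the sketch below computes the maximal simulation contained in an initial preorder by a deliberately naive fixpoint iteration. It is not the partition-relation-based LRT/OLRT algorithm studied in this paper (its complexity is far worse), and the encoding of the LTS as a dictionary is our own choice; it is only meant for cross-checking outputs on small LTSs.

```python
from itertools import product

def maximal_simulation(states, alphabet, delta, init):
    """Greatest relation R contained in `init` such that (u, v) in R implies that every
    a-step u -> u' can be matched by some a-step v -> v' with (u', v') in R.

    delta maps (symbol, state) to the set of successor states; init is a set of pairs."""
    rel = set(init)
    changed = True
    while changed:
        changed = False
        for (u, v) in list(rel):
            for a in alphabet:
                u_succ = delta.get((a, u), set())
                v_succ = delta.get((a, v), set())
                # drop (u, v) if some u-successor has no matching v-successor in rel
                if any(all((up, vp) not in rel for vp in v_succ) for up in u_succ):
                    rel.discard((u, v))
                    changed = True
                    break
    return rel

# Tiny example: q1 (a b-loop) is simulated by q2 (a b-loop that can also do c), not vice versa.
S = {"q0", "q1", "q2"}
Sigma = {"a", "b", "c"}
delta = {("a", "q0"): {"q1", "q2"}, ("b", "q1"): {"q1"},
         ("b", "q2"): {"q2"}, ("c", "q2"): {"q2"}}
sim = maximal_simulation(S, Sigma, delta, set(product(S, S)))
print(("q1", "q2") in sim, ("q2", "q1") in sim)  # True False
```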
§ CONCLUSION We proposed an optimized algorithm for computing simulations over LTSs, which improves the asymptotic complexity, in both space and time, of the best algorithm (LRT) known to date (see <cit.>) and which also performs significantly better in practice. We also show how employing OLRT instead of LRT reduces the complexity of the procedures for computing tree-automata simulations from <cit.>. As future work, we want to develop further optimizations, which would allow us to handle even bigger LTSs and tree automata. One of the possibilities is to replace the existing data structures with a symbolic representation, for example, by using BDDs. Acknowledgements. This work was supported in part by the Czech Science Foundation (projects P103/10/0306, 102/09/H042, 201/09/P531), the BUT FIT grant FIT-10-1, the Czech COST project OC10009 associated with the ESF COST action IC0901, the Czech Ministry of Education project MSM 0021630528, and the ESF project Games for Design and Verification.
http://arxiv.org/abs/2307.07439v1
20230714160403
Atlas-Based Interpretable Age Prediction
[ "Sophie Starck", "Yadunandan Vivekanand Kini", "Jessica Johanna Maria Ritter", "Rickmer Braren", "Daniel Rueckert", "Tamara Mueller" ]
eess.IV
[ "eess.IV", "cs.CV", "cs.LG" ]
Atlas-Based Interpretable Age Prediction Starck and Kini et al. Chair for AI in Medicine and Healthcare, Faculty of Informatics, Technical University of Munich, Germany Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Germany Department of Computing, Imperial College London, United Kingdom Atlas-Based Interpretable Age Prediction In Whole-Body MR Images Sophie StarckThese authors contributed equally. 1 Yadunandan Vivekanand Kini1 1 Jessica Johanna Maria Ritter 2Rickmer Braren 2Daniel Rückert 1, 2, 3 Tamara Müller1, 2 August 12, 2023 ========================================================================================================================================================================== Age prediction is an important part of medical assessments and research. It can aid in detecting diseases as well as abnormal ageing by highlighting the discrepancy between chronological and biological age. To gain a comprehensive understanding of age-related changes observed in various body parts, we investigate them on a larger scale by using whole-body images. We utilise the Grad-CAM interpretability method to determine the body areas most predictive of a person's age. We expand our analysis beyond individual subjects by employing registration techniques to generate population-wide interpretability maps. Furthermore, we set state-of-the-art whole-body age prediction with a model that achieves a mean absolute error of 2.76 years. Our findings reveal three primary areas of interest: the spine, the autochthonous back muscles, and the cardiac region, which exhibits the highest importance. § INTRODUCTION Deep learning (DL) methods have significantly advanced medical research by delivering insights into normal physiology and disease processes. It can provide imaging-derived biomarkers for non-invasive predictions and support physicians in their work <cit.>. Given the high sensitivity of medical data and the potentially life-altering impact that can result from using DL models for medical diagnoses or interventions, incorporating interpretability methods is of great importance. Gaining a better understanding of these models, often referred to as black boxes that hide the details of the decision-making process, is highly relevant. It fosters trust, enhances usability, and could potentially even shed light on previously unknown correlations <cit.>. The investigation of ageing, age-related diseases, and the identification of specific areas in the body affected by age have been prominent research areas in medicine. Age shows one of the strongest correlations with the development of diseases and well-being in general <cit.>. Therefore, acquiring more knowledge about the ageing process can give insights into risk factors or abnormal ageing and serve as an early detection mechanism for several diseases <cit.>. The application of an accurate age prediction method is two-fold: (a) It can aid in establishing an understanding of the mechanisms of ageing in the human body and (b) in finding discrepancies between an individual's chronological and biological age. Chronological age refers to the time elapsed since birth, whereas biological age aims to describe the physiological age, e.g. how the body has aged. There might be deviations between the two, which is often referred to as accelerated (biological age > chronological age) or decelerated (biological age < chronological age) ageing. 
This has been investigated extensively for brain age estimation <cit.> since brain structures are known to change over time <cit.> and be highly correlated with neurodegenerative diseases such as Alzheimer's or Parkinson's <cit.>. Brain MR images are promising modalities to infer the biological brain age of a subject, often with the help of deep learning techniques <cit.>. Age estimation has also been performed on dental data <cit.>, skeleton bones in the body, such as chest radiography <cit.>, knee skeletons <cit.>, or hand skeletons <cit.>. Despite significant changes in several abdominal organs and tissues, such as the liver <cit.>, bone densities <cit.>, and the pancreas <cit.>, whole-body age prediction has so far not been explored in great detail. There are some works showing significant advancements in the field. Le Goallec et al. <cit.>, for example, focus on deep learning approaches for abdominal age prediction based on liver and pancreas MR images. In this work, we move away from local investigations of specific body areas and investigate age prediction on the whole body (excluding the brain) to identify which areas show the highest information value about a person's age. Towards this goal, we train a convolutional neural network (CNN) on 3D MR images that cover the full body from neck to knee. Subsequently, we apply Grad-CAM <cit.>, a well-established post-hoc interpretability method for CNNs, to identify areas in the body that are most important to the algorithm's decision-making. We subsequently register the Grad-CAM results onto a medical atlas to acquire population-wide interpretability maps. Figure <ref> shows an overview of the pipeline of our work. We identify three main regions of interest in the extracted importance maps: the spine, the autochthonous back muscles, and the heart with its adjacent great vessels like the aorta. Figures <ref> and <ref> show two atlas-based importance maps. We can see that the region along the spine and the area surrounding the heart show the most prominent Grad-CAM activation. § BACKGROUND AND RELATED WORK In this section, we summarise relevant background and related works that address interpretability in medical imaging and population-wide studies with medical atlases. §.§ Interpretability Interpretability methods are applied to DL algorithms to increase trust and reliability of DL models and better understand the decision-making process of neural networks <cit.>. This is especially important in the medical domain, where critical patient diagnoses might depend on DL predictions, and both physicians and patients need to trust the outcome of DL models to ensure their usability. In general, interpretability methods can be categorised into different groups. We can, for example, distinguish active and passive methods, where the former applies interpretability methods during training, while the latter applies them post-hoc <cit.>. Gradient-based and backpropagation methods utilise the model's gradients, while occlusion-based methods occlude part of the input to detect those regions’ impact on model performance <cit.>. One of the most commonly used gradient-based methods is gradient-weighted class activation mapping (Grad-CAM) <cit.>. Grad-CAM utilises the gradient information that flows into a convolutional layer of a CNN and applies global average pooling on these gradients to extract importance values for each input parameter (i.e. voxel). 
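As an illustration of the mechanism just described, applied to a 3D CNN that regresses a single value such as age, a minimal PyTorch sketch follows. The hook-based implementation and all names are our own and are not taken from the paper's code base.

```python
import torch
import torch.nn.functional as F

def grad_cam_3d(model, volume, target_layer):
    """Grad-CAM for a 3D CNN regressor.

    volume: tensor of shape (1, C, D, H, W); target_layer: a convolutional layer of `model`.
    Returns a relevance map of shape (D, H, W), upsampled to the input resolution.
    """
    activations, gradients = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    model.eval()
    prediction = model(volume)          # scalar output, e.g. predicted age
    model.zero_grad()
    prediction.sum().backward()         # for regression, back-propagate the prediction itself
    h1.remove(); h2.remove()

    acts, grads = activations[0], gradients[0]            # both of shape (1, K, d, h, w)
    weights = grads.mean(dim=(2, 3, 4), keepdim=True)     # global average pooling of gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))     # weighted sum over channels
    cam = F.interpolate(cam, size=volume.shape[2:], mode="trilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8))[0, 0].detach()      # normalised to [0, 1]
```

For the 3D ResNet-18 used later in this work, `target_layer` would be one of the residual stages; the paper applies Grad-CAM to the third layer of the network.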
Grad-CAM was originally designed for image classification, image captioning, and visual question-answering tasks <cit.>; it has since been applied to numerous tasks such as object detection or reinforcement learning <cit.>. However, it has been shown that it can also produce meaningful interpretations for regression tasks <cit.>. One shortcoming of gradient-based interpretability methods such as Grad-CAM is that the results are subject-specific and do not allow for a population-wide interpretation. In the medical context, subject-specific interpretation can be of interest in individual assessments, while a population-level map might hold more generalisable information. We tackle this by generating population-wide interpretability maps using registration methods on previously unseen data. §.§ Population-wide Studies and Medical Atlases Medical imaging is indispensable for medical research and assessment. However, medical images mostly come with high inter-subject variability that can stem from different morphologies or even just different positions in the scanner. Therefore, medical atlases are frequently used to allow for inter-subject or inter-population comparisons. They map several medical images into a common coordinate system, using registration techniques <cit.>. The registered images are then averaged in order to acquire a template of a specific image modality. This is widely used for brain imaging, where atlases are used to generate an average representation of the human brain <cit.>. Atlas generation on the whole body has been explored considerably less due to a much higher inter-subject variability compared to brain images. However, there are some works focusing on body MR atlas generation that have shown promising applications for these atlases <cit.>. In this work, we utilise conditional atlases generated on a subset of the whole population, split by sex and BMI group (healthy, overweight, obese). Consequently, we obtain six comprehensive whole-body atlases. We utilise Grad-CAM to generate subject-specific importance maps, which are subsequently aligned with these atlases, yielding population-wide importance maps. § MATERIALS AND METHODS §.§ Dataset The UK Biobank (UKBB) dataset <cit.> is a large-scale longitudinal study that has been conducted in the UK since 2006. It contains information from approximately 100,000 participants, with a wide range of data such as genetics, biological samples and MR images from the brain, heart and abdomen. In this work, we utilise the whole-body neck-to-knee MR images acquired with the Dixon technique for internal fat across six stations. We use the water contrast images and stitch the stations together using a publicly available tool <cit.>. We select 3120 subjects with a balanced distribution across age, sex and BMI. 1536 subjects were used for training, 384 for validation and 1200 for testing. The ages range from 46 to 81, and the mean age is 63.58 years. §.§ Training Pipeline We train a 3D ResNet-18 model <cit.> from torchvision <cit.> with a hidden layer size of 256. The training is performed using the adaptive moment estimation (Adam) optimiser <cit.> and by minimising the mean absolute error (MAE) of the age predictions. Furthermore, we use a gradient accumulation scheduler which sums and averages the gradients from 32 consecutive mini-batches to update the model's parameters. The initial learning rate is 1e-4, derived from manual tuning and reduced via scheduling when the validation error does not decrease for three epochs.
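A minimal sketch of this training configuration is given below. The choice of torchvision's video `r3d_18` as the 3D ResNet-18 variant, the single-channel input (water contrast), and the exact structure of the 256-unit head are our assumptions; the text only specifies the backbone type, optimiser, loss, 32-step gradient accumulation, initial learning rate, and plateau-based scheduling.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

ACCUM_STEPS = 32   # average gradients over 32 consecutive mini-batches per update
device = "cuda" if torch.cuda.is_available() else "cpu"

model = r3d_18(weights=None)                       # 3D ResNet-18 backbone
model.stem[0] = nn.Conv3d(1, 64, kernel_size=(3, 7, 7), stride=(1, 2, 2),
                          padding=(1, 3, 3), bias=False)        # single-channel volumes
model.fc = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 1))
model = model.to(device)

criterion = nn.L1Loss()                            # mean absolute error on the age
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min", patience=3)

def train_one_epoch(loader):
    model.train()
    optimizer.zero_grad()
    for step, (volume, age) in enumerate(loader):
        pred = model(volume.to(device)).squeeze(1)
        loss = criterion(pred, age.to(device)) / ACCUM_STEPS    # scale for accumulation
        loss.backward()                                         # gradients sum in .grad
        if (step + 1) % ACCUM_STEPS == 0:
            optimizer.step()
            optimizer.zero_grad()

# after each epoch: scheduler.step(validation_mae)  # reduce LR when validation MAE stalls
```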
The model was trained for 100 epochs, which lasted approximately 48 hours on an NVIDIA A40 GPU. The application of Grad-CAM is independent of the training process. After training the model, we apply Grad-CAM on the third layer of the network, at inference and on the test set to evaluate the essential body areas related to age prediction. §.§ Registration and Atlas Generation We map all subject-level Grad-CAM maps onto an atlas to investigate the interpretability of our age prediction on a population level. Given the high variability of whole-body MR scans, we split all subjects into subgroups depending on their sex and BMI, following three commonly used BMI groups: healthy, overweight, and obese. This leads to six distinct atlases. The registration process is done by first registering all images of a sex and BMI group to the same target subject. We apply two methods: affine and deformable registration. Affine registration refers to a set of rigid transformations such as rotation, translation, shearing and scaling. These types of transformations allow for a coarse alignment; they do not deform the anatomy of the given subject but only correct the overall position and orientation. The resulting images are then deformed with a non-rigid registration for a more refined registration. This step is more localised and allows for a more detailed alignment. Once all images are registered, the resulting deformation fields are applied to their corresponding activation map, as shown in Figure <ref>. Subsequently, an average map is generated from each subgroup of the dataset which serves as our population-wide interpretability map. § RESULTS AND DISCUSSION We here summarise the results obtained from our experiments, including the age prediction, the extraction of the Grad-CAM importance maps, and the generation of a population-wide importance map, and discuss our findings. §.§ Age Prediction and Grad-CAM results We evaluate our 3D age prediction model, trained on 1536 training samples, by randomly extracting 1200 previously unseen subjects that are approximately equally distributed across all BMI and age groups. Our model achieves a mean absolute error (MAE) of 2.76 years on this test set. Table <ref> summarises the model's performance divided into the same groups that are used for the atlas generation. We can see that the model performs best on healthy subjects and performance decreases for the other BMI groups. To extract the Grad-CAM importance maps, we follow the original approach introduced by <cit.>. We extract the importance maps from the inference runs of the 1200 test subjects (200 of each group) and register them to the sub-group atlases to obtain the population-wide attention maps shown in Figures <ref> and <ref>. By visual assessment of these individual maps, we identify three main areas of importance: (1) the spine, (2) the autochthonous muscles of the back, and (3) the heart region, including the myocardium (muscle tissue surrounding the heart) and the great adjacent vessels (e.g. the aorta). These regions are consistently highlighted over every atlas. Additional highlighted regions comprise the thyroid gland, as we can see in the top left visualisation of both Figures <ref> and <ref>, the knees (bottom left in Figure <ref>) and the abdominal fatty tissue (middle right in Figure <ref>). The additional regions show lesser importance but are identifiable in several groups. 
These findings align with existing research, as these regions have demonstrated age-related impacts <cit.>, which makes us believe that these population-wide activation maps have great potential to investigate those areas in the body that show the greatest changes with ageing. § CONCLUSION AND FUTURE WORK In this work, we investigate the locality of age information in whole-body MR images. We train a 3D ResNet-18 model on around 1500 neck-to-knee MR images from the UK Biobank <cit.>. Our model achieves whole-body age prediction with a mean absolute error of 2.76 years on the test set. In order to investigate which parts of the body are highly predictive of a subject's age, we apply Grad-CAM to the gradients derived from each test subject. The obtained individual importance maps for all subjects highlight three main regions of interest: the spine, the heart, and the autochthonous back muscles. However, these importance maps are subject-specific and do not allow for an easy generalisation to the whole population. We tackle this by registering all importance maps into the same coordinate space and overlaying them with an extracted atlas of the whole-body MR images. The results of these population-wide importance maps are visualised in Figures <ref> and <ref>. Despite mapping individual importance maps to the population atlas and across various groups, these areas stay consistent. Hence, we can deduce that these findings remain valid for the entire population. Moreover, the most highlighted areas of the importance maps align with medical knowledge about ageing. This shows promise in investigating the resulting importance maps in more detail with medical experts to potentially unveil knowledge on ageing in previously unexplored body regions. Furthermore, we recognise the opportunity to extract higher-quality interpretability maps. We aim to further investigate different interpretability methods, such as perturbation-based methods <cit.> or attention-based models, such as Vision Transformers <cit.>. The ultimate target of generating these age-based importance maps is to investigate the mechanisms of ageing at a global scale and understand its behaviours. We can use the model's performance to identify subjects that show accelerated or decelerated ageing, to investigate potential anomalies in their physiology. We believe that taking the interpretability maps from a subject level to a population level allows us to draw more general conclusions about age-related areas in the human body which can guide future research of ageing and age-related diseases. §.§.§ Acknowledgements TM and SS were supported by the ERC (Deep4MI - 884622). This work has been conducted under the UK Biobank application 87802. SS has furthermore been supported by BMBF and the NextGenerationEU of the European Union.
http://arxiv.org/abs/2307.03903v2
20230708050310
Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification
[ "Huafeng Li", "Le Xu", "Yafei Zhang", "Dapeng Tao", "Zhengtao Yu" ]
cs.CV
[ "cs.CV" ]
Article Title]Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification 1]Huafeng [email protected] These authors contributed equally to this work. 1]Le [email protected] These authors contributed equally to this work. [1]Yafei [email protected] 2]Dapeng [email protected] 1]Zhengtao [email protected] [1]Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, Yunnan, China [2]School of Information Science and Engineering, Yunnan University, Kunming, 650091, Yunnan, China In visible-infrared video person re-identification (re-ID), extracting features not affected by complex scenes (such as modality, camera views, pedestrian pose, background, etc.) changes, and mining and utilizing motion information are the keys to solving cross-modal pedestrian identity matching. To this end, the paper proposes a new visible-infrared video person re-ID method from a novel perspective, i.e., adversarial self-attack defense and spatial-temporal relation mining. In this work, the changes of views, posture, background and modal discrepancy are considered as the main factors that cause the perturbations of person identity features. Such interference information contained in the training samples is used as an adversarial perturbation. It performs adversarial attacks on the re-ID model during the training to make the model more robust to these unfavorable factors. The attack from the adversarial perturbation is introduced by activating the interference information contained in the input samples without generating adversarial samples, and it can be thus called adversarial self-attack. This design allows adversarial attack and defense to be integrated into one framework. This paper further proposes a spatial-temporal information-guided feature representation network to use the information in video sequences. The network cannot only extract the information contained in the video-frame sequences but also use the relation of the local information in space to guide the network to extract more robust features. The proposed method exhibits compelling performance on large-scale cross-modality video datasets. The source code of the proposed method will be released at <https://github.com/lhf12278/xxx>. [ [ August 12, 2023 =================== § INTRODUCTION Person re-identification (re-ID) is a technology used to determine whether the person images or sequences captured by non-overlapping cameras belong to the same identity <cit.>. Most of the related works, such as domain generalization <cit.> and domain adaptation<cit.>, focus on person re-ID under normal illumination (visible modality). In recent years, cross-modality person re-ID <cit.> based on visible and infrared images have attracted more and more attention since they can meet the requirements of pedestrian image matching under poor illumination at night. The main difficulty faced by visible-infrared person re-ID is the modality discrepancy between pedestrian images of two modalities. Most methods <cit.> attempt to study how the features can be learned without being affected by the modality of images. These methods can be roughly divided into methods based on adversarial learning <cit.>, methods guided by intermediate modality <cit.>, and methods embedded with high-level semantic information <cit.>. 
The adversarial learning-based methods achieve modal confusion by adversarial training between the encoder and the discriminator, thereby reducing the difference between different modalities. The intermediate modality-based methods use the information of intermediate modality to guide or strengthen the role of modality invariant features in identity matching. The semantic information-based methods improve the cross-modality capability of features by introducing high-level semantic information into visual features. The above methods only consider modal discrepancy's impact on person identity matching. However, some aspects are ignored, such as the diversity of pedestrian appearance features caused by view discrepancies, diverse postures of person, and background changes, etc. Moreover, they are designed for the matching between person images and do not consider the information contained in the video sequences. Therefore, if the existing cross-modality person re-ID methods are directly applied to the cross-modality video person re-ID, the retrieval performance may not be optimal. Although video-based person re-ID methods <cit.> are widely applied, they usually do not consider the relationship between different parts of the pedestrian's body under the motion state, which limits the further improvement of the recognition performance. Furthermore, most of the existing video person re-ID methods focus on the identity matching between video sequences under normal lighting conditions, ignoring the impact of the modal discrepancy between infrared and visible person images. Although Lin et al. <cit.> proposed a video-based cross-modality person re-ID method and created the first dataset for this task recently, there are still few studies involving the solution to this problem. We propose a cross-modality video person re-ID method from a novel view—adversarial self-attack and defense. In the proposed method, we regard all unfavorable factors contaminating the model performance as adversarial perturbations. Factors such as the change of camera view, the existence of occluders, the difference in posture, and the gap between modalities lead to a certain diversity of appearance features of the same identity, as illustrated in Fig. <ref>. We regard all the differences in appearance features of the same identity caused by all factors as information perturbations. The robustness of the re-ID model is strengthened by improving its defense ability against perturbations. Technically, the proposed method is mainly composed of the adversarial self-attack module (ASAM), adversarial defense module (ADM) and feature representation module under spatial-temporal information guidance (FRM-STIG). The ASAM is mainly used to activate the adversarial perturbations and implement the attack on the ADM. More specifically, the ASAM is used to guide the single-modality feature extraction network to activate the perturbations in the input training samples. With the effect of ASAM, the robustness of the ADM is enhanced in adversarial training of the ADM. The proposed method does not need to synthesize adversarial samples to train the model but activates the adversarial perturbations of the training samples to realize the adversarial attack. In FRM-STIG, considering the discrimination of the spatial relationship of different body parts under the motion state, we propose to embed the temporal information contained in the video frames into spatial relations of different body parts. 
To effectively utilize spatial relation to improve the discrimination of features, we propose a spatial-temporal relation-guided feature representation method. More attention is paid to the features related to motion information and spatial relation. Thanks to this design, both the spatial relation of different body parts during motion and the temporal information can be embedded into the pedestrian features, which helps to improve the accuracy and robustness of person video sequence description. Finally, the features with motion information and the features generated by the guidance of spatial-temporal relations are combined as the final features to describe pedestrians. The main contributions of this paper are as follows: * A solution is proposed to solve the impact of modal discrepancy, posture changes, complex background and other factors on person identity matching. The complex perturbations carried by the multi-modality images are treated as the adversarial attack information of the re-ID model. At the same time, by improving the defense ability of the re-ID model against these perturbations, the robustness of the model to complex factors can be improved accordingly. * An adversarial self-attack strategy is proposed to activate the perturbation information contained in the input samples without generating adversarial samples. This design allows adversarial attack and defense to be integrated into one framework. * A spatial relation mining mechanism is proposed for different parts of a person based on temporal information embedding. A feature highlight mechanism guided by spatial-temporal relations is designed to construct features not affected by modality. * The validity of the proposed method is verified on the challenging large-scale visible-infrared video re-ID dataset—VCM and the state-of-the-art performance is obtained under two commonly used evaluation metrics. The rest of this paper is organized as follows. Section <ref> discusses the related state-of-the-art works. Section <ref> elaborates the proposed method in detail. The experimental results are explained in Section <ref> and Section <ref> summarizes the content of this paper and draws conclusions. § RELATED WORK §.§ Visible-Infrared Person Re-ID To solve the modal discrepancy between visible and infrared person images, Wang et al. <cit.> proposed an adversarial generation method to learn the modality-invariant feature. The encoder can extract the features not affected by the modality information via playing a min-max game between the encoder and the modality discriminator. Given the significance of adversarial learning, a series of practical methods have been conducted <cit.>. However, in those methods, the discriminator is used to identify the modal differences between visible and infrared person images, which may cause the loss of information related to personal identity and is not conducive to matching person identity. Another popular way to learn modality-invariant features is to use the intermediate information between two modalities as guidance <cit.>. Specifically, Zhong et al. <cit.> proposed the gray-scale image of a person as an intermediate modality to assist in extracting modality-invariant features. Considering the modality invariance of edge details of a person image, Gao et al. <cit.> enhanced the cross-modal matching ability of features by highlighting the role of edge details in features. Basaran et al. <cit.> proposed to extract modality-invariant identity features by introducing the anaglyph. 
However, it ignores the high-level semantic information between different body parts or critical points of a person. Such information is usually modal invariant and often used in cross-modality person re-ID. Miao et al. <cit.> proposed a cross-modality person re-ID method based on high-order relationship mining of person key points. Chen et al.<cit.> proposed a modality-invariant feature extraction method by mining different part relationships. Those methods are devoted to extracting features not affected by modality, where the challenges to person identity matching caused by the diversity of person appearance features are not considered. The proposed method is based on adversarial self-attack and defense such that the changes in personal appearance features caused by all factors are deemed adversarial perturbations. The shortcomings of the above methods can be alleviated by improving the model's robustness against such perturbations. §.§ Video Person Re-ID Videos usually contain many motion information, which carries pedestrian identity clues. The video person re-ID has received more and more attention <cit.>. Wu et al. <cit.> proposed 3D ConvNet as a new layer of feature extraction network to extract a person's appearance and motion information from the video sequences simultaneously. Chen et al. <cit.> proposed spatial-temporal awareness to pay attention to the significant parts of a person in both temporal and spatial domains simultaneously and highlight the effect of this part in identity matching. Li et al.<cit.> proposed a global-local temporal representation (GLTR) method for video person re-ID. This method aggregates the short-term temporal cues and long-term relations as the final GLTR. Liu et al. <cit.> proposed a co-saliency spatial-temporal interaction network (CSTNet) for video person re-ID. The method learned discrimination feature representation by capturing the salient foreground regions in the video and exploring the spatial-temporal long-range context interdependency from such regions. Yang et al. <cit.> designed a two-stream dynamic pyramid representation model to solve the problems of mining spatial-temporal information, suppressing redundant information and improving data quality for video person re-ID. The method used dynamic pyramid deflated convolution and pyramid attention pooling to acquire the person's motion information and static appearance. Eom et al. <cit.> designed a spatial and temporal memory network to address the challenge of person occlusion by using prior knowledge that spatial distractors always appear in a particular location. In contrast, temporal distractors usually appear in the first few frames. Liu et al. <cit.> adopted a bi-directional (forward and backward) mechanism to extract the temporal information in the video sequence. Although the above methods effectively utilized the motion information for person re-ID, they ignore the potential structural relationship of a person's body parts in space, limiting the further improvement of feature discrimination. Yan et al. <cit.> proposed multi-granular hypergraphs to mine the temporal information of different granularity regions. They modeled spatial-temporal dependencies in terms of multiple granularities, which effectively improved the performance of video person re-ID. Liu et al.<cit.> proposed a spatial-temporal correlation multi-scale topology learning framework to realize video person re-ID. 
The method achieved hierarchical spatial-temporal dependencies and pedestrian structure information through 3D and cross-scale graph convolution. To solve the problem that 3D convolution is easily affected by the misalignment of person features in mining temporal information, Chen et al. <cit.> proposed a human-oriented graph method. Although the above methods based on graph convolution can mine the spatial relationship between nodes, they cannot extract long-term spatial cues. Since the transformer is more suitable for extracting the long-term relationship of features, Zhang et al. <cit.> proposed a spatial-temporal transformer for video person re-ID. The method is mainly composed of a spatial transformer and a temporary transformer. The former is used to extract the spatial features of person images, and the latter is used to extract the features of a person in video sequences. Although these methods consider the static spatial structure relation between different person regions, they ignore the discrimination of different person body parts when moving. The proposed method embeds the temporal information into the spatial structure information mining, resulting in a spatial relation mining scheme for different body parts of pedestrians in the state of motion. §.§ Adversarial Attack and Defense in Person Re-ID Adversarial attacks are designed to deprive the original performance of the deep neural network by adding small-magnitude perturbations to original samples. The concept was first proposed by Szegedy et al. <cit.>. Wang et al. <cit.> developed a multi-stage network to perform back-box attack, given the importance of cross-dataset transferability in Re-ID. It pyramids the features at different levels to extract the general and transferable features for the adversarial perturbations. To explore whether the re-ID model based on CNN is vulnerable to the attack of adversarial samples, Wang et al. <cit.> proposed an attack method called advPattern to generate adversarial patterns on clothes. Those methods focus on generating adversarial samples to invalidate the re-ID model without considering how to defend against attacks from adversarial samples. One of the easiest ways to improve the re-ID model's robustness to adversarial examples is to incorporate them into training. In addition, some researchers consider identifying and excluding the adversarial samples from the training dataset via the detection algorithm, which can also avoid the attack from the adversarial samples on the model. Specifically, Wang et al. <cit.> proposed a multi-expert adversarial attack detection method to detect adversarial attack by checking context inconsistency. To fill the gap between training samples and test samples, Bai et al. <cit.> developed an adversarial metric attack while presenting an early attempt to produce a metric-preserving network, thereby protecting metrics from adversarial attacks. To defend the model against the attack of the adversarial samples, although it is simple and effective to use the adversarial samples directly to train the model, it does not maximize the robustness of the model to the adversarial samples. In this paper, we elaborately devise an adversarial self-attack and defense approach that enables the model to defend against the impact of the diversity of person identity features on matching performance. 
Unlike the existing methods for generating adversarial samples, the proposed method replaces the role of the adversarial samples by activating the adversarial perturbations contained in the training samples. The proposed method integrates adversarial attack and defense within a single framework. § PROPOSED METHOD §.§ Overview The framework proposed in this paper is mainly composed of the adversarial self-attack module (ASAM), adversarial defense module (ADM) and feature representation module under spatial-temporal information guidance (FRM-STIG), as shown in Fig. <ref>. The ASAM is mainly used to activate perturbations in the training samples and achieve the re-ID model's adversarial training. The ADM extracts discrimination features from the sample in which the perturbations are activated. The FRM-STIG extracts the information carried in the sequence and uses them to enhance the effect of features related to motion information. To comprehensively use the information carried in the video sequences, the FRM-STIG integrates visual features with spatial-temporal information to accurately describe a person. §.§ Adversarial Self-Attack Module The ASAM is designed to enable the training samples to replace the role of the adversarial samples. It is implemented in a single ResNet50 framework. The ASAM module contains the Conv Layer, Intra-Modality Self-Attention (IMSA) Layer, Feature Encoder E_att, Global Average Pooling (GAP) Layer, and Batch Normalization (BN) Layer. The Conv Layer here refers to the first convolution layer of ResNet50. The E_att is composed of the last four layers of ResNet-50. The IMSA is used to highlight the role of the perturbations in the feature maps output by the Conv Layer. This Conv Layer generates perturbation information in the training samples to make the re-ID model yield its original performance. Compared with existing methods, ASAM does not need to generate new adversarial samples and only uses the original ones to achieve regular and adversarial training for the re-ID model. We denote the video sequence of different modality of the same identity V= { V_t∈ℝ^H × W} _t = 1^T and I = { I_t∈ℝ^H × W} _t = 1^T, where H and W represent the height and width of a single video frame, V and I represent a sequence of person video frames in visible and infrared modality, respectively. T is the total number of frames in a sequence, t means the index of the t-th frame. The results obtained by inputting the video sequence V and I into the Conv Layer can be expressed as: F_t,c^V = E^V( V_t), F_t,c^I = E^I( I_t) (t=1,2, ⋯, T) , where F_t,c^V and F_t,c^I denote the features output by the Conv Layer. E^V and E^I are the encoders consisting of the first convolution layer of ResNet-50, and their parameters are not shared. The encoders E^I and E^V are respectively used to extract the shallow features of visible and infrared person images. To highlight the role of the perturbations carried in F_t, c^V and F_t, c^I, we first send F_t, c^V and F_t, c^I to the IMSA layer, whose structure is shown in Fig. <ref> and the output results can be expressed as: F̂_t^V = IMSA( F_t,c^V), F̂_t^I = IMSA( F_t,c^I) . To activate the perturbations in F_t,c^V and F_t,c^I such that they can replace the generation of adversarial examples, we send F̂_t^V and F̂_t^I to E_att. After the perturbation information is activated, the feature of the output by the E_att followed by GAP and BN should be misclassified by the pre-trained person identity classifier W_def of ADM. 
To this end, we use the following identify loss function to optimize E^V, E^I and E_att: ℓ_cov_id=-2/n_b(∑_i=1^n_b/2 q(log( W_def( f_i^V))+log( W_def( f_i^I))) , where W_def is a pre-trained person identity classifier, q=(1/M,1/M, … ,1/M)^T, M is the total number of person identifies in the training set, n_b is the number of video sequences in a batch, and f_i^l= BN(GAP( E_att(F̂_1, i^l, F̂_2,i^l, ⋯, F̂_T,i^l))) , where F̂_t,i^l(t = 1, 2 ⋯, T; l = V, I; i = 1, 2, ⋯, n_b/2) is the feature map of the t-th frame of the i-th sequence in the modality l output by IMSA. Minimizing Eq. (3) would activate the perturbations in the person image. In this paper, the perturbations are regarded as adversarial attack information. The activation helps improve the disturbing immunity of the defense network. In order to make the re-ID model robust to the diversity of pedestrian appearance features, f_i^l has been expected to practice adversarial attack and also related to the person identity. E^V, E^I, E_att are further updated by: ℓ_att_id= - 2/n_b(∑_i = 1^n_b/2 q_i (log ( W_att( f_i^V))+ log ( W_att( f_i^I)))) , where q_i is a one-hot vector representing the identity of f_i^V and f_i^I. W_att is the person identity classifier only used in ASAM. §.§ Adversarial Defense Module ASAM is to activate the perturbations in the training samples and replace the role of the adversarial samples in the adversarial training to improve the robustness of the defense network E_def against the perturbations. To make E_def more immune to attack from the perturbations, an Adversarial Defense Module (ADM) is designed. ADM is mainly composed of defense network E_def, cross-modality cross-attention (CMCA) layer, GAP and BN layers, as shown in Fig. <ref>. The main task of the ADM is to endow E_def with strong defense ability against the perturbations. In cross-modality person re-ID, modal-invariant features play a positive role in promoting the matching accuracy of person identities. Therefore, the features extracted by E_def contain rich information on different modalities, which would be helpful to defend against attacks from feature perturbation. A CMCA layer is embedded in the ADM, as shown in Fig. <ref>. As shown in Fig. <ref>, there are two CMCA layers in the ADM, one embedded after the third convolution layer of E_def, and the other embedded after the last convolution layer of E_def. E_def3 is composed of the first three convolution layers of E_def as an encoder. After the perturbations is activated by the ASAM, the feature maps F_t,c^V and F_t,c^I of the t-th frame are sent to the encoders E_def3 and E_def, the results are: F_t,d3^V= E_def3( F_t,c^V), F_t,d3^I= E_def3( F_t,c^I) F_t,d^V= E_def( F_t,c^V), F_t,d^I= E_def( F_t,c^I) . After F_t, d3^V, F_t, d3^I, F_t, d^V and F_t, d^I are input into CMCA layer, the results can be expressed as: F̅_t,d3^V = ConLa_4 (CMCA( F_t, d3^V, F_t, d3^I)) F̅_t,d3^I = ConLa_4 (CMCA( F_t, d3^I, F_t, d3^V)) F̅_t,d^V = CMCA( F_t, d^V, F_t, d^I) F̅_t,d^I = CMCA ( F_t, d^I, F_t, d^V) , where ConLa_4 denotes the last convolution layer of E_def. The common information can be extracted by embedding the CMCA layer in E_def. The first CMCA layer enables E_def to extract discrimination feature maps with the common information on a shallow convolution layer. The second CMCA layer is used to ensure that the feature maps extracted by E_def contain common information for identity matching. 
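Before moving on to the feature fusion in the defense branch, the two ASAM objectives above, Eq. (3) and Eq. (5), can be summarized in a short PyTorch sketch. This is a minimal illustration rather than the authors' released implementation: the helper name asam_losses and the tensor layout are assumptions, and only the loss computation is shown (the encoders E^V, E^I, E_att and the classifiers W_def, W_att are taken as given).

```python
import torch
import torch.nn.functional as F

def asam_losses(feats_v, feats_i, labels, W_def, W_att, num_ids):
    """Sketch of the ASAM objectives in Eq. (3) and Eq. (5).

    feats_v, feats_i : (B, D) sequence-level features f_i^V and f_i^I,
                       i.e. the output of E_att -> GAP -> BN per modality.
    labels           : (B,) ground-truth identity indices q_i.
    W_def            : pre-trained identity classifier of the ADM,
                       assumed frozen (e.g. requires_grad_(False)).
    W_att            : identity classifier used only inside the ASAM.
    num_ids          : M, the number of identities in the training set.
    """
    # Eq. (3): cross-entropy against the uniform target q = (1/M, ..., 1/M).
    # Minimizing it makes the frozen defense classifier unable to recognise
    # the identity, i.e. it "activates" the perturbations in the samples.
    uniform = torch.full((feats_v.size(0), num_ids), 1.0 / num_ids,
                         device=feats_v.device)
    loss_cov = 0.0
    for feats in (feats_v, feats_i):
        log_probs = F.log_softmax(W_def(feats), dim=1)  # gradients flow to the features only
        loss_cov = loss_cov - (uniform * log_probs).sum(dim=1).mean()

    # Eq. (5): the same features must remain identity-discriminative for the
    # ASAM-internal classifier, so a standard identity loss is added.
    loss_att = (F.cross_entropy(W_att(feats_v), labels)
                + F.cross_entropy(W_att(feats_i), labels))
    return loss_cov, loss_att
```

The first term drives the sequence-level features toward a uniform prediction of the frozen defense classifier, which is how the perturbations already present in the training samples are "activated" without synthesizing adversarial images; the second term keeps those same features identity-discriminative for the ASAM-internal classifier W_att.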
To integrate the complementary information existing in F̅_t,d3^l and F̅_t,d^l (l=V,I) and realize the accurate description of person appearance features, we fuse F̅_t,d3^l and F̅_t,d^l (l=V,I), respectively, and the fused results are sent to the GAP and BN layers. The feature vectors obtained are: f_d^V = BN(GAP((F̅_1,d3^V + F̅_1,d^V)/2, ⋯, (F̅_T,d3^V + F̅_T,d^V)/2)) f_d^I = BN(GAP((F̅_1,d3^I + F̅_1,d^I)/2, ⋯ ,(F̅_T,d3^I + F̅_T,d^I)/2)) . To ensure that f_d^V and f_d^I have strong discrimination, we use the identity loss to optimize E_def: ℓ_def_id =- 2/n_b(∑_i = 1^n_b/2 q_i(log (W_def(f_d,i^V))+ log ( W_def( f_d,i^I)) )) , where f_d, i^V and f_d,i^I represent the features of the i-th sequence in the visible and infrared modality, respectively. The triplet loss is used to solve the hard samples problem: ℓ_def_tri= 1/n_b∑_i = 1^n_b[ f_d,i^a - f_d,i^p_2^2 - f_d,i^a - f_d,i^n_2^2 + α] _ + , where [∙]_+=max{0, ∙}, f_d,i^a and f_d,i^p represent the features of the i-th anchor sample sequence and the hard positive sample sequence with the same identity in a mini-batch. f_d,i^n is a hard negative sample sequence with different identity from f_d,i^a. α denotes the margin (>0, empirically set to 0.3 in this work). The features f_d,i^a, f_d,i^p and f_d,i^n are generated by Eq. (8). §.§ Feature Representation under Spatial-temporal Information Guidance The video sequence of a person contains a lot of motion information, which is not affected by the modality changes. In a single pedestrian image, there is a latent spatial relation between different regions of the pedestrian's body, and different pedestrians usually show different relations. In order to guide the discrimination features learning, a spatial-temporal information mining approach is proposed, as shown in Fig. <ref>. The proposed method is mainly composed of two parts: 1) feature highlighting guided by spatial relation (FH-G-SR) and 2) feature representation embedded by temporal information (FR-E-TI). To highlight the discrimination feature with spatial relation guidance, we first use PCB <cit.> to divide the feature map F_t, d^V and F_t, d^I into K different patches in space, which are converted into feature vectors. The feature vectors of the k-th patches of F_t,d^V and F_t,d^I are denoted as f_t, d^V, k and f_t, d^I, k (k = 1,2, ⋯, K). Based on the experience of PCB, K is set to 6 in this paper. As can be seen in Fig. <ref>, the video sequence features { f_1, d^l, k, ⋯, f_T, d^l,k} are sent into LSTM_mot to obtain the features embedded with motion information, expressed as {f̃_1, d^l,k, ⋯ , f̃_T, d^l,k}. Since the potential spatial relation between patches at the different positions is not involved, a sequence of different patches (of the same frame) is formed and input into LSTM_spa for spatial relationship mining between different patches. Considering that after the last frame passes through LSTM_mot, the obtained features integrate the information of all previous frames, we only form a sequence {f̃_T, d^l,1, ⋯ , f̃_T, d^l,K}(l = V, I) of all patch features of the T-th frame and send it to LSTM_spa. §.§.§ FH-G-SR The result f̅_T, d^l, K obtained after feeding {f̃_T, d^l,1, ⋯ , f̃_T, d^l,K}(l = V, I) into LSTM_spa is the final spatial feature representation. To effectively utilize the spatial relations and the information carried by f_t,d^l,k to highlight discrimination features, we concatenate f̅_T, d^l, K and the original visual feature f_t, d^l, k: f̂_t,d^l,k=concat( f_t,d^l,k, f̅_T,d^l,k) . As shown in Fig. 
<ref>, after f̂_t,d^l,k passes through the linear mapping (LM) layer, ReLU activation function, the other LM layer, and the Sigmoid activation function, the corresponding weight matrix for features highlighting is obtained by: A_t, d^l, k=Sigmoid(LM(ReLU(LM(f̂_t,d^l,k)))) . With A_t, d^l, k, the feature f_t, d^l, k highlighted by spatial relation guidance is: ḟ_t, d^l, k= f_t, d^l, k⊙ A_t, d^l, k. §.§.§ FR-E-TI Although ḟ_t, d^l, k makes use of the spatial relation between patches, it is only the visual feature of a person, and it does not integrate the motion information contained in the video sequence. Therefore, we embedded the features {f̃_1, d^l,k, ⋯, f̃_T, d^l,k} carrying motion information into the enhanced features in Eq. (13): f̈_t, d^l,k = ḟ_t, d^l, k + f̃_t, d^l,k. GAP is used to achieve the fusion of T frame features {f̈_1, d^l, k, ⋯, f̈_T, d^l, k}, and the feature representation of the k-th patch of modality l can be obtained: f̈_d^l,k = GAP(f̈_1, d^l,k, ⋯, f̈_T, d^l,k) . Finally, we concatenate the features of all patches together according to their spatial positions on the image to form a complete person representation f̈_d^l  (l = V, I). In order to ensure its discrimination, the cross entropy loss is deployed: ℓ_p_id= - 2/n_b∑_i = 1^n_b/2 q_ilog( W_se( f̈_d, i^V))+ q_ilog( W_se( f̈_d, i^I)) , where W_se is the identity classifier of f̈_d, i^l which is generated via Eq. (15). f̈_d, i^l denotes the sequence feature of the i-th video sequence in one batch. The overall loss function in the proposed approach is: ℓ_total = ℓ_cov_id+ λ _1 ℓ_att_id+ λ _2 (ℓ_def_id+ ℓ_def_tri) +λ _3 ℓ_p_id, where λ _1, λ _2, λ _3 are hyper-parameters, which are used to adjust the role of the corresponding loss items. The processes are summarized in Algorithm <ref>. § EXPERIMENTS §.§ Experimental Settings The dataset used in this experiment is a large-scale cross-modal video-based person re-ID dataset—VCM proposed by Lin et al. <cit.>, which is the first and only one currently constructed for the visible-infrared video person re-ID task. The dataset is recorded by 12 non-overlapping HD cameras and consists of 251,452 visible images and 211,807 infrared images with a resolution of 3,840 × 2,170. These images are further divided into 11,785 sequences in the visible modality and 10,078 sequences in the infrared modality. The dataset contains 927 identities, where 232,496 images of 500 identities involving a total of 11,061 sequences are used for training, and the remaining 230,763 images of 427 identities involving a total of 10,802 sequences are used for testing. All experiments are carried out on a PC equipped with an NVIDIA TESLA A100 GPU in the Pytorch 1.10 framework <cit.>. In the training phase, all input images are adjusted to 288 × 144. The batch size is set to 32 (i.e., 32 sequences are processed in a mini-batch). In each epoch, 16 sequences of each modality enter the model for training (containing eight identities, each containing two sequences). Each sequence consists of 6 frames, a total of 192 frames. The model is trained for 200 epochs (each of which contains 268 iterations). The first 150 epochs are used to train E^V, E^I, E_att, E_lstm, W_att and W_def. For the remaining 50 epochs, we fix W_def and fine-tune E_def to further enhance the network's defense capability. The entire training is realized by using SGD optimizer with the momentum of 0.9, weight decay of 5 × 10^-4 and learning rate of 0.12. A warm-up strategy <cit.> is applied to tune the learning rate linearly. 
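For concreteness, the feature-highlighting mechanism of Eqs. (11)-(15) described in the previous section can be sketched with a minimal PyTorch module. The module name, the tensor layout, and the broadcasting of the final LSTM_spa output to every patch are assumptions made for illustration only; the released code may organize this differently.

```python
import torch
import torch.nn as nn

class SpatialRelationGate(nn.Module):
    """Sketch of the spatial-relation-guided highlighting, Eqs. (11)-(15)."""

    def __init__(self, dim):
        super().__init__()
        # LM -> ReLU -> LM -> Sigmoid, producing the weight matrix A_{t,d}^{l,k} of Eq. (12).
        self.gate = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(inplace=True),
            nn.Linear(dim, dim),
            nn.Sigmoid(),
        )

    def forward(self, f_patch, f_spatial, f_motion):
        """
        f_patch   : (B, T, K, D) original visual patch features f_{t,d}^{l,k}.
        f_spatial : (B, D) spatial-relation feature from LSTM_spa (last step),
                    broadcast here to every frame and patch (one plausible reading).
        f_motion  : (B, T, K, D) motion-embedded features from LSTM_mot.
        """
        B, T, K, D = f_patch.shape
        f_spa = f_spatial[:, None, None, :].expand(B, T, K, D)
        # Eq. (11): concatenate the visual feature with the spatial-relation feature.
        concat = torch.cat([f_patch, f_spa], dim=-1)
        # Eqs. (12)-(13): gating weights A, then element-wise highlighting.
        highlighted = f_patch * self.gate(concat)
        # Eq. (14): embed the motion information from LSTM_mot.
        fused = highlighted + f_motion
        # Eq. (15): average over the T frames to obtain one feature per patch.
        return fused.mean(dim=1)          # (B, K, D)
```

In this reading the same gate is shared across frames and patches; the K patch outputs would then be concatenated along the feature dimension and fed to the classifier of Eq. (16).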
Cumulative Matching Curve (CMC) <cit.> and mean Average Precision (mAP) <cit.> are used as the evaluation metrics for model performance. §.§ Ablation Study The proposed method consists of an adversarial self-attack module (ASAM), adversarial defense module (ADM), and feature representation module under spatial-temporal information guidance (FRM-STIG). E^V, E^I and E_def trained by cross-entropy loss and triplet loss are regarded as “Baseline”. The “Baseline” is pre-trained on the ImageNet <cit.> before training on the dataset VCM. We denote the method of adding ASAM to the baseline as “Baseline+ASAM”, similarly, we obtain “Baseline+ADM”, “Baseline+FRM-STIG” and “Baseline+ASAM+ADM”. When FH-G-SR is removed from FRM-STIG, and the remaining content is added to “Baseline+ASAM+ADM”. Such method is noted “Baseline+ASAM+ADM+FR-E-TI”. The complete proposed model is marked “Baseline+ASAM+ADM +FRM-STIG”. Furthermore, in order to verify the contribution of LSTM_spa, LSTM_spa is removed from “Baseline+ASAM+ADM+FRM-STIG” and the corresponding model is denoted as “Baseline+ASAM+ADM+FRM-STIG*”. The results of ablation experiment are reported in Table <ref>. Effectiveness of ASAM. To verify the effect of the ASAM, we add the ASAM to the “Baseline”, and obtain the model “Baseline+ASAM”. One can see in Table <ref> that, on the “Infrared to Visible” task, Rank-1 and mAP achieved by “Baseline+ASAM” decrease by 14.4% and 12.57%, respectively. For the task of “Visible to Infrared”, Rank-1 and mAP are reduced by 15% and 14.96%, respectively. These indicate the attacks from the perturbations activated in the shallow features. Effectiveness of ADM. In Table <ref>, Rank-1 and mAP accuracy of “Baseline+ADM” on the task of querying the visible sequence from the infrared sequence (i.e., Infrared to Visible) is 58.92% and 44.50%, respectively. Compared with that of “Baseline”, the performance is improved by 0.87% and 1.27%, respectively. On the task of “Visible to Infrared” task, the accuracy of Rank-1 and mAP reaches 62.58% and 46.55%, respectively. Compared with that of “Baseline”, the performance of “Baseline+ADM” is improved by 1.57% and 0.96%. It implies that the ADM still has a positive effect on the model performance improvement when the ASAM is absent. It can be also observed that when ASAM and ADM are added to “Baseline” together, the performance of “Baseline+ASAM+ADM” is improved. Compared with “Baseline+ADM”, Rank-1 and mAP of “Baseline+ASAM+ADM” on “Infrared to Visible” task are increased from 58.92% and 44.50% to 60.27% and 46.09%. On “Visible to Infrared” task, the performance of Rank-1 and mAP are improved from 62.58% and 46.55% to 63.01% and 48.05%. These demonstrates the effectiveness of the adversarial training. Effectiveness of FR-E-TI. FH-G-SR is removed from FRM-STIG to evaluate the validity of FR-E-TI with temporal information embedding. It can be seen in Table <ref> that when the FR-E-TI is added to “Baseline+ASAM+ADM”, Rank-1 and mAP of the model “Baseline+ASAM+ADM+FR-E-TI” on “Infrared to Visible” (“Visible to Infrared”) task are improved from 60.27% and 46.09% (63.01% and 48.05%) to 63.92% and 48.56% (66.42% and 50.40%), respectively. The improvement verifies the validity of FR-E-TI when FH-G-SR is absent. Effectiveness of FRM-STIG. 
It can be seen in Table <ref> that with FRM-STIG, the performance of “Baseline+FRM-STIG” on the “Infrared to Visible” (“Visible to Infrared”) task, the accuracy of Rank-1 and mAP increases from 58.05% and 43.23% (61.13% and 44.80%) to 61.01% and 45.59% (63.74% and 46.34%) respectively. For the same task, after FRM-STIG is added to “Baseline+ASAM+ADM”, the accuracy of the model “Baseline+ASAM+ADM+FRM-STIG” on Rank-1 and mAP increases from 60.27% and 46.09% (63.01% and 48.05%) to 65.31% and 49.49% (67.66% and 51.76%) respectively. It verifies the contribution of FRM-STIG. Moreover, compared with the performance of “Baseline+ASAM+ADM+FR-E-TI”, Rank-1 and mAP of “Baseline+ASAM+ADM+FRM-STIG” on “Visible to Infrared” (“Infrared to Visible”) are improved from 63.92% and 48.56% (66.42% and 50.40%) to 65.31% and 49.49% (67.66% and 51.76%). It demonstrates the validity of FH-G-SR. Further, Rank-1 and mAP of “B+ASAM+ADM+FRM-STIG*” on “Visible to Infrared” (“Infrared to Visible”) decrease by 1.24% and 0.7% (1.88% and 1.19%) compared with those of “Baseline+ASAM+ADM+FRM-STIG”. It verifies the contribution of LSTM_spa. The visual effect of different settings in the ablation experiment on the retrieval results is shown in Fig. <ref>. From the results shown in Fig. <ref>, one can see that the retrieval accuracy has improved when the ADM is added to the “Baseline”. It found that when ASAM and ADM are added to the “Baseline” together, the model performance is visually improved. It indicates that the adversarial attack and defense strategies proposed are effective. Besides, when FRM-STIG is added to “Baseline+ASAM+ADM”, the matching accuracy of sequences is further improved. Fig. <ref> shows the areas focused by “Baseline” and the proposed method, where the warmer the color is, the more attention the area receives. Those results indicate that the proposed method can better extract discriminative features from the person's body area than the Baseline. §.§ Comparison with State-of-the-Arts In order to verify the superiority of the proposed method over the existing methods, it is compared with LbA <cit.>, MPANet <cit.>, DDAG <cit.>, VSD <cit.>, CAJL <cit.>, MITML <cit.>, where the first five methods are designed for image-based visible-infrared person re-ID and the last one is for visible-infrared video person re-ID. Since the first five methods are proposed for the single-frame visible-infrared person image matching task, we remove FRM-STIG for comparison and such method “Proposed*” in Table <ref>. Rank1 and mAP of “Proposed*” are 60.27% and 46.09% (63.01% and 48.05%) on “Infrared to Visible” (“Visible to Infrared”), which are 3.68% and 4.60% (2.88% and 5.24%) higher than those of the sub-optimal image-based method CAJL. Compared with the latest video person re-ID method MITML, Rank-1 and mAP of the proposed method are increased from 63.74% and 45.31% (64.54% and 47.69%) to 65.31% and 49.49% (67.66% and 51.76%) on the “Infrared to visible” (“Visible to Infrared”) task. It shows that the proposed method outperforms all compared ones. §.§ Parameter Analysis In Eq. (17), three hyper-parameters λ_1, λ_2 and λ_3 need to be set. In this section, we discuss the influence of one hyper-parameter by fixing the other two parameters. The performance of the proposed method with different hyper-parameters is shown in Fig. <ref>. The influence of λ_1. Fig. <ref> (a) and (b) show the effect of λ _1 when it changes in [0.1,9]. 
On the “Infrared to Visible” task, the proposed method achieves the best performance when λ_1 = 1, and the performance degrades when λ_1 > 1. On the “Visible to Infrared” task, the method is insensitive to the value of λ_1. Therefore, we set λ_1 to 1 in our method. The influence of λ_2. Fig. <ref> (c) and (d) show the changes in model performance when λ_2 varies from 0.01 to 4. On the “Infrared to Visible” task, the proposed method achieves the highest recognition accuracy when λ_2 = 0.5. On the “Visible to Infrared” task, it achieves the highest recognition accuracy when λ_2 = 0.1, and when λ_2 > 0.5 the performance shows a significant downward trend on both tasks. In this work, we set λ_2 to 0.5 for both tasks. The influence of λ_3. Fig. <ref> (e) and (f) show the changes in performance of the proposed algorithm on the “Infrared to Visible” and “Visible to Infrared” tasks when λ_3 varies between 0.1 and 5. As indicated in Fig. <ref> (e) and (f), for both recognition tasks the performance of the proposed method peaks when λ_3 reaches 0.5. Therefore, we set λ_3 to 0.5 throughout the experiments. § CONCLUSION An adversarial self-attack defense and a feature representation module under spatial-temporal information guidance are proposed to address the impact of the diversity of pedestrian appearance features on pedestrian identity matching. The method consists of the ASAM, the ADM, and the FRM-STIG. Through the cooperative training of the ASAM and the ADM, the defense network's capability to withstand perturbations of identity features has been improved. The proposed method is robust to modality differences and to feature changes caused by other factors. In addition, FRM-STIG exploits each local feature effectively through a spatial-relation-guided highlighting mechanism. The experimental results show that the proposed method outperforms the compared state-of-the-art methods. Acknowledgments This work was supported by the National Natural Science Foundation of China under Grants 62276120, 61966021 and 62161015. § DECLARATIONS Conflict of interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § DATA AVAILABILITY STATEMENT The datasets for this study can be found in the VCM dataset repository <https://github.com/VCM-project233/MITML>.
http://arxiv.org/abs/2307.04709v1
20230710171722
Fatal mathematical errors in Hong-Page Theorem and Landemore's epistemic argument
[ "Álvaro Romaniega" ]
econ.TH
[ "econ.TH" ]
Fatal errors and misuse of mathematics in the Hong-Page Theorem and Landemore's epistemic argument

Álvaro Romaniega
August 12, 2023
====================

In the pursuit of understanding collective intelligence, the Hong-Page Theorems have been presented as cornerstones of the interplay between diversity and ability. However, upon rigorous examination, there seem to be inherent flaws and misinterpretations within these theorems. Hélène Landemore's application of these theorems in her epistemic argument and her political proposal showcases a rather unsettling misuse of mathematical principles. This paper critically dissects the Hong-Page Theorems, revealing significant inconsistencies and oversights, and underscores the indispensable role of 'ability' in group problem-solving contexts. This paper aims not to undermine the importance of diversity, but rather to highlight the dangers of misusing mathematical principles and the necessity of a more nuanced comprehension of mathematical results when applying them to the social sciences.

In the burgeoning field of collective intelligence theory, one theorem has emerged as particularly influential: the Hong-Page Theorem. This theorem posits that, under certain conditions, a diverse group of problem solvers has the potential to outperform a homogeneous group of high-ability problem solvers. The theorem has been instrumental in shaping discourse around the importance of diversity in decision-making and problem-solving contexts. One notable application of the Hong-Page Theorem is found in the work of Yale professor Hélène Landemore, who uses it as the cornerstone of her political proposal for an "Open Democracy." Landemore's thesis argues that greater cognitive diversity in a collective decision-making process not only enhances its epistemic properties, establishing the epistemic superiority of democracy, but also serves as a normative benchmark. However, the theorem has not gone unchallenged. Mathematician Abigail Thompson has been among its most vocal critics, casting doubt on the use of the Hong-Page Theorem in this way and raising concerns about the validity of the conclusions drawn from it. In this paper we strive to unmask the Hong-Page Theorems as what we believe they truly are: substantially trivial findings, devoid of substantial social content. By doing this, we aim to expose the misconceptions propagated through their unchecked acceptance, especially in their application to real-world socio-political phenomena, such as Hélène Landemore's "Open Democracy" proposition and her epistemic argument for democracy. We hope that our critique stimulates introspection within the collective decision theory community and encourages a more discerning and judicious use of mathematical theorems. It is not appropriate to use a mathematical theorem, without further investigation, merely because its conclusions align with our views. The paper is organized as follows. We begin with a thorough dissection of the Hong-Page "Diversity Trumps Ability" Theorem in Section <ref>.
We delve into the definitions that underpin these theorems and carefully examine the assumptions relating to the problems and problem solvers. We then derive and discuss a series of trivial corollaries from these assumptions, concluding with a concise analysis of other related results and a simplified proof of the theorem. In Section <ref>, the paper takes a critical turn, presenting a series of counterexamples that challenge the robustness of the Hong-Page Theorem. We start by challenging the necessity of the injective function and the 'unique best agent' assumption, eventually moving towards a discussion on the performance and selection of clones. The critique deepens in Section <ref> with the proposition of a new Hong-Page style theorem: 'Ability trumps diversity'. This section revises the original assumptions to allow a fair comparison between ability and diversity, ultimately culminating in the presentation of this theorem. Section <ref> then shifts focus onto the 'Diversity Prediction Theorem' and the 'Crowds Beat Averages Law', discussing their results and pointing out the asymmetric role of 'ability' and 'diversity'. Moving into Section <ref>, we delve into the misuse of mathematics in the original Hong-Page theorems. We analyze how the mathematics was exploited to obscure a trivial fact and answer questions it was not designed to address. We further discuss the issue of using the prestige of mathematics to lend credence to flawed interpretations, and highlight a basic mathematical error in advocating for diversity. In the penultimate section, Section <ref>, we turn our critique towards Hélène Landemore's political proposal. We present her misuse of mathematics and demonstrate a basic misunderstanding of the mathematical theorems. We proceed to analyze the misuse of the hypotheses of the 'Diversity Trumps Ability Theorem' and argue for the vacuousness of the 'Numbers Trump Ability Theorem'. Finally, Section <ref> wraps up the paper, offering a robust summary of the findings and their implications on the ongoing discourse surrounding collective intelligence and the role of diversity and ability within it. Certain sections of this text, particularly the initial ones, presume a degree of mathematical proficiency (although the mathematics used are not particularly involved). However, Sections <ref>, <ref>, and <ref> are essentially devoid of mathematics, instead referencing earlier sections and the results derived there. For the mathematically-intensive portions, I have strived to provide a non-technical explanation, typically prefaced with the phrase, "In other words". A final caveat, this critique should not be taken as a dismissal of the importance of diversity (which I consider one, among others, important epistemic factor) in decision-making, but rather as a call to address the misuse of mathematics in these contexts. It urges us to consider the rigorous and nuanced approach required when applying mathematical theories to sociopolitical constructs. As such, this paper makes a contribution to the ongoing discourse on collective intelligence, fostering a deeper understanding of the mathematical theorems used to achieve a desired conclusion. § THE "DIVERSITY TRUMPS ABILITY" THEOREM §.§ Definitions V_ϕ: X → [0,1] a function and X, for simplicity, finite. The Problem is finding the maximum of V_ϕ. A problem solver is a function ϕ:X→ X. The set of problem solvers is denoted by Φ. 
For a probability measure on X (full support), ν, the expected value of the performance of each agent is given by 𝔼_ν(V_ϕ∘ϕ)=∑_x∈ XV_ϕ(ϕ(x))ν(x) . Let's discuss the intuition behind the problem-solving process. Given an initial state x from the set X, a problem solver aims to find a solution to the problem by mapping x to ϕ(x). In other words, the problem solver transforms the input x into a potential solution, hoping that this transformation will maximize the value of the function V_ϕ. §.§ Problem assumptions [Unique problem] ∀ϕ,ϕ'∈Φ,   V_ϕ=V_ϕ'=:V. In other words, any two problem solvers evaluate the problem space in the same way. This assumption simplifies the analysis by ensuring that all problem solvers have a consistent evaluation criterion for the problem[In real settings, like the ones to which people attempt to apply the theorem, this assumption is far from accurate. People have different values, so right from the beginning, the hypotheses are not satisfied. Nevertheless, in this paper, I want to focus on more profound critiques, even though issues like diversity in values are quite significant.]. [Unique solution] ∃_=1 x^* /  V(x^*)=1. In other words, there is exactly one optimal solution to the problem that maximizes the value of the function V. This assumption allows us to focus on finding the unique solution. [Strictly increasing problem] V is injective, i.e., if V(x)=V(x'), then x=x'. That is, we can order X as {x_1,…,x_|X|} such that V(x_1)<…<V(x_|X|) . In other words, this assumption implies that the problem has a well-defined ordering of potential solutions. The original article did not state explicitly that the value function V is one-to-one. This assumption is necessary for the theorem to hold, as Thompson pointed out, <cit.>. §.§ Problem solver assumptions [Everywhere ability in problem solvers] ∀ ϕ∈Φ:  V(ϕ(x))≥ V(x) ∀  x∈ X . In particular, ϕ(x^*)=x^* . This assumption, in combination with Assumption <ref> and <ref>, states that all problem solvers are able to improve the value of any state. In other words, if a problem solver is applied to a state, the value of the state will never decrease. [No improvement, idempotence] ∀ϕ∈Φ,  ϕ∘ϕ=ϕ . This assumption states that problem solvers are idempotent. In other words, applying a problem solver to a state twice will have the same effect as applying it once. ["Difficulty", imperfect problem solvers] ∀ϕ∈Φ ∃ x  / ϕ(x)≠ x^* . In other words, by hypothesis, for every agent there are instances where they fail to find the optimal solution. [“Diversity”, sufficient unstuck problem solvers] ∀ x∈ X\{x^*} ∃ ϕ∈Φ / ϕ(x)≠ x . In other words, this assumption ensures a “diversity” of problem solvers in Φ, with at least one problem solver capable of making progress from any non-optimal state. [Unique best problem solver] |max_ϕ∈Φ{𝔼_ν(V∘ϕ)}|=1, i.e., there is only one best-performing agent In other words, this assumption states that there is only one problem solver that performs best on average. There is only one problem solver that is the most likely to lead to the optimal state. §.§ Group problem solver assumptions [In series deliberation] The agents Φ'{ϕ_1,…, ϕ_N}, when working together to solve the problem starting at x are equivalent to: * First, i_1 such that x_1ϕ_i_1^x(x)≠ x_0 x. * Second, i_2 such that x_2ϕ_i_2^x(x_2)≠ x_1. * Inductively, i_j such that x_jϕ_i_j^x(x_j-1)≠ x_j-1. This stops at x_n such that it is a fixed point for all elements of {ϕ_1,…, ϕ_N} (all the agents are stuck at the same point, unanimity). 
There can be multiple sequences arriving at the same point. The fixed point exists as x^* is a fixed point for all elements of Φ by assumption. The group performance is tantamount to composition of the functions in a proper way: ϕ^Φ'(x)ϕ_i_n^x∘…ϕ_i_1^x(x) . In other words, this assumption states that a group of problem solvers can be thought of as a sequence that takes turns applying the problem solvers in the group to the current state, such that we approach the optimal value. The group will stop when it reaches a state that is a fixed point for all of the problem solvers in the group, i.e., unanimity. [Clones] There exists an infinite amount of identical copies of each agent ϕ∈Φ. This assumption states that there are an infinite number of problem solvers available. This is necessary for the theorem to hold. §.§ Trivial corollaries from the assumptions By construction (i.e., from the assumptions and not the theorem), we have the following corollaries. Note that no profound or even standard mathematical results are needed; we just need the assumptions mentioned above combined with trivial arithmetic and trivial properties of sets. All members of Φ working together can solve the Problem ∀ x∈ X. This follows straightforwardly from Assumptions <ref>, <ref>, <ref>, and <ref>. Indeed, V(x)<V(ϕ_i_1^x(x))<… <V(ϕ_i_n^x(x_n-1))=1 , for some n≤|X|. ∃ x∈ X such that ∀ N_1∈ℕ, N_1 “clones” of the the best performing agent cannot solve the Problem. By Assumptions <ref> and <ref>, N_1 “clones” of the best performing agent work in the same way as a single clone alone. By Assumption <ref>, no agent can solve the problem alone for some x, so the corollary follows straightforwardly. So, just “rearranging" the assumptions we can trivially (again, no profound mathematics, just basic arithmetic and trivial properties of sets) prove the following version of the “theorem” containing the main conclusion. The advantage of this formulation is that no clones are needed. Given the Assumptions <ref>-<ref>, all problem solvers working together perform better than the best problem solver (in the sense that there is a state x such that the best problem solver cannot solve but the whole group can). Note that the best problem solver is included in the first group. This follows straightforwardly from the corollaries (i.e., the assumptions) given above. Again, this is by construction, based on the way we've formulated our hypotheses. Let us explain the proof in words. By assumption, we have arranged for the best agent not to always solve the problem. On the other hand, by mere assumption, for every state, there is an agent that can reach a better state. As the number of states is finite, they will reach the optimum in a finite number of steps. Note that the “diverse group” includes the best agent. The fact that the “diverse" group” outperforms the best agent is trivial, as they include additional agents and, by hypothesis, they do not worsen the solution and improve it for some states. To connect with the original statement, let us formulate the following version. Given the hypotheses: (𝒜_1) "Ability-diversity" assumptions. The assumptions given above, Assumption <ref>-<ref>. (𝒜_2): "Counting" assumptions. We choose two groups. In the first, N_1 (large) clones such that[Actually, even less is required: it is sufficient to have a subset Φ_0 such that ∀ x∈ X, there are agents that reach x^* in accordance with Assumption <ref>. For the sake of simplicity, we do not make this irrelevant distinction.] 
Φ⊂{ϕ^R_1,…,ϕ^R_N_1}=:Φ_R, and, in the second, there are N_1 clones of the best performing agent selected from a group of N, Φ_B. Then, the performance of Φ_R is better than Φ_B in the sense of (<ref>). It is trivial by the corollaries given above. Indeed, by the first corollary 𝔼_ν(V∘ϕ^Φ_2)=1 . By the second corollary, 𝔼_ν(V∘ϕ^Φ_1)<1 as ∃_≥ 1 x such that V(ϕ^Φ_1(x))<1. So, what did Hong and Page prove in their article? Essentially, they demonstrated that assumption 𝒜_2 holds almost surely (after defining some probability measures). This is a not very difficult probabilistic claim that has nothing to do with either ability or diversity, which are contained in the assumptions 𝒜_1. It is a probabilistic fact that can be shown, regardless of whether the objects considered are diverse agents, incapable problem solvers, balls in a box, or mathematical functions in a Hilbert space. Nevertheless, this is the heart of proof of their article published in the Proceeding of the National Academy of Sciences. But one might question, given that we have shown that Theorem <ref> (a trivial restatement of the assumptions) encapsulates all the information regarding diversity and ability, what is the necessity of introducing clones? This is why Thompson say that the theorem "is trivial. It is stated in a way which obscures its meaning. It has no mathematical interest and little content." We can compare the previous version with the original statement: Given the assumption above, let μ be a probability distribution over Φ with full support. Then, with probability one, a sample path will have the following property: there exist positive integers N and N_1, N > N_1, such that the joint performance of the N_1 independently drawn problem solvers exceeds the joint performance of the N_1 individually best problem solvers among the group of N agents independently drawn from Φ according to μ. As noted by Thompson, <cit.>, the theorem as originally stated was false because Assumption <ref> was not included. See Section <ref> for more details. Note that “N_1 individually best problem solvers” are just clones of the best problem solver (unique by assumption), not, for instance, the first and second according to their expected value (which will perform better). This restriction is imposed by assumption. More precisely, we have a new assumption: By hypothesis we have the following. * The first group is selected randomly from a pool of clones of the elements in Φ. The group size N_1 can be adjusted as required. * Similarly, the second group is chosen independently, but from an identically distributed set of clones of the elements in Φ of size N_1. This selection process follows the stipulations that: * the group size N can be adjusted as required, * the selection allows for the repetition of the best problem solvers. §.§ Other results and simpler proof Assuming the conditions of Theorem <ref> with N_1 large enough, with probability one, * the randomly selected group of N_1 problem solvers will invariably converge on the correct solution without any disagreement and unanimity, * the “random group” always contains the best-performing agent. These facts explain that they can always outperform the best problem solvers. This is straightforward from Corollary <ref> and that, following the first part of Assumption <ref>, the first group includes a copy of Φ μ- almost surely. It is also Lemma 1 in <cit.>. There is unanimity as, for every state x∈ X, the group solution is x^*, where everyone accepts as a solution, ϕ(x^*)=x^*. 
The second statement follows from the Strong Law of Large Numbers, see below for more details, Remark <ref>. For instance, the following Φ{ϕ_1,ϕ_2,ϕ_3} such that V (x) ϕ_1 (x) ϕ_2 (x) ϕ_3 (x) a 1/4 b a b b 1/2 b c b c 3/4 d c c d 1 d d d satisfies the hypotheses of the theorem, but, if the “random” group does not include the best performing agent[Assumptions can be made to exclude the best performing agent, while ensuring that there is another agent that performs as the best one does when needed. Consequently, it is no surprise that the theorem still holds. However, this approach is purely ad hoc.], ϕ_1, then it cannot outperform ϕ_1. In fact, a simpler proof of the theorem can be constructed based on this simple fact. This approach also exposes the theorem's triviality given its underlying assumptions. First, by hypothesis (Assumptions <ref> and <ref>), ∃ x_*∈ X, ϕ^*,ϕ_*∈Φ such that the best agent ϕ^*(x_*)≠ x^* and V(ϕ_*(ϕ^*(x_*)))>V(ϕ^*(x_*)). By hypothesis (Assumptions <ref> and <ref>), V∘ϕ^{ϕ^*, ϕ_*}≥ V∘ϕ^*, where the equality is strict for at least one point. Given that ν has full-support, 𝔼_ν( V∘ϕ^{ϕ^*, ϕ_*})> 𝔼_ν(V∘ϕ^*). Second, we introduce the probabilistic selection of clones. By the Strong Law of Large Numbers (SLLN), μ(ω∈Ω : ⋂_ϕ(lim_N→∞f^N(ϕ)=μ({ϕ})))=1 , where f^N(ϕ) represents the frequency of appearance of ϕ when the size of the group of clones is N. The intersection is finite. For this full-measure set, we define N_ϕ=N_ϕ(ω) as the integer such that, if N≥ N_ϕ, then f^N(ϕ)>μ({ϕ})/2. Following Assumption <ref>, we take N_1max{N_ϕ^*,N_ϕ_*, 2/μ({ϕ^*}), 2/μ({ϕ_*})} for the first event. By these definitions, at least one copy of ϕ_* and ϕ^* are included. For the second event, take N≥2/μ({ϕ^*})N_1, so there are more than N_1 copies of ϕ^* in the second group. The proof then follows from the first part of this argument. In other words, the first paragraph corresponds to the part of the theorem where diversity and ability are put into play, which essentially reduces to the following triviality: by assumption, there are two distinct agents - the best agent, and another agent - and a state x_* such that the best agent does not provide the optimal solution for this state. However, the other agent can improve upon the solution of the best agent for this state. This implies that the performance of a group consisting of the best agent and this additional agent surpasses the performance of the best agent alone, at least for some states. For other states, again by assumption, adding an agent does not worsen the situation, thus completing the deterministic clone-free part of the proof. Subsequently, we apply the strong law of large numbers to ensure that, under the setting of Assumption <ref>, the random group will always contain copies of these two agents, and the best performing agents are all copies of the unique best performing agent. Noting that if X is finite, then Φ must also be finite, we can choose N_1 such that almost surely every member appears in the random group. We just need to set: N_1max_ϕ∈Φ{ N_ϕ, 2/μ({ϕ})} . In the original proof by Hong and Page, N_1 is set so that every member needed to reach the optimum state x^* appears with probability one. Thus, N_1 must be large, which virtually[In the case where ϕ_0 is not needed at all and μ({ϕ_0}) is close to zero, we cannot ensure that, even for a large N_1, one copy is included almost surely.] guarantees that at least one copy of each member of Φ is included in the “random group”. 
In any case, this N_1 as defined is sufficient to ensure the theorem holds true. § REMOVING TECHNICAL HYPOTHESES: COUNTEREXAMPLES The theorem depends critically on certain assumptions that we are going to analyze now. In this section, I will refrain from critiquing certain empirical hypotheses, such as the assumption that agents share the same concept of problem-solving (Assumption <ref>), or that they can recognize the solution (ϕ(x^*)=x^*). Such critiques largely pertain to the plausibility inherent in every model, and one could always defend these by invoking ideal conditions, much as one might assume frictionless systems in physics. Although these critiques can be adequate, a different critique, following a “Moorean style”, will be presented in the following section, where we will revisit some empirical hypotheses (not the ones mentioned above), slightly modifying them to enhance their plausibility, which may lead to contrary conclusions. However, in this section, I wish to focus on certain technical assumptions, often overlooked, that are essential for the theorem to hold. Without these assumptions, the theorem fails. These technical assumptions, by their nature, involve facets of the model (not the underlying reality) that are difficult to verify, hence making it challenging to argue for their plausibility. This raises the question of why we should adopt these hypotheses, rather than others, unless we are trying to reach a particular conclusion. The distinction between empirical and technical assumptions might seem somewhat arbitrary, but it nonetheless serves a useful purpose in our analysis. For instance, assume that we apply the theorem to a jury in a criminal trial. As we will see, the values of V (apart from V(x^*)=1, the right option) are important for the theorem to hold; if certain conditions are not met, then the thesis of the theorem fails. However, how could one verify that the hypotheses on V hold when V is not empirically observable? Similar remarks apply, even more strongly, on how to model clones and select them for (almost infinite) groups. §.§ V is an injection, Assumption <ref> This was pointed out by Thompson and we reproduce it here with minor modifications. This assumption was not originally in <cit.>, making the theorem false. Let X = {a, b, c, d}. Define V (x) and three agents ϕ_1, ϕ_2 and ϕ_3 according to the table below: V (x) ϕ_1 (x) ϕ_2 (x) ϕ_3 (x) a 1/3 d c b b 2/3 b c b c 2/3 c c b d 1 d d d The set of agents Φ = {ϕ_1, ϕ_2, ϕ_3} satisfies all the hypotheses of the theorem. The agents ϕ_1, ϕ_2, ϕ_3 have average values 5/6, 9/12, 9/12 respectively, so ϕ_1 is the “best” agent. Notice that all three agents acting together do not always return the point d, where the maximum of V occurs. Indeed all three agents acting together work only as well as ϕ_1 acting alone. Hence in this case, no group of agents can outperform ϕ_1, or, equivalently, multiple copies of ϕ_1, hence no N and N_1 exist which satisfy the theorem. In real-life applications, the value of V can be highly uncertain. Therefore, it is sensible to assume that, in the case of two states, x,x'∈ X, where it is estimated that V(x)≈ V(x'), we set V(x)=V(x') for practical purposes. This situation should not be disregarded as uncommon. 
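Thompson's counterexample can also be checked mechanically. The short Python script below is only a sketch (the uniform measure ν is assumed, and the helper functions expected_value and reachable are mine); it confirms that ϕ_1 is the unique best agent with expected value 5/6, and that from the states b and c no sequence of agents ever reaches the optimum d, so the whole "diverse" group does no better than ϕ_1 alone, exactly as claimed.

```python
from fractions import Fraction

# Thompson's counterexample to the original (non-injective V) statement.
X = ["a", "b", "c", "d"]
V = {"a": Fraction(1, 3), "b": Fraction(2, 3), "c": Fraction(2, 3), "d": Fraction(1)}

phi1 = {"a": "d", "b": "b", "c": "c", "d": "d"}
phi2 = {"a": "c", "b": "c", "c": "c", "d": "d"}
phi3 = {"a": "b", "b": "b", "c": "b", "d": "d"}
agents = [phi1, phi2, phi3]

# Expected performance E_nu(V o phi) under the uniform measure nu.
def expected_value(phi):
    return sum(V[phi[x]] for x in X) / len(X)

print([expected_value(p) for p in agents])
# phi1 has expected value 5/6; phi2 and phi3 both have 3/4 (= 9/12),
# so phi1 is the unique best agent, as stated in the text.

# States reachable from x by applying agents in any order; the in-series
# deliberation can only wander inside this set.
def reachable(x, group):
    seen, frontier = {x}, [x]
    while frontier:
        y = frontier.pop()
        for phi in group:
            z = phi[y]
            if z not in seen:
                seen.add(z)
                frontier.append(z)
    return seen

for x in X:
    group_solves = "d" in reachable(x, agents)
    best_alone_solves = phi1[x] == "d"
    print(x, group_solves, best_alone_solves)
# From every starting state, the full group reaches the optimum d exactly
# when phi1 alone does (namely from a and d): no N, N_1 can make the random
# group outperform clones of the best agent here.
```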
Nevertheless, as argued in <cit.>, cited by Landemore:

You don’t fail to make it to the cashier in a grocery store when you are completely indifferent between buying one more apple or one more orange, nor do deliberators in a meeting fail to decide on some course of action if two options have precisely equivalent value. Adding a simple tie-breaking rule to the theorem is entirely sufficient to deal with the mathematical hiccup and move forward with the fundamental scientific question at hand.

This argument completely misses the point. The problem is not that we are indifferent between the solutions b and c, but rather that no one knows the solution if we do start at b or c (no one moves from these states to d; the agents get stuck at b or c). The fact that the value function is “indifferent” implies that the hypotheses (in particular, the “diversity” assumption) are not sufficient to guarantee that d is reached. The thesis of the theorem still holds if we replace Assumption <ref> with

∀ x∈ X\{x^*} ∃ ϕ∈Φ such that V(ϕ(x))> V(x).

However, this adjustment only serves to render the theorem more trivial and misapplies the term diversity. This condition simply implies that for every state, there exists an agent that can strictly improve that state. It is unsurprising that in a finite number of steps, these agents reach the maximum, which, by hypothesis, the best problem solver cannot always attain. Consequently, this adjustment does fix the theorem, but at the cost of making it more trivial and highlighting that what the theorem requires is not "diversity", but the existence of a more "able" problem solver who can improve upon areas where others fall short.

§.§ Unique best agent, Assumption <ref>

To justify this assumption, Hong and Page write:

Let ν be the uniform distribution. If the value function V is one to one, then the uniqueness assumption is satisfied.

This is a mathematical mistake. Let us consider X = {a, b, c, d}. Define V(x) such that 0<V(a)<V(b)<V(c)<1 and V(b)<1/2(V(a)+1), and N agents ϕ_1, ϕ_2 and ϕ_i, i=3,…,N, according to the table below:

x   V(x)   ϕ_1(x)   ϕ_2(x)   ϕ_i(x)
a   V(a)   a        c        ϕ_i(a)
b   V(b)   b        c        ϕ_i(b)
c   V(c)   d        c        ϕ_i(c)
d   1      d        d        d

The set of agents Φ = {ϕ_1, ϕ_2, ϕ_i}_i=3^N satisfies all the hypotheses of the theorem, and the agents are ordered according to their expected value. If we set

V(c) := 1/3(V(a)+V(b)+1) ,

then ϕ_1, ϕ_2 have the same “expected ability” under the uniform measure. Furthermore, now the theorem is false. Indeed,

ϕ_1∘ϕ_2(a)=d,  ϕ_1∘ϕ_2(b)=d,  ϕ_1(c)=d,  ϕ_1(d)=ϕ_2(d)=d .

In this case, no group of agents can outperform {ϕ_1, ϕ_2}, so no N and N_1 exist which satisfy the theorem.

Here, we have demonstrated an example involving two agents possessing identical “expected abilities”. Of course, in real-world applications, there would likely be uncertainty or variability in the value of 𝔼_ν(V∘ϕ); thus, it would be prudent to consider an interval rather than a single point. In such circumstances, the top-performing agents might comprise multiple individuals with high probability. However, as demonstrated, the theorem may not necessarily hold in these scenarios.

§.§ Clones performance

As we saw, simply by the assumptions, one million Einsteins, Gausses or von Neumanns are the same as just one of them. Indeed, mathematically, by Assumption <ref>, {ϕ,…,ϕ} is a well-defined set of problem solvers such that ϕ^{ϕ,…,ϕ}=ϕ∘…∘ϕ=ϕ, where the first equality follows from Assumption <ref> and the second from Assumption <ref>. Again, this is just a consequence of the assumptions they arbitrarily made. But this may not make much sense if we want to apply it to real-life scenarios.
More realistic versions could be:

* Improvement: V∘ϕ∘ϕ≥ V∘ϕ (strict inequality for some points). In other words, if a (competent) agent produced a solution after a certain amount of time, say one hour, it would provide a better answer if it had one million hours, or if a “clone” could pick up where it left off.

* Work in parallel[As a technical note, now {ϕ, ϕ} should be considered a multiset (the multiplicity distinguishes multisets).]: V∘ϕ^{ϕ, ϕ}≥ V∘ϕ (strict inequality for some point). In other words, one can imagine that a group of Einsteins would not work sequentially, always producing the same result, but would divide the work, resources, focus, etc., to produce a better answer once they have put all of their findings together.

I am not certain about the most appropriate way to model clones, but the authors' approach does not seem plausible. However, this is necessary for the theorem to stand. Otherwise, as N_1→∞, no group of agents could generally outperform {ϕ, …, ϕ} (N_1 copies of ϕ); we cannot guarantee the existence of an N that would satisfy the theorem. Following Jason Brennan's 'magic wand' thought experiment, let's imagine we are confronted with an exceedingly difficult problem to solve, for instance, the Navier-Stokes Millennium Problem. Suppose we have a magic wand at our disposal that can create agents to solve the problem for us. Should we choose Terence Tao, or should we use the magic wand to create 100 Terence Taos working together to solve our problem? According to the assumptions of the Hong-Page Theorem, this magic wand would be useless.

Regarding the issue of clones, the following is stated in <cit.>:

[...] we present a simpler version of our result where X is assumed to be finite. This finite version makes the insight more straightforward, although it comes at the cost of trivializing some intricate assumptions and arguments. For example, the group of the best-performing agents is proven below to be comprised of identical agents. This is an artifact of the finite version. In the general version under reasonable conditions, the group of the best-performing agents can be shown to be similar, not necessarily the same.

However, this explanation is far from accurate. Clones also appear in the less realistic case where X is not finite. This occurs because we have to take copies from Φ and, if ϕ has already appeared, it can appear again. Moreover, the finite version of the model is neither sufficient nor necessary for proving that the group of the best-performing agents is comprised of identical agents. In a scenario where X is finite, the best agents could be several different ones. This could be easily demonstrated by following my previous example from Section <ref> or <cit.>. In the version where X is not finite, according to Assumption 5 of their appendix, B(ϕ^*,δ)∩Φ={ϕ∈Φ | d(ϕ,ϕ^*)<δ} could contain only one agent, namely ϕ^*. It should also be noted that a finite X represents a more realistic setup. Typically, rendering things continuous simplifies the analysis, as it allows us to use standard calculus, for example, but this is not the case here. It is less realistic to assume that agents have answers for an infinite set of elements than for a finite set.

§.§ Selection of clones

Similarly, the selection of clones appears to be arbitrary and seems tailored to reach the intended conclusion.

* The choice of two independent groups seems arbitrary. Why not fix N and, from the same group, select a random subgroup of size N_1, as well as the best N_1 problem solvers, and then compare?
In such a scenario, the theorem might not hold. Indeed, we need N≫ N_1 for the Strong Law of Large Numbers (SLLN) to apply, since μ({ϕ^*}) can be very small. However, a random group Φ_N_1 of N_1 agents might not include all the problem solvers of Φ, so we cannot guarantee a probability of one, as the theorem does. That is, for N>N_1, there are settings such that

ℙ(Φ⊂Φ_N_1)<1 .

* Permitting repetition is also arbitrary. We could, for instance, select the best problem solvers without allowing repetitions. Recall from Section <ref> that adding a repeated clone is equivalent to adding nothing. This would prevent the paradoxical result that, by mere hypothesis, choosing the best problem solvers from a group of size N is more beneficial when the group size is relatively small, i.e., for choosing the best it is preferable to have fewer options available. However, if we prohibit repetitions, then the theorem does not hold, as the best problem solvers will include those (not counting repeated clones) of the "random" group, so no N and N_1 exist which satisfy the theorem.

We should note the general approach adopted by Hong and Page. They introduce randomness into their model by employing clones. Subsequently, they invoke the Strong Law of Large Numbers to ensure that the frequency of appearance converges to the original probability μ, effectively eliminating the randomness that was introduced and obscuring the results. In the next section, we will remove clones.

§ NEW HONG-PAGE STYLE THEOREM: ABILITY TRUMPS DIVERSITY

We are going to state and prove a new version of the Hong-Page theorems whose hypotheses are of the same kind and at least as plausible (arguably more so, as we will see: for instance, there is no need for clones, and disagreement is possible) as those of the Hong-Page theorem. Nevertheless, we will reach the opposite conclusion, “ability trumps diversity". I am not claiming that this theorem has any social content; it simply reflects that it is the assumptions that do all the work. The moral is that if we create two groups from the group in the original theorem – one in which we make the minimal reduction in ability while ensuring full diversity, and another in which we considerably reduce diversity while ensuring ability – the less diverse group systematically outperforms the fully diverse group. In other words, ability trumps diversity.

§.§ The new assumptions

Among a set of agents Φ, we select two finite groups with different properties. We are going to modify some assumptions, but the others remain the same. First, let us introduce the possibility of disagreement following Assumption <ref> as:

ϕ_i_j+1^x(ϕ_i_j^x(x_j-1))=x_j-1 ,    with   ϕ_i_j^x(x_j-1)≠ x_j-1 and i_j^x≠ i_j+1^x .

A disagreement is a stopping point. In other words, if there is a cycle in which one agent proposes a new solution and another agent reverses that solution, there is a disagreement, and the initial solution is returned as the group solution. This is a simple model where disagreement is possible. Note that in the original formulation of the Hong-Page Theorem, for any group of any size, even if they might not be able to reach the correct solution, there will be no disagreement in any case, as per Assumption <ref>. This always leads to unanimity, which is highly unrealistic.

Let also μ_x be a probability measure such that, if x is the previous solution, μ_x({i}) represents the probability that ϕ_i(x) is the next solution in the deliberation chain, see Assumption <ref>.
This measure has full-support, no one is silenced. The indices are chosen independently. Once fixed the x_0, this defines a probability measure ℙ on the possible paths. That is, ℙ(x_k=x'| x_k-1=x)=μ_x({i  | ϕ_i(x)=x'})>0 . §.§.§ The ability group. Let us denote a group by Φ^A={ϕ_α}_α∈ A which is selected such that: * Ability: V∘ϕ_α≥ V. In other words, this group is chosen to ensure ability in the sense that each agent does not decrease the value of the initial state given. This is Assumption <ref>, but now is imposed by the selection of the group. * Common knowledge: ∃ X_CK⊂ X such that: Non-diversity set: ∀α,α'∈ A we have ϕ_α|_X_CK≡ϕ_α'|_X_CK. Knowledge: ϕ_α (x_c)=x^*  ∀ x_c∈ X_CK ∀ α∈ A. In other words, this group selection comes with a selection bias. The agents have a common knowledge that makes them similar; for the set X_CK, they all give the same solution. This is an extension of the second part of Assumption <ref>; agents are not only able to recognize that x^* is the solution, but they can do the same for other states x∈ X_CK. Note that x^*∈ X_CK. * Smaller diversity set: ∀ x∈ X\ X_CK, Assumption <ref> holds. In other words, for this group, the original assumption of Hong and Page only holds for the “smaller” set X_CK. * More importantly, we diminish diversity in a second way. For all x in X\ X_CK, there exists exactly one agent, ϕ^x, who provides a distinct answer, and the set of unique answers could be equidistributed, meaning it's not just one agent always giving the different answer. Formally, {x | ϕ^x=ϕ}≤|X|/|Φ^A|+1 for all ϕ∈Φ^A. Therefore, if the ratio is small enough, two agents are quite similar, signifying a lack of diversity, i.e., following <cit.>, their distance is relatively small. For the theorem to function, we don't require a large X_CK, but having it large makes the agents less diverse. It could be just x^* as in the original theorem. §.§.§ The diversity group. Let us denote a group by Φ^D={ϕ_j}_j∈𝒥 which is selected such that the maximum diversity is guaranteed. More precisely, there is a unique x^0∈ X such that: * Full-diversity with ability: ∀ x∈ X\{x^0, x^*} there is a set of agents {ϕ_j^x_k}_k=1^n_x⊂Φ^D such that ϕ_j^x_k(x)≠ x and such that all the states, and only those, closer to the solution x^* (that is, all improve the state) are the local optimum for some agent. * Minimal ability loss: there is only one agent ϕ_j_0∈Φ^D and only one state x^0 such that V(ϕ_j_0(x^0))<V(x^0). Note that this is the minimal ability that can be lost. §.§ The theorem Let Φ^A, Φ^D as above with the given assumptions. Then, the ability group outperforms the diversity group. To prove this theorem, we need to compare the performances of the two groups. First, we consider the ability group Φ^A. Any agent from Φ^A does not decrease the value of the given state. Moreover, for any state in the non-diversity set X_CK, all agents in Φ^A will return the optimal solution x^*. Thus, following the measure μ_x', for any x∈ X, V(x)≤ V(ϕ_i_1^x(x))≤…≤ V(ϕ_i_n^x(x_n-1))=1 . This is because p_x'μ_x'^A(α∈ A | ϕ_α(x')≠x')>0. This holds true even in the worst-case scenario where all agents, except one, are stuck at that point. Thus being stuck has probability, by the subadditive property, ∏_i=1^∞(1-p_x')=0 →ℙ(∃ n_0, x'  : ϕ_i^x_n(x')=x'  ∀ n≥ n_0)≤∑_n_0∑_x”∏_i=n_0^∞(1-p_x”) =0 . where the sum in x' is finite. Thus, with probability one, in a finite number of steps we have strict inequalities reaching x^*, returning this as the solution. Thus, for all x∈ X, every path starting at x leads to x^*. 
Thus, 𝔼_μ^A,ν(V∘Φ^A)∑_x∈ Xν(x)𝔼_μ^A(V∘Φ^A(x)))=1 . where Φ^A(x)ϕ^Φ^A. Now, consider the diversity group Φ^D. This group is selected to maximize diversity and only allows minimal ability loss. However, there exists exactly one agent ϕ_j_0 and one state x^0 such that V(ϕ_j_0(x^0))<V(x^0). Let x^(-1)ϕ_j_0(x^0) and let x such that V(x)≤ V(x'). Similar as above, by fineness, the probability that ℙ(∃  i^x_k | ϕ_i^x_k(x_k-1)=x')>0 . We have two possibilities: * If x_k-1=x^(-1) and V(x)≤ V(x^(-1)), then, a “disagreement cycle” can be completed, x_k-1=x^(-1)→ x^0→ x^(-1), returning x^(-1). This happens with probability ∑_k=1^∞ℙ(x_k-1=x^(-1))ℙ(x_k=x^0| x_k-1=x^(-1))μ^D_x^0(j_0)>0, where ℙ(x_k=x^0| x_k-1=x^(-1))=μ^D_x^(-1)({j∈ J  | ϕ_j(x^(-1))=x^0})>0, where we have used the full-diversity assumption. * Also, if x_k-1=x^0 again, completes a disagreement cycle, x^0→ x^(-1)→ x^0, returning x_0. Similarly, this happens with probability ∑_k=1^∞ℙ(x_k-1=x^0)μ^D_x^0(j_0)ℙ(x_k+1=x^(-1)| x_k=x^(-1))>0 . As V(x^(-1))<V(x^0)<1, thus, 𝔼_μ^D,ν(V∘Φ^D)∑_x∈ Xν(x)𝔼_μ^D(V∘Φ^D(x)))<1 . § THE DIVERSITY PREDICTION THEOREM AND THE CROWDS BEAT AVERAGES LAW §.§ The results. They also present another theorem that would be useful later. First, some definitions. Given a set of individuals labeled as i=1,…, N, we associate to each of them a signal or prediction of some magnitude, which has θ as true value. The squared error of an individual's signal equals the square of the difference between the signal and the true outcome: SE(s_i) = (s_i - θ)^2 . The average squared error is given by MSE(s) = 1/n∑_i=1^n(s_i - θ)^2 , with s (s_1, s_2, …, s_n). The collective prediction is c =c(s) = 1/n∑_i=1^n s_i . Predictive diversity of the collective is defined as: σ̂(s) = 1/n∑_i=1^n(s_i - c)^2 . This is simply a (biased) estimation of the variance. Two trivial theorems can be deduced. The first, a particular version of the Pythagoras Theorem: The squared error of the collective prediction equals the average squared error minus the predictive diversity: SE(c(s)) = MSE(s) - σ̂(s) . This is quite standard, but let us give a proof using the (generalized) Pythagoras Theorem. In ℝ^n we can define the standard Euclidean or l^2-norm. If c=(c,…, c) and analogously for θ, then ⟨s-c ,θ-c⟩_l^2=0 so the Pythagoras Theorem gives s-θ_l^2^2=θ-c_l^2^2+s-c_l^2^2 . The squared error of the collective's prediction is less than or equal to the averaged squared error of the individuals that make up the crowd. SE(c(s)) ≤MSE(s) . §.§ The asymmetric role of “ability” and “diversity”. Before we proceed, let's note two simple mathematical observations: MSE and σ̂ cannot be treated as independent as both depend on s. That is, altering one will generally change the other (it is not fixed), with the effect on the prediction error being, in principle, undetermined. Therefore, it would be a significant mathematical error to consider that for the prediction error, SE, to be small is enough to make “diversity”, σ̂ large. These observations are mathematically trivial. Also, they can be graphically demonstrated when we consider the case of n=2, which brings us back to the standard Pythagoras theorem, see Figure <ref>. Knowing either MSE(s) or σ̂ alone is not sufficient to determine the value of the prediction error. In fact, according to the Crowd Beats Averages Law, we can see: SE(MSE, σ̂) ∈ [0,SE^max(s)] . This bound is sharp, with SE^maxMSE. 
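Both results are straightforward to check numerically. In the sketch below (the signal values are invented for illustration), the identity SE = MSE − σ̂ and the bound SE ≤ MSE hold for any inputs, and the last example shows a crowd whose collective error is large while its predictive diversity is even larger, a combination that matters in the discussion that follows.

```python
import numpy as np

def decomposition(signals, theta):
    """Return (SE of the collective, average squared error, predictive diversity)."""
    s   = np.asarray(signals, dtype=float)
    c   = s.mean()                        # collective prediction
    se  = (c - theta) ** 2                # squared error of the collective prediction
    mse = ((s - theta) ** 2).mean()       # average squared error of the individuals
    div = ((s - c) ** 2).mean()           # predictive diversity (biased variance)
    return se, mse, div

theta = 0.0  # true value

# The identity SE = MSE - diversity and the bound SE <= MSE hold for any signals.
for signals in ([1.0, 2.0, 6.0], [-3.0, 0.5, 4.0, 10.0]):
    se, mse, div = decomposition(signals, theta)
    print(se, mse, div, np.isclose(se, mse - div), se <= mse)

# A crowd that is both badly wrong and highly diverse:
# SE = 900 while the diversity term is about 4467 (and MSE about 5367).
se, mse, div = decomposition([-40.0, 10.0, 120.0], theta)
print(se, mse, div)
```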
Since SE is not solely determined by either "ability" or "diversity", it is natural to study the effect of these variables on the maximum prediction error, i.e., SE^max. More precisely:

Let SE^max represent the maximum prediction error. Then,

* If ΔMSE<0, then ΔSE^max<0. In other words, if "ability" increases, the maximum prediction error decreases. In particular, if the increase in ability is large enough, the prediction error will decrease.

* If Δσ̂>0, then ΔSE^max≥0. In other words, if "diversity" increases, the maximum prediction error does not decrease. In particular, an increase in diversity alone does not guarantee a reduction in the prediction error. Furthermore, if the increase in diversity is substantial enough, the maximum prediction error will also increase.

This is a trivial consequence of MSE=SE^max and the twin inequality of the Crowd Beats Averages Law: σ̂≤SE^max. Using the Crowd Beats Averages Law (and other trivial results), we arrive at a seemingly contradictory result: increasing "ability" eventually reduces the prediction error, but increasing diversity ultimately increases the maximum prediction error. Consequently, the Diversity Prediction Theorem and the Crowd Beats Averages Law provide limited insight into how diversity impacts the prediction error in a general setting without controlling for ability.

§ HONG AND PAGE'S MISUSE OF MATHEMATICS: AN OBSCURED TRIVIAL THEOREM

§.§ Misusing the mathematics to obscure a trivial fact

The Hong-Page theorem is, in essence, a misuse of mathematics. It employs standard probability techniques, such as the Borel-Cantelli lemma (as my simpler proof demonstrates, unnecessarily), to obfuscate its hypotheses, making it inaccessible to individuals outside the field. That is, mathematics is used to complicate a simple fact, not to simplify complex relations. Indeed, as we saw in Theorem <ref>, the theorem's conclusion (that a group performs better than the best single individual) is inevitable by construction, by the way the theorem's premises are structured. It posits two fundamental hypotheses: first, that the "best" individual agent, ϕ^*, cannot always solve the problem optimally, and second, that a diverse group Φ of agents can always find an optimal solution. When these assumptions are in play, the conclusion of the theorem is logically guaranteed.

But, to hide this simple fact, the existence and selection of clones are introduced, so as to invoke the probabilistic apparatus. This is done in the second set of assumptions of Theorem <ref>. They define a probability space and prove that, if we can select clones from Φ indefinitely, then with probability one the first group will contain at least one copy of each element of Φ, while the second group is chosen so that it consists only of copies of ϕ^*. Using the previous paragraph, the conclusion follows directly. This constitutes the heart of their article's proof published in the Proceedings of the National Academy of Sciences. But, since we have shown that Theorem <ref>, a simple restatement of the assumptions, encapsulates all the information regarding diversity and ability (the probabilistic part could be applied to anything, like colored stones in a box), one may question the necessity of introducing clones in the first place. Thus, it appears that the theorem's complexity may stem more from an obfuscation of its simple underpinnings than from a deep mathematical truth about diversity and ability.
Thus, while the Hong-Page theorem uses mathematical techniques, its conclusion is more a trivial product of its constructed premises than a deep, unexpected and universal truth revealed through rigorous mathematical exploration. §.§ Misusing the theorem to answer question it does not In <cit.>, they say: These results still leave open an important question: Can a functionally diverse group whose members have less ability outperform a group of people with high ability who may themselves be diverse? The main result of our paper addresses exactly this question. This is false. They insist: To make a more informed decision, the organization administers a test to 1,000 applicants that is designed to reflect their individual abilities in solving such a problem. Suppose the applicants receive scores ranging from 60% to 90%, so that they are all individually capable. Should the organization hire (i) the person with the highest score, (ii) 20 people with the next 20 highest scores, or (iii) 20 people randomly selected from the applicant pool? Ignoring possible problems of communication within a group, the existing literature would suggest that ii is better than i, because more people will search a larger space, but says little about ii vs. iii. The intuition that agents with the highest scores are smarter suggests that the organization should hire ii, the individually best- performing agents. The intuition that the randomly selected agents will be functionally diverse suggests that the organization should hire iii, the randomly selected ones. In this paper, we provide conditions under which iii is better than ii. This is false. By Proposition <ref>, the groups being compared consist of clones that include, at least, all agents necessary to always reach the correct solutions versus clones of the best agent, which, by assumption, is the same as the best agent alone. As N_1 is large enough, the best agent will be included in the first group. Expressed differently, if we consider only one copy for each agent (as more are, by assumption, see Section <ref>, redundant), the groups being compared are Φ versus ϕ^*. Note that ϕ^* ⊂Φ. No random selection is involved, as discussed in Section <ref>. Therefore, a more appropriate comparison would be: * the person with the highest score, * 20 people with the next 20 highest scores, * 20 people randomly selected from the applicant pool, * the 1000 applicants (or however many are needed to always reach the solution) working together perfectly. The Hong-Page paper deals with i) versus iv), a triviality, not, as they explicitly claim, ii) versus iii). During a conference at the European Central Bank (ECB), Page stated: I create a group of the 20 best agents - the best individuals - and I compare them to a random group of 20 agents [...] it turns out though if you do the math on this, the diverse group almost always outperforms the other group if you use reasonable-sized groups, like groups of size 10 or 20 [...] the paper and model I just showed you where diverse groups do better than random groups was written by myself and Lu Hong [...] He used Figure <ref> to illustrate this point. However, as we have mentioned before, this representation is not directly related to the theorem. In the "Alpha Group", the best agent, 138, should be the only member, and this agent should also be included in the Diverse Group, along with all the other agents, see Figure <ref>. 
Furthermore, groups of size 10 or 20 may not be large enough for the SLLN to hold, especially if μ({ϕ}) is small for some agents. In the same conference at the ECB, he further states:

As the problem becomes complex, the best team doesn't consist of the best individuals. Why? Because the best individuals tend to be similar and what you really want on hard problems is diversity.

However, this statement seems to confuse, either deliberately or unintentionally, an assumption with a factual result. The claim that "the best individuals are similar" (actually, clones of the same agent) is not a derived conclusion, but a trivial consequence of the presuppositions, see Section <ref>. The proof of this claim cannot be found in <cit.>; it is established by assumption, Section <ref>. Furthermore, as explored in Section <ref>, even when conducting a fair comparison between ability and diversity - and even when the ability group is characterized by relatively homogeneous problem solvers - ability can still outperform diversity. Therefore, this statement is completely misguided.

§.§ Misusing the prestige of mathematics

When Page claims:

This theorem is no mere metaphor or cute empirical anecdote that may or may not be true ten years from now. It is a mathematical truth.

It is as accurate as asserting that

If p ∧ q, then p ∨ q

is no mere metaphor or cute empirical anecdote that may or may not be true ten years from now. It is a mathematical truth. To be more precise, the logical structure of the theorem is as follows:

* Hypothesis 1, H_1: The best agent cannot always solve the problem.
* Hypothesis 2, H_2: The "diverse group" can always solve the problem.
* Conclusion, C: The "diverse group" outperforms the unique best agent at problem-solving, signifying that "diversity trumps ability".

This argument is logically valid (a tautology), so the proposition H_1 ∧ H_2 → C is certainly true. However, the argument's soundness might be questionable, as the hypotheses might not be factual. Thus, it does not provide any certainty regarding the conclusion, C, i.e., whether diversity indeed outperforms ability. Here, mathematics seems to be used as a tool of persuasion, asserting that it is not ideological, but pure math. However, as we have shown, they are not proving what they claim to be proving.

§.§ A basic mathematical error in advocating for diversity

Scott Page has argued that large diversity implies small prediction error. However, this conclusion, while favorable to the hypothesis that diversity reduces prediction error, constitutes a significant mathematical mistake. Indeed, in a lecture at the University of Michigan (https://www.youtube.com/watch?v=EXn4vOuU3BE&list=WL&ab_channel=OsirisSalazar), Page states:

And you might also ask, where does the madness of crowds originate? How could it be that a crowd could get something completely wrong? Well, that's not difficult to understand either, because crowd error equals average error multiplied by diversity. If I want this to be large, if I want large collective error, then I need large average error, meaning that I need people to be getting things wrong, on average. Additionally, I need diversity to be small. So, the madness of crowds comes from like-minded individuals who are all incorrect, and once again, the equation provides us with this result.

This mathematical misunderstanding involves a basic arithmetic error that we mentioned in Error <ref>.
From the "Diversity Prediction Theorem" (with s term omitted for simplicity), SE= MSE - σ̂ , we cannot deduce that a large SE implies a small σ̂. Rather, it implies that MSE must be much larger than σ̂, where σ̂ could be as large as desired. See, for instance, Figure <ref> for an illustration, where the prediction error is large, but diversity is larger (so it cannot be "small"). § LANDEMORE'S MISUSE OF MATHEMATICS: AN INVALID AND UNSOUND ARGUMENT FOR HER POLITICAL PROPOSAL §.§ The argument The argument, in a nutshell, is, <cit.>: Democracy is here modeled as a collective decision-procedure involving the combination of two mechanisms: inclusive and egalitarian deliberation and simple majority rule. The claim is that democracy thus defined is more likely to yield better solutions and predictions on political questions than less inclusive and less egalitarian decision-rules because it structurally maximizes the cognitive diversity brought to bear on collective problems. Cognitive diversity—here defined as the fact that people see problems in the world and make predictions based on different models of the way the world works or should be interpreted —is a group property that has been shown to be a crucial factor of group performance in various contexts and indeed more important to the problem-solving abilities of a group than individual competence of the members itself (Page 2007). I argue that under the conditions of uncertainty that characterize politics (the fact that the bundle of issues to be faced by any polity over the medium to long term cannot be predicted ahead of time), political decision-making characterized by maximal inclusiveness and equality can be expected to be correlated with greater cognitive diversity, which, in turn, is correlated with better problem-solving and prediction. A central assumption of the argument is that politics is characterized by uncertainty. This uncertainty (which is an assumption about the world, not necessarily the subjective epistemic stage of the deliberators) is what renders all-inclusiveness on an equal basis epistemically attractive as a model for collective decision-making. Given this uncertainty egalitarian inclusiveness is adaptive or “ecologically rational” (Landemore 2014). And the conclusion is: The argument presented here is based on a simple model of democracy and is entirely deductive. It essentially credits the epistemic superiority of democracy to inclusive deliberation, that is, deliberation involving all the members of the community (whether directly or, where unfeasible, through their democratic representatives) [...] The advantage of my deductive epistemic argument, ultimately, is that even if it fails to explain the way actual democracies work, it can serve as a useful normative benchmark to diagnose the way in which existing democracies epistemically dysfunction and imagine alternative institutional arrangements. One implication of the epistemic argument is indeed that in order to obtain the theoretically promised epistemic benefits of democracy, we would need to make the decision-procedures used in actual democracies a lot more inclusive and a lot more egalitarian than they are at present. Institutional reforms that the argument points toward include the replacement of elected representatives with randomly selected ones and a greater use of simple majoritarian decision-making. 
While the argument is not explicitly stated[Despite claiming that 'The argument presented here is based on a simple model of democracy and is entirely deductive,' the precise premises, intermediary steps, and conclusions are never explicitly stated. This should be the first step in constructing the argument and possible replies.], a crucial hypothesis needed for the theorem assumes the following forms: * Hypothesis, H: Cognitive diversity, defined as individuals seeing problems and making predictions based on different models of the world, is a group property that improves group performance in various contexts. * Hypothesis', H': Greater cognitive diversity within a group correlates with better problem-solving and prediction abilities. To justify this, Landemore relies on the results of Hong and Page as described above <cit.>: To make that claim, I essentially rely on Hong and Page’s formal results about the centrality of cognitive diversity to the emergent property of collective intelligence. We aim to demonstrate that this hypothesis is unjustified, which subsequently renders the argument both logically unsound and inapplicable to real-world scenarios. Additionally, we will highlight instances where she incorrectly deduces propositions from these mathematical theorems, leading to a logically invalid argument. Some of the critiques presented in the previous section also apply to Landemore. For instance, when she informally discusses the theorem, she falls into the same misrepresentation as Hong and Page, as discussed in Section <ref>. For instance, she stated in a public https://youtu.be/HERmRx9wDXc?t=1654debate: There are multiple Hong-Page theorems. The one that I use mostly is the 'Diversity Trumps Ability' theorem. It's basically a formalization of the idea that under certain conditions, you're better off solving problems with a group of average people who think differently than with a group of experts or very smart people. As we have previously illustrated, this assertion is entirely false, see Section <ref> and below for more details. §.§ Basic misunderstanding of the mathematical theorems Landemore says (about the Theorem <ref>): Let me pause here to emphasize what a remarkably counterintuitive, indeed amazing, result his is. Where the conditions apply, you are better off with a random group of people who think differently than with a bunch of Einsteins! Who would have thought? In my view, this result should truly change our perspective on what makes groups smart to begin with; I believe it has huge implications for the way we should think about political bodies making decisions on our behalf. Also <cit.>, That theorem was sufficiently counterintuitive that they provided a computational example to provide intuition. This misunderstanding is significant. She is confusing the conclusions of the theorem with its hypotheses. The fact that a 'bunch of Einsteins' is equivalent to only one Einstein (who, by hypothesis, cannot always solve the problem) is not a conclusion; it's an assumption that she fails to mention. More precisely, the hypotheses stipulate that the "random" or "diverse" group always reaches the global solution, see Corollary <ref>. Moreover, by assumption, a group of Einsteins is considered equivalent to one Einstein (Section <ref>). Yet again by assumption, it's not always guaranteed that this group or an individual Einstein reaches the global solution (Assumption <ref>). How is this counterintuitive or surprising? 
It appears to be merely a reiteration of the assumptions, which Landemore never fully discloses. She fails to mention that clones working together are presumed to perform just like a single person working alone (refer to Section <ref>), or that the best agent ("Einstein") is postulated to be unique (see Section <ref>), as detailed in <cit.>, <cit.>. Further details will be elaborated below. Furthermore, by Proposition <ref>, the random group includes a collection of Einsteins. Thus, the basic structure of the argument is: * Hypothesis 1, H_1: Group G_R always reach the optimal solution. G_R includes a collection of Einsteins. * Hypothesis 2, H_2: A collection of Einsteins is not perfect. * Conclusion, C: Group G_R is "better" than a collection of Einsteins. Thus, the "under the right conditions" of Landemore is, basically, presupposing the conclusion. How can someone truly understand the theorem and consider this counterintuitive? Once the probabilistic component, which might be obscuring for non-mathematicians but standard for most mathematicians, is removed, the theorem is a triviality (see Section <ref>). This misunderstanding appears to have significant implications on Landemore's thought (from the same debate as before): The theorem's conclusions are not intuitive at all. I think they run against an entrenched belief that experts know best [...]. What this theorem unveiled for me is the possibility that when it comes to collective intelligence, we should stop thinking of it in terms of an addition of individual intelligences. It's really more about the group property. Does it contain enough diversity that we're going to push each other closer again to this global optimum? And that, I think, is not trivial at all. For me, it was a paradigm shift. Moreover, it leads her to compare the Hong-Page theorem, which is trivial (Section <ref>), with a genuinely profound and counterintuitive theorem, such as Arrow's impossibility theorem. However, she believes the difference in treatment between the theorems is based on the difference in their conclusions: For me, these results are remarkable. In fact, it's interesting to see that other theorems, like the Arrow's Impossibility Theorem, which leads to very negative conclusions about democracy, are considered brilliant and worth a Nobel Prize. It always seems that things are not considered surprising and trivial if they go in one particular direction. Despite of this, Landemore, after stating the Theorem <ref>, says <cit.> To the extent that cognitive diversity is a key ingredient of collective intelligence, and specifically one that matters more than average individual ability, the more inclusive the deliberation process is, the smarter the solutions resulting from it should be, overall. As we saw in Section <ref>, this is false. The theorem presupposes that every problem solver in every state improves the state to a new state closer to the global optimum. Furthermore, as shown by the more realistic Theorem <ref>, if we create two groups from the group in the original theorem – one in which we make the minimal reduction in ability while ensuring full diversity, and another in which we significantly reduce diversity while ensuring ability – the less diverse group would systematically outperform the fully diverse group. In other words, ability trumps diversity. There are also other severe mathematical errors with the "Diversity Prediction Theorem". 
Landemore says <cit.>: In other words, when it comes to predicting outcomes, cognitive differences among voters matter just as much as individual ability. Increasing prediction diversity by one unit results in the same reduction in collective error as does increasing average ability by one unit. This is mathematically incorrect and entirely wrong: the effect is undetermined, it's not of the same magnitude, and it's not necessarily a reduction, as explained in Section <ref>, see Error <ref>. It is a mathematical error to assume that the terms in Theorem <ref> can be changed while the other remain fixed. Furthermore, as we observed earlier in Proposition <ref>, the diversity and ability terms do not play the same role. While increasing ability eventually reduces the prediction error, increasing diversity does not have the same effect and, furthermore, it eventually increases the maximum prediction error. Therefore, Landemore's argument has a significant gap; without controlling for ability, increasing diversity does not guarantee a reduction in the prediction error. §.§ The misuse of hypotheses of the "Diversity Trumps Ability Theorem" To justify the use of the theorem, she says <cit.>, Importantly, the four conditions for this theorem to apply are not utterly demanding. The first one simply requires that the problem be difficult enough, since we do not need a group to solve easy problems. The second condition requires that all problem solvers are relatively smart (or not too dumb). In other words, the members of the group must have local optima that are not too low; otherwise the group would get stuck far from the global optimum. The third condition simply assumes a diversity of local optima such that the intersection of the problem solvers’ local optima contains only the global optimum. In other words, the participants think very differently, even though the best solution must be obvious to all of them when they are made to think of it. Finally, the fourth condition requires that the initial population from which the problem solvers are picked must be large and the collection of problem solvers working together must contain more than a handful of problem solvers. This assumption ensures that the randomly picked collection of problem solvers in the larger pool is diverse—­and in particular, more cognitively diverse than a collection of the best of the larger pool, which would not necessarily be the case for too small a pool relative to the size of the subset of randomly chosen problem solvers or for too small a subset of problem solvers in absolute terms. This is, once again, incorrect. Those are not the only conditions required for the theorem to apply. Among others, she doesn't mention the hypotheses from Sections <ref>, <ref>, <ref>, and <ref>. If these conditions do not hold, the theorem doesn't hold (see the counterexamples). And, as we've seen, these conditions can be rather restrictive (such as assuming that a billion Einsteins will not outperform a single Einstein). Therefore, her statement of the theorem is incorrect. Landemore is following Page's book, which also neglects to mention these conditions. Moreover, his Condition 2 (Landemore's second condition; see also <cit.>) is ill-stated. The 'Calculus Condition' requires that ϕ(X) is countable (which is trivial if X is finite), but he interprets it as 'all problem solvers are smart.' This condition doesn't relate to being smart, contrary to Page's and, consequently, Landemore's interpretation. 
For instance, consider the function ϕ:X→ X defined as ϕ(x)=x_m, where V(x_m) is a global minimum of V (e.g., 0). Then, ϕ(X)={x_m}, finite, which hardly represents being 'smart.' In fact, it's the worst agent conceivable since it assigns the solution furthest from the global optimum to every state. Nevertheless, Page (and subsequently Landemore) refers to this as being 'smart.' It's also noteworthy that in his book, Page's conditions are subject to Thompson's critique (Section <ref>), although Page denied it. As Landemore, citing Page, puts it in <cit.>, Condition 3: The Diversity Condition. Any solution other than the global optimum is not a local optimum for some nonzero percentage of problem solvers. However, in his response to Thompson, Page does not refer to his stated Condition 3, but to what he incorrectly thinks Condition 3 requires (which is the same mistake present in <cit.> and pointed out by Thompson). Landemore defends the hypotheses of Theorem <ref>, see the previous quote or the section "The meaning and empirical plausibility of the assumptions behind the Diversity Trumps Ability Theorem" in <cit.>. However, other sets of hypotheses are plausible, or even more so, which is problematic. More specifically, in a Moorean style, we are going to construct a set of incompatible propositions, requiring us to reject (at least) the least plausible one. We are going to use Theorem <ref> for this task. The propositions are as follows: * Hong-Page's framework can be used for a deductive argument for epistemic collective-decision systems in the sense that it can serve as a benchmark or be useful in deriving implications to obtain epistemic benefits (as in Section <ref>). * The assumptions of Theorem <ref>. * The assumptions of Theorem <ref>. Note that (1) and (2) imply that "diversity trumps ability," but (1) and (3) imply "ability trumps diversity", so at least one of these propositions must be rejected, but: * Rejecting the first would invalidate Landemore's argument, as the theorem would then have no relevance to collective-decision systems. * Rejecting the second would undermine Landemore's proposition that "cognitive diversity is a key ingredient of collective intelligence, and specifically one that matters more than average individual ability". * Rejecting the third, without rejecting (2), amounts to "biting the bullet". The assumptions of Theorem <ref> are relatively more plausible than those in (2). For instance, there is no need to assume the existence of clones, that 100 Einsteins working together to solve a problem are the same as one, or that there will be no disagreement. Furthermore, unlike Hong and Page's Theorem, it provides a fair comparison between ability and diversity. See Section <ref> and references therein for more details. Note that I am not claiming that Theorem <ref> has any social content (I personally reject several of the propositions above), but it suffices to form the Moorean set of incompatible propositions. It also serves to show how Landemore commits an equivocation fallacy[This can aso be seen as a motte-and-bailey fallacy, which is also present in Page's presentation of the theorems.] or misuses natural language to represent mathematical statements. For instance, Assumption <ref> is read as agents are "relatively smart (or not too dumb)" and then she discusses that voters satisfy this, <cit.>. 
But note that the hypotheses of Section <ref> only reduce by an almost insignificant amount the ability, so they can be considered "relatively smart (or not too dumb)". But the thesis changes radically. Thus, Landemore's justification for the use of the theorem is severely flawed. Finally, even assuming that the hypotheses are plausible, there exists a significant contradiction in Landemore's work. Recall that the hypothesis of Theorem <ref> not only guarantees that the "random" group is better than the best agent, but also ensures that they always reach the correct conclusion without disagreement or dissent, as shown in Proposition <ref>, see also Remark <ref>. These are the same hypotheses that, according to Landemore, make cognitive diversity more crucial than individual ability. This perfect deliberation is what enables the "diverse" group to surpass the best agent; the former is perfect by assumption, while the latter is not. Nonetheless, Landemore states in <cit.>: Deliberation is far from being a perfect or complete decision-mechanism, in part because it is time-consuming and rarely produces unanimity. And in <cit.>, she further notes: I thus do not need to assume away, as Quirk seems to accuse me of doing, the possibility of disagreement. Therefore, if she rejects an implication of the theorem, she must also reject at least one of its hypotheses. However, as we have seen, she defends the hypotheses of the theorem she applies. This creates a contradiction. §.§ The vacuousness of the Numbers Trump Ability "Theorem" Landemore's key innovation is the following, as stated in <cit.>: The second step of my argument—my addendum to Page and Hong— proposes that the “cheapest” (i.e., easiest and most economical) way to achieve cognitive diversity in the absence of knowledge about the nature of complex and ever-changing political problems is to include everyone in the group. [...] This “Numbers Trump Ability Theorem” thus supports a strong epistemic case for democracy, in which my key innovation is to support inclusiveness for its instrumental, specifically epistemic properties: Under the right conditions, including everyone in the decision-making process simply makes the group more likely to get the right (or, at least better) answers. The argument is straightforward: if, for epistemic reasons, diversity is what matters, then including everyone is the simplest way to increase diversity. Aside from practical issues, which Landemore somewhat considers, the problem with this reasoning (which is not actually a theorem) is that the premise is false. We have argued that in both Hong-Page theorems, ability plays a crucial role, as seen in Section <ref> and <ref>. Thus, increasing the number of people can have detrimental effects. Therefore, the "theorem" is false. Nevertheless, it can be "corrected" as: Under the right conditions and given the uncertainty in the ability of the agents, including everyone with ability above a certain threshold in the decision-making process makes the group more likely to arrive at the correct (or, at least, better) answers than merely including people without controlling for ability. If we acknowledge the 'absence of knowledge about the nature of complex and ever-changing political problems', it would be prudent to select problem solvers who are competent enough to handle these uncertain problems. Hence, we must take ability into account. In other words, once corrected, this theorem lends support to a version of epistocracy. 
As I've previously stated, I don't find the Hong-Page theorems particularly enlightening, so I do not advocate for this theorem. Nonetheless, if we follow Landemore's line of reasoning, this interpretation would be more accurate. In general, all the theorems that Landemore uses for her political defense of democracy (<cit.>) presuppose certain levels of ability and knowledge. This is the case of the Hong-Page theorems, as seen in Section <ref> and <ref>, as well as the Condorcet Jury Theorem (CJT) and the Miracle of Aggregation, as shown in <cit.>. These latter two theorems, which belong to the same general theorem (non-homogeneous CJT), are far more likely if we include epistemic weights that are stochastically correlated (with a measurement error) with epistemic rationality. Also, if ability is not controlled, these theorems can operate in the opposite direction, ensuring that we almost surely choose the wrong option. Thus, from an epistemic and instrumental perspective, these theorems strongly suggest including ability thresholds or, the more feasible and semiotically problem-free case of epistemic weights with a minimum of 1 (no one excluded) and stochastically correlated (the inevitable measurement error is taken into account, perfect correlation is not assumed) with epistemic rationality, for a starting practical proposal, see <cit.> or, for a lengthier discussion[In Spanish, but see references therein (in English).], <cit.>. This could serve as a preliminary proposal that needs to be tested and experimented with. While it might still be far from perfect, it should be evaluated in comparison to the existing alternatives. Nevertheless, Landemore staunchly opposes epistocracies. In her chapter "Against Epistocracies", <cit.>, she says: My first question to Brennan is this: What would such exclusion achieve? Recall that in my model deliberation does most of the epistemic work. Most filtering of bad input or bad reasoning occurs at that deliberative stage. So there is no reason not to include everyone as one more, howsoever uninformed, voice will not pollute the outcome but will at most delay the conclusion of the deliberation. This is incorrect. First, if no selection is done, one cannot ensure the conditions of the Hong-Page theorem, so one cannot expect the result (that deliberation works) to hold. Second, as we have seen in Theorem <ref>, introducing these kinds of agents can pollute the outcome, getting stuck at a solution far from the global maximum. This is the same error that pollutes all of Landemore's analysis, so we insist here; all these theorems assume a certain amount of ability, but Landemore just presupposes[For instnace, Assuming that, on average, the citizens from among whom we select representatives meet a minimal thresh- old of individual competence, random selection is a more promising, authentically democratic way of selecting representatives that maximizes cognitive diversity in the face of political uncertainty. See Section <ref> for more.] this without questioning seriously enough and focusing mainly on diversity, which has a "secondary" effect (Theorem <ref> and Section <ref>). For instance, she continually emphasizes the uncertainty: As time goes by and circumstances change, however, it becomes very likely that his epistocracy will run into issues where it will miss the very voices and votes it purposely excluded. Even if the probability is low, the expected cost might still be huge. Why take the risk? 
There may be a short window of time in which a Brennanist epistocracy would work, perhaps even better than a democracy. But probabilistically, this superiority is bound to vanish over time. The question is when. [...] Most importantly, there is no reason to exclude any voice in a model that assumes democratic deliberation itself can weed out the bad input. However, this same uncertainty, when translated into uncertain abilities of the problem solvers, could lead to the inclusion of some problem solvers who, rather than aiding, actually obstruct us from reaching the optimal solution. Thus, her "probabilistic" claims like "But probabilistically, this superiority is bound to vanish over time" and that the expected cost is substantial are unfounded, and she provides no valid proof for such strong propositions. It's important to note that there may be merit in including all voices in some capacity. The purpose of this part is not to criticize that, but to critically analyze her use of mathematical results to draw certain conclusions. Therefore, it is not reasonable—and borders on begging the question—to assume that democratic deliberation itself can weed out bad input. § CONCLUSION Our rigorous dissection of the Hong-Page Theorems has uncovered significant issues. The misrepresentation of ability, the negligence of certain assumptions, and the fundamental misuse of mathematical principles have led to their flawed application in sociopolitical constructs. Hong and Page's application of mathematics in their theorem obscures its inherent triviality. By employing mathematical complexity, they have managed to present trivial facts as profound insights, thus misrepresenting the actual implications of their theorem. It is vital that we apply mathematics with extreme caution and rigor, especially when it serves as the foundation for decisions that can have substantial impacts on our social structures and institutions. As such, despite its thousands of citations, <cit.> should not be regarded as a serious contribution to the field of collective decision problem solving. Similarly, with our additional analysis of the "Diversity Prediction Theorem" and Page's misinterpretation, Section <ref>, basic claims of Page's book The Difference are affected, from the preface: Perhaps because The Difference takes time to digest, eventually, accurate readings won out. Reviewers recognized that The Difference explores the pragmatic, bottom-line contributions of diversity. It does so using models and logic, not metaphor. The book’s claims that “collective ability equals individual ability plus diversity” and that “diversity trumps ability” are mathematical truths, not feel-good mantras. Hélène Landemore's application of mathematical reasoning in her political proposition is wanting in both validity and soundness. Specifically, her 'Numbers Trump Ability' theorem, derived from her interpretation of the Hong-Page Theorems, demonstrates significant flaws, as do many other conclusions based on these results, including her use of the 'Diversity Prediction Theorem'. For instance, as we have shown, her epistemic argument is both unsound and invalid. Consequently, the central thesis of her book <cit.> is seriously compromised. As Landemore herself states in <cit.>: Let me briefly rehearse what I see as the main argument of the book. At its heart is a simple model of what, under certain conditions that I deem plausible enough, can be expected of an inclusive political decision process in a comparison with less inclusive ones. 
[...] In my eyes, the main value of my book is to create a simplified, relatively rigorous framework for the meaningful comparison of the properties of basic political “regimes.”

A similar problem arises in other foundational claims related to her political proposal of 'Open Democracy', found in works such as <cit.>, <cit.>, <cit.>, and <cit.>. This critique should not be taken as a dismissal of the importance of diversity in decision-making, but rather as a call to address the misuse of mathematics in these contexts. It urges us to consider the rigorous and nuanced approach required when applying mathematical theories to sociopolitical constructs. As such, this paper makes a contribution to the ongoing discourse on collective intelligence, fostering a deeper understanding of the mathematical theorems used to achieve a desired conclusion.
http://arxiv.org/abs/2307.04450v1
20230710100015
Toward a generative modeling analysis of CLAS exclusive $2π$ photoproduction
[ "T. Alghamdi", "Y. Alanazi", "M. Battaglieri", "L. Bibrzycki", "A. V. Golda", "A. N. Hiller Blin", "E. L. Isupov", "Y. Li", "L. Marsicano", "W. Melnitchouk", "V. I. Mokeev", "G. Montana", "A. Pilloni", "N. Sato", "A. P. Szczepaniak", "T. Vittorini" ]
hep-ph
[ "hep-ph", "hep-ex", "nucl-th" ]
JLAB-THY-23-3881 [email protected] AI-supported algorithms, particularly generative models, have been successfully used in a variety of different contexts. In this work, we demonstrate for the first time that generative adversarial networks (GANs) can be used in high-energy experimental physics to unfold detector effects from multi-particle final states, while preserving correlations between kinematic variables in multidimensional phase space. We perform a full closure test on two-pion photoproduction pseudodata generated with a realistic model in the kinematics of the Jefferson Lab CLAS experiment. The overlap of different reaction mechanisms leading to the same final state associated with the CLAS detector's nontrivial effects represents an ideal test case for AI-supported analysis. Uncertainty quantification performed via bootstrap provides an estimate of the systematic uncertainty associated with the procedure. The test demonstrates that GANs can reproduce highly correlated multidifferential cross sections even in the presence of detector-induced distortions in the training datasets, and provides a solid basis for applying the framework to real experimental data. Toward a generative modeling analysis of CLAS exclusive 2π photoproduction T. Vittorini0009-0002-4390-5670 August 12, 2023 ========================================================================== § INTRODUCTION Photoproduction of two pions, with photon energies in the few-GeV range, is an important process in hadron spectroscopy. It has been widely used to address several fundamental quests, such as the `missing baryons' problem, and to demonstrate that multiparticle final states are necessary to determine the spectrum. While copious data are available for single-pion photoproduction, and the correspondent phenomenology is well understood, the addition of a third particle in the final state makes the description of this reaction considerably more complicated. At fixed photon energy, the unpolarized single-pion photoproduction cross section is described by a single independent variable, while for two pions three additional variables are needed. At beam energies of a few GeV, the highest statistics data sample is available from the Jefferson Lab Hall B CLAS experiment  <cit.>. Even in this case, some bins in the multidimensional space are unpopulated or subject to large statistical fluctuations. This results in large uncertainties in extracting the underlying reaction mechanisms. The problem has been addressed by studying one or two variables at a time, while integrating over the others. During integration, correlations between variables, which in turn contain relevant physics information, are partially lost, making the results strongly model dependent. In this context, generative models based on machine learning (ML), which learn the original data distribution and create new so-called synthetic data that mimic the original distribution, can provide new opportunities for extracting the physics information preserving correlations. Furthermore, these models can provide another way to extract the `true' values from experimental data removing detector effects, with a procedure known as unfolding. Recently, an event-level unfolding analysis using generative adversarial networks (GANs) in inclusive electroproduction was performed <cit.>. The analysis was able to reconstruct accurately single-variable cross sections. 
Here, we extend our analysis framework to a multiparticle final state, demonstrating for the first time that GANs can be used to reproduce scattering reactions in a higher dimensional phase space. Specifically, we optimize our ML analysis framework to the case of two-pion photoproduction at CLAS kinematics. This study serves as an excellent testing ground for evaluating the effectiveness of the ML analysis framework in a highly nontrivial case. The presence of baryon and meson resonances with diverse production mechanisms, which overlap within a limited phase space, generate intricate structures and correlations. Moreover, the CLAS detector's highly non-uniform response introduces additional complexities and distortions, adding another layer of complication to the analysis. To test and validate the framework, we generate Monte Carlo (MC) pseudodata with a realistic model of two-pion photoproduction. We produce a synthetic copy with an “unfolding” GAN trained on pseudodata that incorporate detector effects through GEANT simulations <cit.>. This would be equivalent to train the GAN with experimental data. The detector effects are unfolded using a “detector-simulation” GAN, independently trained on a second MC pseudodata sample generated according to phase space and passed through the GEANT model of the detector. We test the quality of the procedure by a quantitative comparison between the generated MC data and its synthetic copy. This closure test, based on MC pseudodata, is a necessary step before applying our analysis framework to experimental data. The paper is organized as follows: in Sec. <ref> we review the importance of two-pion photoproduction in hadron spectroscopy and provide a detailed description of the kinematics. In Sec. <ref> we describe the MC framework used to generate pseudodata and incorporate the CLAS detector response. In Sec. <ref> we present the ML framework used for reproducing the detector effects and unfold the `true' distributions from the reconstructed pseudodata. The GAN results are reported in Sec. <ref>, where we compare the generated events with the synthetic copy. Finally, in Sec. <ref> we summarize the procedure and outline work in progress to extend the current framework to the analysis of real CLAS data from Jefferson Lab. § TWO-PION PHOTOPRODUCTION §.§ The physics case The ππ N final state is one of the largest contributors to the total photoproduction cross section off protons at center-of-mass (CM) energies W≲ 2.5 GeV. Studies of this final state have considerably extended the available information on the spectrum of the excited states of the nucleon (N^*) and their photoexcitation amplitudes. The quantum numbers of these resonances can be assessed by studying the correlations between the invariant mass and the angular dependencies of their decay products. Theoretical estimates based on phenomenological approaches <cit.>, continuum Schwinger methods <cit.> as well as from first principles within lattice QCD calculations <cit.>, have predicted more states than apparently observed in experiments (for reviews, see Refs. <cit.>), which is referred to as the `missing baryons' problem. A strategy to improve the sensitivity to the most elusive states is to impose consistency constraints by performing combined analyses of several final states at once, with ππ N playing a pivotal role for the resonances heavier than 1.6 GeV. 
This allows one to disentangle process-dependent nonresonant contributions, and extract the resonance properties in a nearly model-independent manner <cit.>. Furthermore, combining photoproduction and electroproduction data has recently proven to be effective in identifying overlapping resonances with the same quantum numbers, as in the case of the N(1720) and N'(1720) states <cit.>. In the same reaction, by looking at the invariant mass distribution of the ππ pair, one can study meson resonances, such as the ρ or the f_2(1270). While the properties of these resonances are well known, a detailed understanding of their production mechanisms is still missing. At low W ≲ 2 GeV one can study how each N^* state contributes to the meson production process. At higher energies, above the N^* resonance region, the reaction is well described in terms of Regge theory <cit.>. The two energy regimes are smoothly connected, making it nontrivial to study the intermediate region rigorously. A formalism to do so has been proposed recently for the production of single π or η mesons <cit.>. The extension to two-pseudoscalar final states requires having the full multidimensional dependence under control <cit.>. In particular, a complete understanding of meson production mechanisms in the ππ N final state, where resonances are well known, is necessary before facing the more complicated ηπ N and η^'π N channels, where exotic hadrons are expected to appear <cit.>. §.§ γ p →π^+ π^-p kinematics Measurement of the three-body final state in two-pion photoproduction represents a significant challenge to experiment. Recently a large body of data on π^+π^-p photoproduction observables has become available from measurements by the CLAS Collaboration, with W ≤ 2.9 GeV <cit.>. For a given collision energy, the differential cross section for this process depends on five independent variables, which can be chosen to be the invariant masses of the two pions, M_π^+π^-, and the proton-π^- pair, M_pπ^-, and three angles in the CM frame. Two of the angles are the polar angle θ_π^+, with the z-axis along the photon three-momentum, and the angle α_[π^+ p][π^-p'] between the plane containing the initial target proton p and π^+ three-momenta and the plane containing the π^- and recoiling proton p' three-momenta. An equivalent choice would replace θ_π^+ with the invariant momentum transferred t_π^+, defined as the difference squared between the photon and π^+ four-momenta. The fifth variable ϕ is the azimuthal angle of π^- with respect to the plane containing the photon three-momentum and the polarization vector, and is relevant only in experiments with polarized beam or target. For unpolarized data, one can still define ϕ by pointing the polarization vector in an arbitrary direction, resulting in a ϕ-independent cross section. Other possible choices for variables are M_pπ^+ (invariant mass of the proton-π^+ pair), t_π^- (momentum transferred between photon and π^-), t (momentum transferred between target and recoil protons), or cosθ (cosine of the angle between target and recoil protons in the CM frame). Multidimensional analyses are becoming standard, albeit computationally difficult, in modern high statistics experiments <cit.>. However, some specific reactions can suffer from limited statistics. In particular, the direct extraction of π^+ π^- p photoproduction events at a given W value, on a 5D grid (or 4D, if integrated over the angle ϕ) with a bin size acceptable for physics analyses, is quite challenging. 
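For reference, the kinematic variables defined above are straightforward to compute from final-state four-momenta. The sketch below is a minimal NumPy illustration and not part of the CLAS analysis software; the array layout and the placeholder event sample are assumptions made only to show the bookkeeping.

```python
import numpy as np

# Four-momenta stored as arrays of shape (N, 4) = [E, px, py, pz] in GeV,
# metric (+, -, -, -).  In a real analysis these would come from the event
# generator or from reconstructed CLAS events; here they are placeholders.

def inv_mass2(*p4s):
    """Invariant mass squared of the summed four-momentum, per event."""
    tot = sum(p4s)
    return tot[:, 0]**2 - np.sum(tot[:, 1:]**2, axis=1)

def mandelstam_t(p_in, p_out):
    """t = (p_in - p_out)^2, e.g. the photon -> pi+ momentum transfer t_pi+."""
    d = p_in - p_out
    return d[:, 0]**2 - np.sum(d[:, 1:]**2, axis=1)

n_events = 1000
p_gamma, p_target, p_pip, p_pim, p_prot = (np.zeros((n_events, 4)) for _ in range(5))

M2_pipi = inv_mass2(p_pip, p_pim)       # rho(770) band near 0.6 GeV^2
M2_ppim = inv_mass2(p_prot, p_pim)
M2_ppip = inv_mass2(p_prot, p_pip)      # Delta(1232)++ band near 1.5 GeV^2
t_pip   = mandelstam_t(p_gamma, p_pip)
W       = np.sqrt(inv_mass2(p_gamma, p_target))
```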
Even the highest statistics π^+ π^- p photoproduction sample collected with CLAS <cit.> results in a limited number of counts in the 4D cells (typically <10 events per cell). In Ref. <cit.>, theoretical curves were fitted to the marginal 1D distributions, determined by integrating the acceptance- and efficiency-corrected 5D distribution over the remaining four variables. This procedure largely washes out the correlations present in the original data, leading to a significant loss of relevant information contained in the joint distribution. In this paper we aim to overcome this problem with ML techniques. To illustrate this, in Fig. <ref> we show two examples of 2D distributions and their 1D projections, as measured in CLAS experiment without efficiency corrections <cit.>. From these distributions one immediately sees the presence of intermediate resonances that appear as enhancements in the invariant mass of the system in which they decay. For example, the band at M^2_p π^+≃ 1.5 GeV^2 corresponds to the Δ(1232) baryon resonance, which appears as an intermediate unstable state in the reaction γ p →Δ^++π^- → p π^+ π^-. The band centered at M^2_π^+ π^-≃ 0.6 GeV^2 corresponds instead to the ρ(770) meson resonance, in the reaction γ p → p ρ^0 → p π^+π^-. The two resonances are clearly visible as bumps in the respective 1D projections. Looking at 1D projections only, one can easily miss the presence of a resonance if the relevant invariant mass distribution is not explicitly considered. This is an example of loss of information that is contained in correlations. Moreover, because of quantum interference, the production of ρ^0 and Δ^++ are not independent processes, and it is impossible to associate one event exclusively with either process. This interference appears in the correlations between the invariant masses, and can be partially lost in the 1D projections. §.§ Two-pion photoproduction with CLAS The CLAS spectrometer in Hall B at Jefferson Lab was based on a ∼ 1.25 T toroidal magnet which bends charged particles produced in the hadronic interaction along the polar angles θ_lab (the z-axis along the photon beam), while the preserving azimuthal angles ϕ_lab. The polarity of the field determined if positive/negative charges were bent towards/away from the beam line into the acceptance of the detector. A system of three layers of multi-wires drift chambers <cit.> provided momentum information with the resolution, σ_p/p, ranging from 0.5 to 1.0%, depending on the kinematics. Charged hadron identification was obtained by time-of-flight scintillators <cit.>. Photoproduction experiments were conducted with a bremsstrahlung photon beam produced by the CEBAF continuous electron beam impinging on 8 × 10^-5 radiation lengths thickness gold foil. A bremsstrahlung tagging system <cit.> with a photon energy resolution of 0.1% was used to measure the photon energy in each recorded event. The target cell was a 4 cm in diameter and 40 cm long Mylar cylinder, filled with liquid hydrogen at 20.4 K. The experimental conditions reported in this paper, and simulated in the framework described in Sec. <ref>, correspond to the experiment that ran in CLAS in 2004. During the experiment, the torus field was such that positive particles were bent away from the beam line. The detector geometrical acceptance for each positive particle in the relevant kinematic region was about 40% and somewhat less for negative particles (bent towards the beamline and out of the detector acceptance). 
The primary electron beam energy was 4.02 GeV, providing a tagged photon beam in the energy range from 0.8 to 3.8 GeV. For this analysis we focus on the highest energy region, 3.0–3.8 GeV, that was analyzed in Ref. <cit.>. The exclusive reaction γ p →π^+ π^-p was isolated by detecting the proton and the π^+ in the CLAS spectrometer, while the π^- was reconstructed from detected particle four-momenta using the missing-mass technique. In this way, the exclusivity of the reaction was ensured, keeping the contamination from the multipion background to a minimum level. Only events within a fiducial volume were retained in the analysis, in order to avoid the regions at the edge of the detector acceptance. Cuts were defined on the minimum proton momentum and the hadron minimum and maximum polar angle. After all the cuts, approximately 40 M events were identified as produced in exclusive two-pion photoproduction, making the dataset the largest statistics sample of this reaction in the above photon energy range. Details of the analysis can be found in Ref. <cit.>. § MC SIMULATION FRAMEWORKS In this section we describe the simulation frameworks used to perform the closure test. Pseudodata corresponding to two-pion photoproduction in the kinematics of the experiment were generated using two different MC event generators that produce the four-momenta of the final state particles. A realistic GEANT simulation was used to reproduce the finite resolution and limited acceptance of the CLAS detector. Detector effects were assessed with a first MC generator based on a pure phase-space distribution. To perform the closure test, we deployed a second MC generator based on a realistic physics model. The use of two different MC generators minimizes the model dependence in the extraction of the original information and mimics a real situation, where the detector effects are estimated with simulations that are similar but not identical to the experimental distributions. §.§ Two-pion event generators The two MC generators simulate the interaction of an incoming unpolarized photon beam with a bremsstrahlung spectrum, in the energy range 3.0–3.8 GeV, with a target proton at rest. With the choice of variables described in Sec. <ref>, the yields are proportional to the differential cross section, and thus to the squared of the production amplitude A summed over polarizations, ^5 σ/M^2_pπ^-M^2_π^+π^-t_π^+α_[π^+ p][π^-p']ϕ ∝[(W^2 - (M_pπ^- + m_π)^2)(W^2 - (M_pπ^- - m_π)^2)]^-1/2 ×∑_pol|A (M^2_pπ^+,M^2_π^+π^-,cosθ_π^-,α_[π^+ p][π^-p'])|^2 . The first MC generator, referred to as phase space or PS-MC, distributes final state events according to the π^+ π^-p phase space. This corresponds to assuming that the production amplitude is a constant. This is clearly unrealistic since, as discussed above, two-pion photoproduction has a much more complicated structure. However, it has the advantage of being well-defined, agnostic to physics models, and distributes events uniformly across the full reaction kinematics. The 1D-projected PS-MC event distributions are shown in Fig. <ref>, while the 2D distributions are illustrated in Fig. <ref>. The second MC event generator, which we refer to as realistic or RE-MC, considers the amplitude squared as an incoherent sum of the three dominant intermediate resonances observed, γ p →(p ρ^0, Δ^++π^- , Δ^0π^+ )→π^+ π^-p, added to a ∼ 10% constant that mimics the nonresonant two-pion photoproduction contribution. 
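As an illustration of this incoherent-sum structure (and only as an illustration: the actual RE-MC weighting, described next, uses measured cross sections and the full decay spin structure), a toy per-event weight could look as follows. The resonance masses and widths are standard values, while the line shape, the fractions, and the neglect of angular distributions are simplifying assumptions.

```python
import numpy as np

def breit_wigner(m2, m0, width):
    """Simple relativistic Breit-Wigner line shape (arbitrary normalization)."""
    return (m0 * width)**2 / ((m2 - m0**2)**2 + (m0 * width)**2)

def toy_event_weight(M2_pipi, M2_ppip, M2_ppim,
                     w_rho=0.5, w_dpp=0.3, w_d0=0.1, w_flat=0.1):
    """Toy incoherent sum: rho0 + Delta++ + Delta0 + flat nonresonant term.
    Decay angular distributions and spin structure are deliberately ignored,
    and the fractions w_* are illustrative, not the fitted CLAS values."""
    rho = breit_wigner(M2_pipi, 0.775, 0.149)   # rho(770)0 -> pi+ pi-
    dpp = breit_wigner(M2_ppip, 1.232, 0.117)   # Delta(1232)++ -> p pi+
    d0  = breit_wigner(M2_ppim, 1.232, 0.117)   # Delta(1232)0  -> p pi-
    return w_rho * rho + w_dpp * dpp + w_d0 * d0 + w_flat

# Phase-space events can then be kept with probability proportional to this
# weight (rejection sampling) to obtain a resonance-like pseudodata sample.
```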
Each process has been weighted with the corresponding contribution to the total cross section as reported in Ref. <cit.>. The angular distributions relative to resonance production are parametrized from measured differential cross sections reported in the same database. The decays ρ→ππ and Δ→ p π are described using the correct spin structure with the decay matrix elements detailed in Ref. <cit.>. The resulting 1D and 2D projections for events generated by RE-MC are shown in Figs. <ref> and <ref>, respectively. We note that this model neglects the interference terms between the intermediate resonances. Despite this, the resulting distribution provides a reasonable description of the experimental data, showing resonance structures in the invariant masses and the correct angular behavior of particles in the final states. §.§ CLAS detector simulation The CLAS detector response has been simulated using the standard GEANT Monte Carlo simulation package, GSIM, used by the CLAS Collaboration <cit.>. It consists of a central steering and control package that calls a number of independent detector geometry and response packages. A post-processing code (GSIM-Post-Processor or GPP) has been used to fine tune the GSIM output to match the tails of the experimental resolution and other effects, such as the detector's dead channels, not described by the idealized GEANT-based simulation. The GSIM output has been fed to the same reconstruction code, RECSIS, used to process experimental data. We will refer to REC or detector-level events to identify the set of pseudodata as processed by the detector simulation, while GEN or vertex-level will identify the `true' events as generated by the MC code. As reported in Sec. <ref>, the CLAS detector has a nonuniform acceptance, reduced in the azimuthal angle ϕ_lab (around the beam) by the presence of the six coils of the toroidal magnet, and in the polar angle θ_lab (with respect to the beam direction) by the limited area covered by the drift chambers, calorimeter and time-of-flight systems. A further limitation concerns the minimum accepted momenta of charged hadrons, due to the energy loss in materials crossed along the track and to the effect of the toroidal magnetic field that bends low-momentum particles out of the detector acceptance. The limited CLAS acceptance results in a reduced yield in REC with respect to GEN events, since not all generated events are reconstructed. The effect of the CLAS acceptance on the π^+ variables in the laboratory frame is shown in Fig. <ref>. As any detector, CLAS has finite resolution, which `smears' the measured kinematic variables resulting in a difference between REC and GEN, even when the event is accepted. The smearing affects the reconstructed three-momenta of any detected particle within the CLAS acceptance with a distortion depending on the three-momentum of the particle. Figure <ref> shows the resolution on the detected (REC) π^+ momentum and polar angle as a function of the `true' (GEN) momentum, along with the projections in 1D corresponding to the CLAS relative momentum and angular resolution. Fitting the two curves to a double Gaussian line, we obtained δ p / p ∼ 0.8% and δθ / θ∼ 0.5%. A similar smearing affects the kinematic variables of the detected proton. The resolution of the CLAS detector is sufficiently high so as to allow the use of the missing mass technique to identify the exclusive two-pion reaction against the multipion background. 
The technique uses knowledge of the initial state and of the detected particles to calculate the invariant mass of the undetected system to fulfill energy-momentum conservation, within detector resolution. If all particles are detected, the missing mass is zero. If a single particle is undetected, its mass appears as a peak in the missing mass spectrum. If two or more particles are lost, the missing mass of the system is unconstrained and does not peak, but rather distributes smoothly. The technique is only applicable if the experimental resolution is sufficient to disentangle the missing mass peak from this multiparticle background. Clearly, the more particles are detected, the lower is the resolution for the missing mass due the error propagation, limiting the validity of the technique to reactions with a small number of particles in the final state. When the missing particle has been identified, its four-momentum is determined by energy and momentum conservation, and the final state can be fully reconstructed. In two-pion photoproduction, the requirement of at most a single undetected particle corresponds to the following topologies (missing particle in parentheses): pπ^+(π^-), pπ^-(π^+), π^+π^-(p) and π^+π^-p (all three detected). Considering the CLAS acceptance, the yield of different topologies is quite different, with a ratio of (100 :37:30:35) for the respective topologies. Since the pπ^+(π^-) is by far the dominant contribution to REC data, we focus on this topology, although similar conclusions also hold for the others. Each topology is in one-to-one correspondence with different areas of the allowed phase space, and a combination of different topologies would therefore extend the kinematic coverage of the measurement, mitigating the effect of the limited detector acceptance. Figure <ref> shows the missing mass distribution of the pπ^+(π^-) topology. This exclusive final state is identified by selecting events with missing mass in the peak. Since these simulations only contain the two-pion final state, no multiparticle background populates the plot. The equivalent distribution for data shows a significant multipion background <cit.> that populates the positive side of the missing mass spectrum, and is rejected during the analysis to assure the reaction exclusivity. § GAN-BASED UNFOLDING METHODOLOGY GANs, a type of neural networks that have gained significant attention in recent years, are powerful generative models highly effective in generating high-quality, realistic data in various fields <cit.>. The architecture of a typical GAN involves a generator network that learns to produce data and a discriminator network that learns to differentiate between the generated and reference data. The two networks are trained alternately in a competitive setting, where the generator tries to produce more realistic data to fool the discriminator, and the discriminator tries to correctly identify the generated data. This iterative process leads to the generation of data that are progressively more realistic, with the ultimate goal of producing synthetic data that are indistinguishable from the reference data. GANs have been widely applied in many domains, such as image synthesis <cit.>, text generation <cit.>, music composition <cit.>, and videos <cit.>, and have demonstrated impressive results. 
In image synthesis, GANs have been used to generate highly realistic images visually indistinguishable from real images, which has numerous practical applications in fields such as gaming, film, and art. Successfully training GANs can be notoriously challenging, however. Numerous GAN models experience significant issues, such as mode collapse, non-convergence, model parameter oscillation, destabilization, vanishing gradients, and overfitting, resulting in an unbalanced training of the generator and discriminator <cit.>. In contrast to typical GAN applications, the success of a GAN-based event generator in nuclear and particle physics depends on its ability to accurately reproduce correlations among the momenta of the particles, which becomes increasingly challenging beyond two dimensions. Moreover, the multidimensional momentum distributions of events associated with nuclear and high-energy physics reactions, such as the two-pion photoproduction process considered in this work, exhibit highly complex patterns and range over orders of magnitude across the phase space. The task of developing an appropriate GAN architecture that is able to simultaneously reproduce all the correlations among particle momenta, and accurately reproduce multidimensional histograms, is therefore rather difficult. Machine learning event generators have gained prominence as efficient fast simulation tools in various scientific fields, including high-energy and nuclear physics <cit.>. Unlike traditional simulation methods that rely on a theoretical framework for the underlying reaction, machine learning event generators learn from large datasets and use this knowledge to produce new events with high fidelity. GANs have emerged as powerful tools in the field of fast simulation, where they learn to generate events that closely resemble reference data, capturing the underlying physics processes and their distributions <cit.>. Furthermore, GANs have been employed to address the challenge of simulating detector effects in fast simulation <cit.>. This application of GANs helps bridge the gap between simulated and reference data, enabling more realistic and precise simulations for experimental analyses. A comprehensive survey of existing ML-based event generators can be found in Ref. <cit.>. In this study, we employ the architectural framework of the Least Squares GAN, which involves substituting the cross entropy loss function in the discriminator component of a conventional GAN with a least square term. For further details, see Ref. <cit.>. In the following, we describe the GAN architecture used to generate the synthetic data that reproduce the γ p →π^+ π^-p RE-MC pseudodata. As mentioned above, two different GANs were developed and combined. The detector simulation GAN (DS-GAN) was trained on PS-MC pseudodata to learn the detector effects, and was later inserted between the generator and the discriminator of the unfolding GAN (UNF-GAN) to unfold the GEN vertex-level information from REC pseudodata. §.§ Detector simulation GAN (DS-GAN) In order to capture the detector effects, we have developed an ML-based detector simulation using a conditional GAN <cit.>, as illustrated in Fig. <ref>. Our approach involves training a conditional GAN generator to simulate the detector's smearing effect so that it generates synthetic REC detector-level events from input noise and PS-MC GEN events. The GEN PS-MC accepted events are passed through the GEANT chain to obtain REC pseudodata. As proposed by Bellagente et al. 
<cit.>, both the synthetic REC and REC pseudodata are “concatenated” with original GEN events and fed to the GAN discriminator as input to facilitate convergence. After successful training, the DS-GAN generator serves as the ML detector surrogate that will be integrated into the UNF-GAN architecture. Summarizing the model architecture of the DS-GAN, the generator, conditioned on accepted events (GEN), takes in as input a 100-dimensional array of random values with a mean of 0 and a standard deviation of 1. The generator network consists of five hidden layers, each with 128 neurons, using a leaky rectified linear unit (ReLU) activation function. The final hidden layer is connected to a four-neuron output layer, which uses a linear function to represent the generated features. At the end of the training, the DS-GAN generator learns how to convert the GEN accepted events into REC events, effectively mimicking the smearing due to the detector as described by GEANT. The discriminator is made of a neural network with five hidden dense layers. The first three layers have 256 neurons each, while the fourth has 128 neurons and the fifth has 32 neurons. A leaky ReLU activation function is used for all the layers. To prevent overfitting during training, a 5% dropout rate is implemented for each hidden layer. The last hidden layer is fully connected to a single-neuron output, activated by a linear function, where “1” indicates a true event and “0” is a fake event. The DS-GAN was trained using about 1M two-pion event samples for 80K adversarial epochs, with an epoch defined as one pass through the training dataset. Both the generator and discriminator were trained using the Adam optimizer <cit.> with a learning rate of 10^-5 and exponential decay rates for the moment estimates (β1 = 0.5, and β2 = 0.9). §.§ Unfolding GAN (UNF-GAN) The training process for the UNF-GAN is illustrated in Fig. <ref>, which depicts the variation of a typical GAN model structure consisting of a conditional generator and a discriminator. The generator takes as input the photon energy generated by the RE-MC, along with a 100-dimensional white noise vector centered at zero with a unit standard deviation. This combination of inputs allows the generator, implemented as a deep neural network, to transform the noise and photon energy into a minimal set of event features/variables that effectively describe the two-pion photoproduction reaction. To strike a balance between execution time and convergence, the generator network is designed with 7 hidden dense layers. The number of neurons in each layer follows the sequence: 16, 32, 64, 128, 256, 512, and 1024, all of which are activated by the ReLU function. The last hidden layer is fully connected to a 4-neuron output layer, activated by a linear function. This output layer represents the independent variables M^2_π^+π^-, M^2_pπ^-, t_π^+, and α_[π^+ p][π^-p'] that are specifically chosen to describe the reaction. The synthetic GEN event features, generated by the conditional GAN generator, are then fed into the DS-GAN to incorporate the detector effects, and then compared to REC pseudodata obtained by passing the GEN RE-MC pseudodata through GEANT. The training process involved utilizing approximately 400k two-pion event samples for a duration of around 200k adversarial epochs per UNF-GAN model. Consistent configuration parameters for the Adam optimizer were maintained, utilizing the same settings as employed for the DS-GAN. 
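For concreteness, the DS-GAN building blocks quoted above map onto code roughly as follows. This is a minimal sketch in PyTorch; the paper does not specify the deep-learning framework, so the framework, the leaky-ReLU slope, the batch handling, and the training-loop details are assumptions, while the layer widths, dropout rate, optimizer settings, and the least-squares objective follow the description in the text.

```python
import torch
import torch.nn as nn

NOISE_DIM, N_FEATURES = 100, 4   # 4 kinematic variables per event

class DSGenerator(nn.Module):
    """Conditional generator: (noise, GEN event) -> synthetic REC event."""
    def __init__(self):
        super().__init__()
        layers, width = [], NOISE_DIM + N_FEATURES
        for _ in range(5):                      # 5 hidden layers, 128 neurons each
            layers += [nn.Linear(width, 128), nn.LeakyReLU(0.2)]
            width = 128
        layers += [nn.Linear(128, N_FEATURES)]  # linear 4-neuron output
        self.net = nn.Sequential(*layers)

    def forward(self, noise, gen_event):
        return self.net(torch.cat([noise, gen_event], dim=1))

class DSDiscriminator(nn.Module):
    """Scores (REC candidate, GEN event) pairs with a least-squares objective."""
    def __init__(self):
        super().__init__()
        sizes, layers = [2 * N_FEATURES, 256, 256, 256, 128, 32], []
        for n_in, n_out in zip(sizes[:-1], sizes[1:]):
            layers += [nn.Linear(n_in, n_out), nn.LeakyReLU(0.2), nn.Dropout(0.05)]
        layers += [nn.Linear(32, 1)]            # single-neuron linear output
        self.net = nn.Sequential(*layers)

    def forward(self, rec_event, gen_event):
        return self.net(torch.cat([rec_event, gen_event], dim=1))

G, D = DSGenerator(), DSDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-5, betas=(0.5, 0.9))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-5, betas=(0.5, 0.9))
mse = nn.MSELoss()                              # least-squares GAN loss

def lsgan_step(gen_batch, rec_batch):
    """One adversarial update on a batch of (GEN, REC) pseudodata pairs."""
    noise = torch.randn(gen_batch.size(0), NOISE_DIM)
    fake_rec = G(noise, gen_batch)
    # Discriminator: push real pairs toward 1, synthetic pairs toward 0.
    d_loss = mse(D(rec_batch, gen_batch), torch.ones(len(rec_batch), 1)) + \
             mse(D(fake_rec.detach(), gen_batch), torch.zeros(len(rec_batch), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: drive the discriminator output for synthetic pairs toward 1.
    g_loss = mse(D(G(noise, gen_batch), gen_batch), torch.ones(len(gen_batch), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The UNF-GAN generator would follow the same pattern, with the hidden widths 16 through 1024 listed above and the photon energy as the conditioning input, and with the trained DS-GAN inserted between generator output and discriminator input.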
During the training, the generator and the discriminator engage in an adversarial competition, with both updating their parameters throughout the process. Eventually, the generator is able to generate synthetic REC samples that are indistinguishable from the REC pseudodata samples. This means that the discriminator's ability to correctly classify whether a sample is genuine or synthetic approximates random chance. §.§ Uncertainty quantification As neural networks become increasingly employed in physics analysis, it becomes crucial to accurately assess the reliability of ML predictions. The statistics of the synthetic samples can be made arbitrarily high, so that there is no need to consider a statistical uncertainty. However, it is important to quantify the systematic uncertainty related to the training procedure, and for this a bootstrap resampling technique was employed. For the DS-GAN, the procedure involved training a total of 20 neural networks independently from the beginning. Each one was trained on a different random sample set drawn from the original dataset with replacement, resulting in datasets of the same size but with potentially different observations. For the UNF-GAN a similar procedure was adopted, with 20 different networks trained independently using the same bootstrap resampling technique. Moreover, each of the 20 UNF-GANs used a different DS-GAN of the 20 discussed above. In this way, the systematic uncertainties associated with the DS- and UNF-GANs are effectively combined. While it is possible that using a higher number of bootstraps could potentially lead to more precise uncertainty estimates, we found that training 20 GANs provided reasonably stable and consistent results. It is important to note that the specific number of bootstraps can vary depending on the characteristics of the problem, available data, and desired level of uncertainty quantification. In this particular case, 20 bootstraps were deemed sufficient for accurately capturing and quantifying the uncertainties associated with the observables. Furthermore, changing the network architecture was not essential because the convergence we achieved, along with the estimated error and uncertainty quantification, clearly indicate that this architecture is capable of accurately reproducing the data without introducing further systematic uncertainties. § RESULTS In this section we now discuss the DS-GAN and UNF-GAN performance, comparing synthetic to the REC and GEN pseudodata. We use the nomenclature REC_SYN and GEN_SYN to indicate synthetic data at the detector and vertex levels, respectively. To visualize the comparison, we build marginal 1D and 2D histograms for some kinematic variables. To show that correlations are correctly accounted for, we also study the distribution of one variable in some slices of the other variables. Synthetic data are generated with the bootstrap procedure detailed in Sec. <ref>, so that the standard deviation σ_SYN corresponds to the systematic uncertainty. In all our results, the average μ_SYN is shown as a solid line, together with an error band of width ± 1σ_SYN, while pseudodata are represented by dots with their statistical uncertainty σ_pseudodata. To quantify the level of agreement between the synthetic data and pseudodata, we plot the pull for each bin, defined as pull = μ_SYN-μ_pseudodata/√(σ^2_SYN+σ^2_pseudodata), where μ_pseudodata denotes the mean of the pseudodata. 
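In practice the band and the pull are evaluated bin by bin. The following sketch (NumPy, with hypothetical array names and a placeholder Poisson sample in place of real histograms) shows the bookkeeping, assuming each of the 20 independently trained GANs has already filled a histogram of some observable.

```python
import numpy as np

def bootstrap_band(synthetic_hists):
    """synthetic_hists: shape (n_bootstrap, n_bins), one histogram per replica."""
    mu_syn = synthetic_hists.mean(axis=0)            # central synthetic prediction
    sigma_syn = synthetic_hists.std(axis=0, ddof=1)  # systematic band from training
    return mu_syn, sigma_syn

def pull(mu_syn, sigma_syn, mu_data, sigma_data):
    """Per-bin pull between the synthetic band and the pseudodata histogram."""
    return (mu_syn - mu_data) / np.sqrt(sigma_syn**2 + sigma_data**2)

# Hypothetical usage: 20 bootstrap replicas, 50 bins of some observable.
rng = np.random.default_rng(0)
synthetic_hists = rng.poisson(1000.0, size=(20, 50)).astype(float)
mu_data = rng.poisson(1000.0, size=50).astype(float)   # stand-in pseudodata
sigma_data = np.sqrt(mu_data)                          # Poisson statistical error
mu_syn, sigma_syn = bootstrap_band(synthetic_hists)
pulls = pull(mu_syn, sigma_syn, mu_data, sigma_data)   # ~ N(0,1) if compatible
```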
§.§ DS-GAN The DS-GAN is trained on four independent variables: the invariant masses M_pπ^-^2 and M_π^+π^-^2, t_π^+, and the angle α_[π^+ p][π^-p']. The comparison between REC_SYN and pseudodata PS-MC REC distributions is shown Fig. <ref>. In Fig. <ref> the comparison is extended to other physics-relevant distributions not used in the training and derived from the four above-mentioned variables, namely M_pπ^+^2, t_π^-, t, and cosθ. The agreement, quantified by the pull distributions shown at the bottom of each plot, is remarkable, in both cases, with most of the points lying within 1σ. This indicates that the DS-GAN is indeed able to learn the CLAS detector effects. Bidimensional distributions from MC and synthetic data are shown in Fig. <ref>. The π^+ absolute momentum resolution as obtained from pseudodata (REC-GEN) is shown in Fig. <ref>, along with synthetic data (REC_SYN-GEN). The two distributions are in very good agreement, indicating that synthetic data incorporate the correct resolution of the detector. Similar results hold for other kinematic variables of all particles. These comparisons demonstrate the ability of the DS-GAN to learn and reproduce detector effects in a multidimensional space, even in the tails of the distributions. This confirms that generative models can indeed be used as an efficient and fast proxy for more computational expensive GEANT simulations <cit.>. §.§ UNF-GAN As described in Sec <ref>, the final step in the closure test is to use REC RE-MC pseudodata to train the UNF-GAN, extract the GEN_SYN distributions, and compare them with GEN pseudodata. Figure <ref> shows the comparison between GEN and GEN_SYN for the four training variables. We can see a very good agreement between pseudo- and synthetic data at the vertex level, despite the fact that the UNF-GAN was trained on detector-level pseudodata. This clearly demonstrates the success of the unfolding procedure. Moreover, the vast majority of pulls lie within ± 1σ, indicating that the uncertainty quantification is appropriate. The key point of this closure test is to demonstrate that synthetic data maintain the correlations of the original pseudodata. We checked that this is indeed the case: in Fig. <ref> we display an example of 2D distributions featuring strong correlations. We give a quantitative determination of the success of the procedure by calculating the pulls, shown in Fig. <ref>, which turn out to be normally distributed, as expected. The good agreement and preservation of correlations remains valid for derived kinematic variables that were not used for training. Examples are shown in Fig. <ref> for invariant and CM variables, and in Fig. <ref> for variables in the lab frame. It is worth noting that in the lab frame the GEN pseudodata exhibits sharp features due to detector acceptance. These features cannot be properly captured by the GANs, which is trained on invariant variables. Even so, this results in a ≲ 2σ local discrepancy in the 1D projections. If better agreement is needed, lab frame variables can be added to the training set. Finally, in Fig. <ref> we compare 1D distributions in a given bin of the other variables. The success of this test shows that correlations underlying the multidifferential cross section are correctly reproduced in the synthetic datasets. § CONCLUSIONS AND OUTLOOK One of the central results of this paper is the demonstration that a generative adversarial network can be used to reproduce a realistic multibody physics reaction. 
As a case study, we have used two-pion photoproduction in the kinematics of the Jefferson Lab CLAS experiment. This process represents an ideal test case, where several baryon and meson production mechanisms overlap, resulting in rich and complex observable distributions. The nonuniformity of the CLAS detector response further adds complication to the challenge. In order to validate the framework, we have performed a closure test to demonstrate that synthetic data correctly reproduce the multidifferential cross section preserving correlations between kinematic variables. Detector effects were also correctly unfolded by the procedure. We deployed two MC event generators, one distributed according to pure phase space, and the other incorporating a realistic physics model. Generated pseudodata were fed into a GEANT-based detector model to realistically take into account the detector response. Phase-space pseudodata were used to train a GAN-based proxy to learn the detector effects, and realistic pseudodata were then used to train the unfolding GAN and generate synthetic copies of MC events. The uncertainty quantification of the entire procedure was assessed by combining a bootstrap for the two NNs. Comparison between the true and GAN-generated samples demonstrated that, within the quoted systematic error, the NN is able to reproduce training and derived kinematic variables, as well as to unfold the detector effects in multiple dimensions. This work represents a first step towards a full AI-supported analysis of CLAS exclusive two-pion photoproduction data. It demonstrates that the same analysis framework, trained on CLAS data, can provide a synthetic copy of the experimental data, preserving correlations between kinematic variables and unfolding the detector effects. Physics interpretation in term of production mechanisms, separating different contributions and extracting resonance parameters from the unfolded data, will follow. An extension of this framework to include the different topologies and extrapolating in a controlled (albeit model-dependent) outside detector acceptance is also in progress. We thank J. Qiu for helpful discussions. This work was supported by the Jefferson Lab LDRD project No. LDRD19-13 and No. LDRD20-18, and in part by the U.S. Department of Energy contract DE-AC05-06OR23177, under which Jefferson Science Associates, LLC, manages and operates Jefferson Lab. ANHB is supported by the DFG through the Research Unit FOR 2926 (project number 409651613). TA was supported by a Ph.D. scholarship from Al-Baha University, Saudi Arabia. The work of NS was supported by the DOE, Office of Science, Office of Nuclear Physics in the Early Career Program. This work contributes to the aims of the U.S. Department of Energy ExoHad Topical Collaboration, contract DE-SC0023598. apsrev4-1
http://arxiv.org/abs/2307.04922v1
20230710220908
Programmable XY-type couplings through parallel spin-dependent forces on the same trapped ion motional modes
[ "Nikhil Kotibhaskar", "Chung-You Shih", "Sainath Motlakunta", "Anthony Vogliano", "Lewis Hahn", "Yu-Ting Chen", "Rajibul Islam" ]
quant-ph
[ "quant-ph", "cond-mat.other", "cond-mat.soft", "cond-mat.supr-con", "physics.atom-ph" ]
APS/123-QED Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, Waterloo, ON, N2L 3G1, Canada We propose and experimentally demonstrate an analog scheme for generating XY-type (J_ij^x+ J_ij^y) Hamiltonians on trapped ion spins with independent control over the J_ij^x and J_ij^y terms. The Ising-type interactions and are simultaneously generated by employing two spin-dependent forces operating in parallel on the same set of normal modes. We analytically calculate the region of validity of this scheme, and provide numerical and experimental validation with ions. This scheme inherits the programmability and scalability of the Ising-type interactions with trapped ions that have been explored in numerous quantum simulation experiments. Our approach extends the capabilities of existing trapped ion quantum simulators to access a large class of spin Hamiltonians relevant for exploring exotic quantum phases such as superfluidity and spin liquids. Programmable XY-type couplings through parallel spin-dependent forces on the same trapped ion motional modes Rajibul Islam August 12, 2023 ============================================================================================================= Trapped ions are ideal quantum simulators of interacting spin systems <cit.> due to their tunable long-range interactions <cit.>, long coherence times <cit.> and high fidelity quantum state preparation and measurement <cit.>. Interacting spin models illuminate a large variety of many-body phenomena such as quantum magnetism and phase transitions, spin liquids, superconductivity, and superfluidity <cit.>. Spins encoded in the internal degrees of freedom (such as hyperfine states) of individual trapped ions can interact via off-resonant excitation of their collective phonon modes through laser-driven spin-dependent dipole forces. By varying the laser parameters, long-range and tunable Ising-type interactions have been experimentally demonstrated and used in a large number of quantum simulations to explore both equilibrium and dynamic phenomena. Further, it has been proposed that the inherent long-range Coulomb interactions can be used to realize an arbitrarily programmable, all-to-all connected Ising spin system <cit.>. Existing proposals to simulate models that capture symmetries and phenomena beyond the reach of Ising models, such as XY and Heisenberg models, are either experimentally challenging or unfeasible, or limited in their applicability and tunability. For example, non-local propagation of spin correlations <cit.> and many-body localization <cit.> were studied, on an effective XY-Hamiltonian, by applying a large transverse magnetic field to the Ising Hamiltonian. The transverse field restricts the Hilbert space contributing to the spin dynamics and results in an effective XY Hamiltonian only at discrete times <cit.>. The above approach also breaks down for long evolution times <cit.>, and does not allow the simulation of the anisoptopic XY model (i.e., interactions of the form J^x_i,j+ J^y_i,jwith J^x_i,j≠ J^y_i,j, where σ_x(y)^i are the usual spin-1/2 Pauli matrices). Alternate proposals make use of orthogonal sets <cit.> of motional modes, with each set mediating an independent Ising term (such as or ). However, exciting multiple sets of orthogonal normal modes require additional laser beams, electronic controls, and complex optical design beyond the scope of current experimental setups. 
Further, this scheme may not produce the same form and range of interactions along different spin axes <cit.>, limiting the usefulness of such simulations. Here, we demonstrate the creation of XY-type interactions, including the anisotropic XY-model, that simulates the equivalent spin dynamics in continuous time (limited by the coherence time of the system), and is readily implemented in existing experimental setups. Our key insight is that the same set of motional modes can mediate both and interactions and the error in the evolution can be made negligible with the proper choice of the applied spin-dependent forces. We experimentally demonstrate the dynamics of two ion spins under the simulated XY Hamiltonian and numerically show that the scheme is easily scaled for larger systems. Spin-spin interactions can be induced <cit.> between ions by employing spin-dependent forces (SDF) that off-resonantly excite their collective vibrational modes. Such SDFs can be applied using Raman transitions from lasers that are far detuned from unwanted atomic excitation. The Mølmer-Sørensen scheme <cit.> for generating Ising-type interactions uses an SDF at a frequency μ that is generated by simultaneously applying Raman `beatnote' frequencies ω_hf±μ (the so-called `blue' and `red' sidebands) <cit.>. Here, ω_hf is the frequency splitting between the two spin states. Under the rotating wave (ω_hf≫μ) and the Lamb-Dicke approximations <cit.>, the spin-phonon Hamiltonian is: H = ∑_iΩ_i cos( μ t + ψ_i ) ( δk⃗·x⃗_i ) σ_θ_i^i where, δk⃗·x⃗_i = ∑_m η_im ( â_m e^-i ω_m t + â_m^† e^ i ω_m t ). Here, Ω_i is the Rabi frequency at the i^th ion position, â_m and â_m^† are the phonon annihilation and creation operators for the m^th motional mode at frequency ω_m. The Lamb-Dicke parameters η_im = b_im |δk⃗| √(ħ/2Mω_m) include the normal mode transformation matrix element b_im of the i^th ion and m^th normal mode <cit.>. Where ∑_i |b_im|^2 = ∑_m |b_im|^2 = 1, M is the mass of the ion and σ_θ_i^i = σ_x^i cosθ_i + σ_y^i sinθ_i. The spin phase θ_i and the motional phase ψ_i are determined from the relative phases of the red and blue sidebands <cit.>. The evolution operator for this Hamiltonian can be found using the Magnus expansion, which terminates after the second term, U(τ) = exp(-i ∫_0^τdt H(t) - 1/2∫_0^τ dt_1 ∫_0^t_1dt_2 [ H(t_1), H(t_2) ] ) = exp( ∑_i ϕ̂_i(τ)σ_θ_i^i + i ∑_i<jχ_i,j(τ) σ_θ_i^i σ_θ_j^j ). In the `slow' regime (|μ - ω_m| ≫η_imΩ_i), ϕ̂_̂î(τ) is negligible <cit.> and χ_i,j(τ) in Eq. (<ref>) is dominated by a `secular' term, which grows linearly with t, giving rise to an effective Hamiltonian, H_eff = ħ∑_i<j J_i,jσ_θ_i^i σ_θ_j^j where, the Ising coupling, J_i,j = Ω_i Ω_j ħ ( δk⃗ )^2/2M∑_m b_im b_jm/μ^2 - ω_m^2. Note that, the unitary evolution in Eq. (<ref>) will, in general, include additional AC Stark shifts such as from off-resonant excitation of the `carrier' spin transition from the SDFs (which we did not include in Eq. (<ref>) and (<ref>), as in Refs. <cit.> for simplicity, but they must be accounted for in experiments). Multiple SDFs operating in parallel have been theoretically suggested <cit.> for creating more complex interactions. Recent experiments have used SDFs along two orthogonal modes to generate parallel quantum gates <cit.>. In our protocol, we apply two SDFs at frequencies μ_1 and μ_2 (Fig. 1(a)), with both of them exciting (off-resonantly) the same motional modes. We choose the red and blue sideband phases to generate a spin phase of 0 (corresponding to σ_θ_i = σ_x in Eq. 
(<ref>) ) for the first SDF and π/2 (corresponding to σ_θ_i = σ_y) for the second SDF. The resulting spin-phonon Hamiltonian becomes, H = H_x + H_y, where, H_x = ∑_i,mη_imΩ_i^x cos( μ_1 t + ψ^x_i ) ( â_m e^-i ω_m t + h.c.) σ_x^i, H_y = ∑_i,mη_imΩ_i^y cos( μ_2 t + ψ^y_i ) ( â_m e^-i ω_m t + h.c.) σ_y^i. Here, Ω^x_i and Ω^y_i are Rabi frequencies, and ψ_i^x, ψ_i^y are motional phases corresponding to the 2 SDFs respectively. Again, if we operate each of the two forces in the slow regime (|μ_1 - ω_m| ≫η_imΩ_i^x, |μ_2 - ω_m| ≫η_imΩ_i^y ), the first term in the Magnus expansion is negligible <cit.> and the evolution operator arising from the second term in the Magnus expansion becomes, U(τ) = exp( - 1/2∫_0^τ dt_1 ∫_0^t_1dt_2 [ H(t_1), H(t_2) ] ) = exp( -iτ∑_i<j J_ij^x σ_x^i σ_x^j -iτ∑_i<jJ_ij^y σ_y^i σ_y^j + ∑_i<jΛ_ij(τ)σ_x^i σ_y^j + ∑_i ζ̂_i(τ) σ_z^i ). Where, J_ij^x = Ω_i^x Ω_j^x ħ ( δk⃗ )^2/2M∑_m b_im b_jm/μ_1^2 - ω^2_m, J_ij^y = Ω_i^y Ω_j^y ħ ( δk)^2/2M∑_m b_im b_jm/μ_2^2 - ω^2_m. The first two terms in the exponent in Eq. (<ref>) come from [ H_x(y)(t_1),H_x(y)(t_2) ], and result in the desired spin-spin interactions. The last two terms come from the cross terms, i.e. [ H_x(y)(t_1),H_y(x)(t_2) ], and lead to an undesirable spin-phonon coupling. For μ_1=μ_2, the single frequency SDF can be rewritten with a different spin phase, as can be seen from Eq. (<ref>), and therefore the resulting effective spin-spin Hamiltonian is Ising type (σ_θ^i σ_θ^j) (Fig. <ref>(b)). For μ_1 ≠μ_2, there is no secular term in Λ_ij(τ) and ζ̂_i(τ), however, they may have non-trivial oscillatory terms (see appendix). As |μ_1 - μ_2| is increased beyond zero, the contribution of the oscillatory terms diminishes (Fig. <ref>(c)), and the evolution becomes consistent with an effective XY-type Hamiltonian (Fig. 1(d)), H_eff = ħ∑_i<j J_ij^x σ_x^i σ_x^j + ħ∑_i<j J_ij^y σ_y^i σ_y^j, when, |μ_1-μ_2| ≫max_i,j(| J_ij^x| ), |μ_1-μ_2| ≫max_i,j( |J_ij^y| ). In the following section, we provide experimental validation of the above analysis. The experiments are performed on ions in a four-rod Paul trap with trap frequencies ω_X ≈ 2π× 1.135 MHz, ω_Y ≈ 2π× 0.920 MHz and ω_Z ≈ 2π× 201 kHz. The spins are encoded in the two hyperfine `clock' states, S_1/2|F=0,m_F=0⟩ (|↓⟩) and S_1/2|F=1,m_F=0⟩ (|↑⟩), of the ions, separated in energy by the hyperfine splitting ω_hf/2π = 12.643 GHz. Here, F and m_F are quantum numbers representing the total atomic angular momentum and its projection along a weak magnetic field of around 3.5 G. We perform coherent operations on the ions, through 2-photon Raman transitions with a 355 nm pulsed laser with a repetition rate of  80 MHz <cit.>. The wave-vector difference of the two Raman beams, δk⃗, is oriented such that we can excite phonon modes along both transverse trap axes, X' and Y' (Fig. <ref>(a)) . We modulate the frequency of the Raman 1 beam with four harmonic tones to create four beatnotes driving SDFs at two frequencies μ_1 = ω^X'_TILT - 2π× 8 kHz, μ_2 = ω^X'_TILT - 2π× 5 kHz respectively (Fig. <ref>(b)). Here, ω^X'_COM=ω_X and ω^X'_TILT= 2π× 1.117 MHz are the frequencies of the COM and TILT modes in the X' direction respectively. The SDF detunings are chosen to be smaller than the separation between the modes to minimize the contribution from all modes except the X' TILT mode. The experimental sequence is as follows. 
We apply 1.5 ms of Doppler cooling and 8 ms of Raman sideband cooling to cool all transverse modes to n < 1 to be in the Lamb-Dicke regime, and global optical pumping for 20 μs to initialize in state. An optional π-pulse driven by microwave radiation (at frequency ω_hf) and a site-selective optical pumping (that maintains the coherence of the neighboring ion with ∼ 99.9% fidelity <cit.>) can alternatively prepare the initial states and . The spin-dependent forces are then applied (Eq. (<ref>)) for a pulse duration τ. We finally measure the spin states by state-dependent fluorescence on a photo-multiplier tube (PMT) for 1.5 ms. We calibrate the fluorescence counts by preparing the spins in with a microwave π-pulse in a separate experiment and obtain approximately 80 PMT counts for this state. Since global fluorescence measurements cannot distinguish between and states, we apply a local optical pumping pulse on the first ion just before the measurement to convert it to a single ion measurement, when necessary. Figure <ref>(c) shows the spin dynamics when initialized in state. As in the numerical simulations Fig. <ref>, we tune the Rabi frequencies to achieve J^x_12≈ J^y_12 to get an effective XY model when the interactions are mediated simultaneously. We set the Rabi frequencies, Ω^x_i = 2 π× 15 kHz and Ω^y_i = 2 π× 11.5 kHz (approximately equal between the two ions). With H_x and H_y applied separately, we observe oscillations between and , as expected. We estimate J^x_12 = 2π×77(2) Hz and J^y_12 = 2π×80(3) Hz. When applied simultaneously, we find no observable oscillations, which is the signature of the XY Hamiltonian, since (σ_x^1 σ_x^2 + σ_y^1 σ_y^2 )= 0. The slow increase in the fluorescence counts for the XY Hamiltonian is likely due to the decoherence in the system and slow drifts in the intensity of the laser and the trap frequency (estimated to contribute <15% drift in J^x_12 or J^y_12) over the duration of the data run. To further validate that the and the couplings are present simultaneously, we initialize the ions in the state and repeat the experiment. Here, we expect oscillations between the and the states, at frequency J^x_12 + J^y_12 (see appendix). With a global detection, we expect the fluorescence counts to be flat as observed in Fig. <ref>(d). However, with an individual detection on ion 1 (i.e. by pumping one ion before detection), we observe oscillations in the fluorescence counts as expected from oscillations between the and . From the data in Fig. <ref>(d), we observe oscillations at 2π× 178(2) which is within the expected 10% fluctuations of the extracted J^x_12 + J^y_12 from previous experiments. The inherent full-connectivity of trapped ions allows for the scheme to be readily scalable to a large number of ions like in the case of the tunable Ising interactions <cit.>. For example, an interaction profile with a power law decay that has been widely studied for quantum Ising models <cit.> can be extended to XY interactions. To achieve this interaction profile, first the Rabi frequencies (Ω_i^x) and the detuning (μ_1) can be chosen to obey an approximate power law in the coupling matrix J^x in Eq. (<ref>). Then, μ_2 can be chosen to satisfy Eq. (<ref>) while keeping it close to μ_1, to maintain the same form for J^x and J^y. Further, if J^x=J^y is desired, then Ω_i^ys can be calculated using a global scaling w.r.t Ω_i^xs to compensate for the unequal μ_1 and μ_2. 
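A numerical sketch of this bookkeeping is given below (NumPy). The coupling formula is the one quoted for J_ij above; the mode frequencies, mode vectors b_im, Rabi frequencies, the assumption of a 171Yb+ ion (suggested by the 12.643 GHz clock splitting), and the choice |δk| = √2 · 2π/355 nm are placeholders meant to resemble the two-ion setup described here, not the calibrated experimental parameters.

```python
import numpy as np

hbar = 1.054571817e-34
M_ion = 170.936 * 1.66053906660e-27          # ~ mass of a 171Yb+ ion (kg), assumed
dk = np.sqrt(2) * 2 * np.pi / 355e-9         # assumed Raman wave-vector difference (1/m)

def coupling_matrix(omega_rabi, mu, omega_m, b):
    """J_ij = Omega_i Omega_j hbar dk^2/(2M) * sum_m b_im b_jm / (mu^2 - omega_m^2)."""
    mode_term = b @ np.diag(1.0 / (mu**2 - omega_m**2)) @ b.T
    J = (hbar * dk**2 / (2 * M_ion)) * np.outer(omega_rabi, omega_rabi) * mode_term
    np.fill_diagonal(J, 0.0)
    return J                                  # angular-frequency units (rad/s)

# Placeholder two-ion transverse X' modes and SDF parameters (illustrative only).
omega_m = 2 * np.pi * np.array([1.135e6, 1.117e6])   # COM, TILT frequencies (rad/s)
b = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2) # normal-mode matrix b_im
mu1 = omega_m[1] - 2 * np.pi * 8e3
mu2 = omega_m[1] - 2 * np.pi * 5e3
Omega_x = 2 * np.pi * np.array([15e3, 15e3])
Omega_y = 2 * np.pi * np.array([11.5e3, 11.5e3])

Jx = coupling_matrix(Omega_x, mu1, omega_m, b)       # comes out near 2*pi*80 Hz
Jy = coupling_matrix(Omega_y, mu2, omega_m, b)

# Separation condition for the effective XY Hamiltonian: this ratio must be >> 1.
separation = abs(mu1 - mu2) / max(np.abs(Jx).max(), np.abs(Jy).max())

# If Jx == Jy is desired, rescale all Omega_y by one global factor (J ~ Omega^2).
scale = np.sqrt(np.abs(Jx).max() / np.abs(Jy).max())
Jy_matched = coupling_matrix(scale * Omega_y, mu2, omega_m, b)
```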
Figure 3 shows that the interaction profiles along x and y-directions match to better than 99% with the global scaling of Rabi frequencies, even when |μ_1-μ_2| is chosen to be 30 times higher than max(J^x). We find that this approach of scaling the Rabi frequencies works well whenever μ_1(μ_2) is parked close to a motional mode since the contribution to the J_ij^x (J_ij^y) is dominated by a single motional mode. Note that, Eq. (<ref>) is weaker than the constraint for applying each of the forces in the slow regime (|μ_1(2) - ω_m| ≫η_imΩ_i^x(y) ). This is because, in the slow regime, individual matrix elements J_ij^x (J_ij^y) are an order of magnitude smaller than |μ_1(2) - ω_m| and hence leave enough freedom to satisfy Eq. (<ref>). Thus, by simultaneously applying a pair of SDFs near each motional mode, the full spin-spin interaction profile can be engineered arbitrarily <cit.>. It should be noted, however, that the calculation of the coupling matrix should take into account any AC Stark shift induced from the spin-dependent forces when the relative scale of the couplings needs to be precisely matched. In summary, we have demonstrated tunable long-range XY-type couplings (J_ij^x+ J_ij^y) by the parallel application of two spin-dependent forces on the same motional modes. Our approach allows for analog quantum simulations of the XY and anisotropic XY models, as the effective Hamiltonian (Eq. (<ref>)) is valid in continuous time. This opens possibilities to study ground state order of frustrated XY-type models, in principle on programmable lattice geometries <cit.>, and investigate exotic quantum phases, such as spin liquids <cit.>. Further, evolution under the XY Hamiltonian can be interspersed with single spin quantum gates in analog-digital hybrid quantum simulations <cit.> to investigate dynamical phase transitions, Hamiltonian quenches, and quantum transport. Our demonstration of parallel SDFs on the same motional modes can further be extended to simulate XYZ-type Hamiltonians by adding a σ_z-SDF (readily implemented using the light-shift gate schemes <cit.>). We thank Jingwen Zhu for helping us on the experimental setup. We acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada Discovery (RGPIN-2018-05250) program, Ontario Early Researcher Award, Canada First Research Excellence Fund (CFREF), New Frontiers in Research Fund (NFRF), University of Waterloo, and Innovation, Science and Economic Development Canada (ISED). apsrev.bst § DISTINGUISHING BETWEEN XX AND XY HAMILTONIAN (2 IONS) If we start with the Hamiltonian H = J^x_12σ_x^1 σ_x^2 + J^y_12σ_y^1 σ_y^2 and the evolution operator U such that U(τ) = exp(-iHτ). It is easy to show that: * U = cos(τ(J^x_12 - J^y_12) ) - i sin(τ(J^x_12 - J^y_12) ) * U = cos(τ(J^x_12 - J^y_12) ) - i sin(τ(J^x_12 - J^y_12) ) * U = cos(τ(J^x_12 + J^y_12) ) - i sin(τ(J^x_12 + J^y_12) ) * U = cos(τ(J^x_12 + J^y_12) ) - i sin(τ(J^x_12 + J^y_12) ) For the case of the XY Hamiltonian, we have that J^x_12 = J^y_12. When initialized in , we do not expect to see oscillations between ( , ). But when initialized in , we expect to see oscillations between ( , ) at a frequency of (J^x_12 + J^y_12). § DETAILED DERIVATION OF CONSTRAINT IN EQ. (<REF>) After RWA to Eq. (<ref>) and with δ_xm = ω_m - μ_1 and δ_ym = ω_m - μ_2 we have, H_x = ∑_i,mη_imΩ_i^x ( a_m e^-i (δ_xm t + ψ_x) + h.c.) σ_x^i, H_y = ∑_i,mη_imΩ_i^y ( a_m e^-i (δ_ym t + ψ_y) + h.c.) 
σ_y^i The first two terms in the exponent of (<ref>) come from calculations already described in <cit.>. The last two terms come from the cross commutators, [H_x(t_1),H_y(t_2)] + [H_y(t_1),H_x(t_2)] = [∑_i,mη_imΩ_i^x ( a_m e^-i (δ_xm t_1 + ψ_x) + h.c.) σ_x^i, ∑_j,nη_jnΩ_j^y ( a_n e^-i (δ_yn t_2 + ψ_y) + h.c.) σ_y^i ] + [∑_i,mη_imΩ_i^y ( a_m e^-i (δ_ym t_1 + ψ_y) + h.c.) σ_y^i, ∑_j,nη_jnΩ_j^x ( a_n e^-i (δ_xn t_2 + ψ_x) + h.c.) σ_x^i ] = ∑_i,m,nη_imη_inΩ_i^x Ω_i^y ( a_m e^-i (δ_xm t_1 + ψ_x) + h.c.) ( a_n e^-i (δ_yn t_2 + ψ_y) + h.c.) [σ_x^i,σ_y^i] + ∑_i,m,nη_imη_inΩ_i^y Ω_i^x ( a_m e^-i (δ_ym t_1 + ψ_y) + h.c.) ( a_n e^-i (δ_xn t_2 + ψ_x) + h.c.) [σ_y^i,σ_x^i] + ∑_i,j,mη_imη_jmΩ_i^x Ω_j^y [( a_m e^-i (δ_xm t_1 + ψ_x) + h.c.), ( a_m e^-i (δ_ym t_2 + ψ_y) + h.c.)] σ_x^i σ_y^j + ∑_i,j,mη_imη_jmΩ_i^y Ω_j^x [( a_m e^-i (δ_ym t_1 + ψ_y) + h.c.), ( a_m e^-i (δ_xm t_2 + ψ_x) + h.c.)] σ_y^i σ_x^j = 2 i ∑_i,m,nη_imη_inΩ_i^x Ω_i^y ( a_m e^-i (δ_xm t_1 + ψ_x) + h.c.) ( a_n e^-i (δ_yn t_2 + ψ_y) + h.c.) σ_z^i - 2 i ∑_i,m,nη_imη_inΩ_i^x Ω_i^y ( a_m e^-i (δ_ym t_1 + ψ_y) + h.c.) ( a_n e^-i (δ_xn t_2 + ψ_x) + h.c.) σ_z^i + 2i ∑_i,j,mη_imη_jmΩ_i^x Ω_j^y σ_x^i σ_y^j (sin( δ_ym t_2 - δ_xm t_1 + ψ_y - ψ_x) - sin( δ_ym t_1 - δ_xm t_2 - ψ_y + ψ_x) ) Using the above expressions ( and performing the integrals in Eq. (<ref>) we find that For μ_1 ≠μ_2, there is no secular term in Λ_ij(τ) and ζ̂_i(τ), however, the oscillatory term in Λ_ij(τ) could get unbounded due to the presence of (δ_xm - δ_ym) in the denominator. Let us define Λ_ij to be the coefficient of the largest oscillatory term in Λ_ij(τ). It can then we shown that |Λ_ij| ≤4 ×max( |J^x_ij| , |J^y_ij| )/ |μ_1 - μ_2| . When Eq. (<ref>) is satisfied, Λ_ij remains small and the contribution of the cross terms in Eq. (<ref>) becomes negligible and the XY-type couplings remain in the effective Hamiltonian.
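The two-ion results quoted in the main text and in the first appendix can be checked directly by exponentiating H = J^x_12 σ_x^1 σ_x^2 + J^y_12 σ_y^1 σ_y^2. The sketch below (with couplings set to round numbers close to the measured values; not the authors' code) confirms that population transfer between the two aligned spin states is governed by J^x_12 − J^y_12, and therefore freezes for the XY model, while transfer between the two anti-aligned states proceeds at J^x_12 + J^y_12.

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
up, dn = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)

def ket(A, B):
    return np.kron(A, B)   # two-ion product state

Jx, Jy = 2 * np.pi * 77.0, 2 * np.pi * 80.0      # rad/s, close to the values quoted above
H = Jx * np.kron(sx, sx) + Jy * np.kron(sy, sy)

for tau in np.linspace(0, 5e-3, 6):              # pulse duration in seconds
    U = expm(-1j * H * tau)
    # up,up -> dn,dn transfer: sin^2((Jx - Jy) tau), essentially frozen for Jx ~ Jy
    p_aligned = abs(ket(dn, dn).conj() @ U @ ket(up, up)) ** 2
    # up,dn -> dn,up transfer: sin^2((Jx + Jy) tau)
    p_anti = abs(ket(dn, up).conj() @ U @ ket(up, dn)) ** 2
    print(f"tau = {tau * 1e3:3.0f} ms   P(dd|uu) = {p_aligned:.3f}   P(du|ud) = {p_anti:.3f}")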
http://arxiv.org/abs/2307.03911v1
20230708060513
A Novel Pseudo-Random Number Generator Based on Multi-Objective Optimization for Image-Cryptographic Applications
[ "Takreem Haider", "Saúl A. Blanco", "Umar Hayat" ]
cs.CR
[ "cs.CR", "cs.IT", "math.IT" ]
aa]Takreem Haiderbb [email protected] aa]Saúl A. Blanco [email protected] ab]Umar Hayatbb [email protected] [bb]Corresponding author [aa]Department of Computer Science, Indiana University Bloomington, IN 47408, USA [ab]Department of Computer Science, University of Surrey, Guildford, Surrey, GU2 7XH, UK Pseudo-random number generators (PRNGs) play an important role to ensure the security and confidentiality of image cryptographic algorithms. Their primary function is to generate a sequence of numbers that possesses unpredictability and randomness, which is crucial for the algorithms to work effectively and provide the desired level of security. However, traditional PRNGs frequently encounter limitations like insufficient randomness, predictability, and vulnerability to cryptanalysis attacks. To overcome these limitations, we propose a novel method namely an elliptic curve genetic algorithm (ECGA) for the construction of an image-dependent pseudo-random number generator (IDPRNG) that merges elliptic curves (ECs) and a multi-objective genetic algorithm (MOGA). The ECGA consists of two primary stages. First, we generate an EC-based initial sequence of random numbers using pixels of a plain-image and parameters of an EC, that depart from traditional methods of population initialization. In our proposed approach, the image itself serves as the seed for the initial population in the genetic algorithm optimization, taking into account the image-dependent nature of cryptographic applications. This allows the PRNG to adapt its behavior to the unique characteristics of the input image, leading to enhanced security and improved resistance against differential attacks. Furthermore, the use of a good initial population reduces the number of generations required by a genetic algorithm, which results in decreased computational cost. In the second stage, we use well-known operations of a genetic algorithm to optimize the generated sequence by maximizing a multi-objective fitness function that is based on both the information entropy and the period of the PRNG. By combining elliptic curves and genetic algorithms, we enhance the randomness and security of the ECGA. To evaluate the effectiveness and security of our generator, we conducted comprehensive experiments using various benchmark images and applied several standard tests, including the National Institute of Standards and Technology (NIST) test suite. We then compared the results with the state-of-the-art PRNGs. The experimental results demonstrate that the ECGA outperforms the state-of-the-art PRNGs in terms of uniformity, randomness, and cryptographic strength. Pseudo-random number generatorElliptic curveGenetic algorithmMulti-objective optimization § INTRODUCTION Pseudo-random number generators (PRNGs) are extensively used in numerous fields, such as statistics, computer science, cryptography, and gaming <cit.>. PRNGs generate a sequence of numbers that appear random but are produced from a predetermined starting point using a mathematical formula. The quality of a PRNG is of utmost importance in the field of cryptography as the security of a cryptographic system depends on the randomness and unpredictability of the keys generated. A good PRNG should possess the following key features to ensure the quality and security of the generated random numbers <cit.>. 1) Randomness: To achieve the quality of a reliable PRNG, the generated number sequence must exhibit no distinguishable characteristics when compared to a truly random sequence. 
The generated output should be devoid of any identifiable patterns, and each number in the sequence should not correlate with the numbers that precede or follow it. 2) Unpredictability: The PRNG needs to possess resistance against attacks aimed at forecasting future outputs by analyzing past outputs. To achieve this, the PRNG must possess a substantial internal state and employ a robust cryptographic algorithm. 3) Periodicity: Each PRNG has a specific point at which the sequence it produces starts repeating. A PRNG is considered to be of high quality if its period is significantly longer, meaning it is approximately equal to the total number of possible outputs. 4) Security: The primary focus of the PRNG should be on maintaining strong security measures and guaranteeing resilience against commonly recognized forms of attacks, such as brute-force attacks. 5) Efficiency: The PRNG should possess efficiency in terms of both computational speed and memory consumption, particularly for tasks that involve generating a substantial number of random values, such as the creation of cryptographic keys. Designing a PRNG with optimal randomness is a challenging task that requires balancing many different factors <cit.>. The quality of random numbers generated by a PRNG is crucial in many applications, and it is essential to carefully evaluate and choose a PRNG that meets the quality and security requirements of the application or system. In recent years, chaotic systems have become popular in the development of PRNGs due to their desirable features such as unpredictability, irreducibility, sensitivity to initial conditions, ergodicity, and chaoticity <cit.>. Various PRNGs have been designed based on chaotic maps, for instance, PRNGs described in <cit.>. Murillo et al. <cit.> introduced a PRNG that uses an improved 1D logistic map to generate pseudo-random numbers with strong statistical characteristics. Hamza <cit.> presented a method that uses the Chen chaotic system to construct a PRNG for cryptographic purposes involving images. This method <cit.> addresses the issue of non-uniform distribution commonly found in pseudo-random number sequence (PRNS) generated by the Chen chaotic system and produces PRNS with a high level of randomness. Xia and Zheng <cit.>, developed a novel PRNG that utilizes a controlled digital chaotic system. The purpose of this generator is to enhance the dynamic degradation that arises from the use of chaotic systems. Meranza et al. <cit.> utilizes an improved version of the Henon map to design a PRNG. Their research indicates that the cryptographic properties of PRNS generated by the enhanced Henon map are superior to those produced by the traditional Henon map. Barani et al. <cit.>, designed a PRNG for creating PRNS by utilizing a generalized Newton complex map. To ensure the randomness of the generated sequences, several security measures were implemented and the outcomes indicated that this generator can produce secure PRNS. Zhao et al. <cit.> used a hyper-chaotic system to design a PRNG that exhibits high levels of randomness. In addition, Wang et al. <cit.> constructed a PRNG that is based on a logistic chaotic system. Gayoso et al. <cit.> introduced a new PRNG that utilizes the residue number system, which enables the creation of an exceptionally efficient circuit that operates distinctly compared to conventional generators. Furthermore, in <cit.>, the authors create a structure resembling a Hopfield neural network where each neuron is substituted with a compact PRNG. 
Yu et al. <cit.> developed a PRNG that uses a chaotic system and an improved Hopfield neural network. Their PRNG is designed to decrease the impact of chaotic degradation and enhance the quality of PRNS. Cang et al. <cit.> presented a PRNG based on a generalized conservative Sprott-A chaotic system. Agarwal et al. <cit.> designed a PRNG that is based on the cascade fractal function. The cascade function is created using a combination of two seed maps, which improves the unpredictability and randomness of PRNG. Shi and Deng <cit.> proposed a new PRNG that is based on Baker chaotic map and can generate highly random PRNS. Zang et al. <cit.> developed an algorithm for generating PRNS using complex polynomial chaotic maps. The PRNS generated by this method shows strong randomness and are vulnerable to differential attacks. A significant limitation associated with chaos-based cryptography arises from the fact that chaotic maps are designed to work with real numbers, which is not ideal for cryptographic applications that use finite numbers. Round-off errors in quantizing real numbers can create issues that result in irreversible functions, making decryption impracticable. The use of elliptic curve cryptography (ECC) is gaining popularity in modern cryptographic applications due to its effectiveness, strong security measures, and resilience against attacks. The difficulty of solving the elliptic curve discrete logarithm problem (ECDLP) is a significant factor that motivates the preference for elliptic curves (ECs) over chaotic maps in the design of cryptographic algorithms. Furthermore, ECs exhibit the advantage of necessitating significantly reduced key sizes in comparison to chaotic maps. This characteristic renders them more efficient and viable for implementation within environments that face limitations in terms of available resources. As a result, various PRNGs using the arithmetic of ECs have been designed. Hayat and Azam <cit.>, proposed an algorithm for generating PRNS which is based on ordered ECs. This method is efficient when compared with previously introduced PRNGs over ECs. However, the generator <cit.> is not suitable for ECs over large prime p due to high space and time complexity such as 𝒪(p) and 𝒪(p^2), respectively. A PRNG based on Mordell elliptic curve is introduced by Ullah et al. <cit.>, which is more efficient and has better cryptographic properties than <cit.>. However, the time and space complexity of the generator <cit.> is 𝒪(mp) and 𝒪(m), respectively, where m ≤ p is the size of PRNS, due to which this generator is not compatible with ECs associated with large prime p. An isomorphic EC-based PRNG, developed by Haider et al. <cit.>, produces sequences with high randomness and outperforms existing generators in terms of cryptographic properties; however, it faces compatibility issues with large prime ECs when the parameters of EC and the size of the ordered set are not predetermined. Recently, Adhikari and Karforma <cit.> presented a PRNG over large prime ECs. To generate pseudo-random numbers, firstly, the y-coordinate of the generated point over the EC is extracted and then the least significant 8 bits of the extracted y-coordinate are converted to its decimal representation. Although this PRNG is compatible with ECs over large primes to obtain PRNS with length ℓ, this algorithm needs to generate ℓ number of points over the EC. So, for large ℓ, this method is not suitable for real-world applications. 
The existing EC-based PRNGs have exhibited favorable outcomes, however, they do not guarantee the generation of PRNs with security levels closely approximating the theoretically optimal values. §.§ Our contribution To address the aforementioned issues, our focus is directed toward the design of a PRNG that produces random numbers of high quality, exhibiting optimal randomness. The following steps outline our contributions: 1) To overcome the challenges posed by low randomness, predictability, and vulnerability to cryptanalysis attacks, we employ a multi-objective genetic algorithm (MOGA) optimization technique. This approach is chosen because MOGA allows us to simultaneously optimize multiple objectives, such as randomness, predictability, and resistance to cryptanalysis attacks. By employing MOGA, we can enhance the overall quality and security of the generated random numbers. 2) Our method takes advantage of the image itself as the seed for the initial population in the genetic algorithm optimization process. This choice is motivated by the image-dependent nature of cryptographic applications. By using the image as the seed, the PRNG can adapt its behavior to the unique characteristics of the input image. This adaptation improves the security of the PRNG, making it more resistant to differential attacks and increasing the level of protection against potential threats. 3) We generate an initial sequence of random numbers based on both the plain-image and elliptic curve. This departure from traditional methods of population initialization is selected for a specific reason. By using the plain-image and elliptic curve, we ensure that the initial solution provided to the genetic algorithm is well-chosen and has desirable properties. This decreases the number of generations required by the genetic algorithm and thus minimizes the overall computation time. 4) The genetic algorithm is utilized to improve the generated sequence by maximizing a fitness function that considers both information entropy and the period of the pseudo-random sequence. This choice is made because the genetic algorithm is effective in optimizing problems that have multiple objectives. By maximizing the fitness function, we can enhance the information entropy and period of the generated sequence, leading to superior-quality random numbers. 5) To evaluate the performance and security of our proposed PRNG, extensive experiments are conducted using various benchmark images. The results are then compared with the existing state-of-the-art PRNGs. The experiments serve as empirical evidence supporting our claims of enhanced performance and security. §.§ Paper organization The rest of the paper is organized as follows. In <ref>, we present the preliminary theoretical background and notions used in the paper. Moreover, in <ref>, we describe our method ECGA for the construction of the proposed IDPRNG based on elliptic curves and a genetic algorithm. In <ref>, we present a comprehensive security analysis and the comparison of the ECGA with the state-of-the-art PRNGs, establishing empirically that our method is superior. Finally, in <ref>, we summarize the findings of the ECGA and discuss possible future directions of our line of work. § PRELIMINARIES In this section, we present fundamental concepts that are used in our algorithm and hold significance for comprehension. §.§ Elliptic Curves Let p>3 be a prime number and a,b be two integers, and we use 𝔽_p to denote the finite field of p elements. 
An elliptic curve (EC) denoted by E_p,a,b is defined as a set of solutions (x,y) ∈𝔽_p ×𝔽_p satisfying the equation y^2≡ x^3+ ax +b p, along with an additional point at infinity denoted by ∞, where a, b ∈𝔽_p and 4a^3+27b^2≢0 p. The group law operation +_g on an elliptic curve E_p,a,b is defined as follows <cit.>. We denote the multiplicative inverse of a∈𝔽_p by a^-1. Point addition: Let G_1 = (x_1,y_1) ans G_2 = (x_2,y_2) be two points on E_p,a,b such that G_1≠ G_2 and x_1≠ x_2, then the computation of the resulting point G_1 +_g G_2 = (x_3, y_3) when performing point addition over E_p,a,b is given as: λ ≡ (y_2 - y_1)(x_2 - x_1)^-1p. x_3 ≡λ^2 - x_1 - x_2p, y_3 ≡λ(x_1 - x_3) - y_1p. Point doubling: Let G_1 = (x_1,y_1) be a point on E_p,a,b such that y_1≠ 0, then the computation of the resulting point G_1 +_g G_1=(x_3, y_3) when performing point doubling over E_p,a,b is given as: λ ≡ (3 x^2_1)(2 y_1)^-1p x_3 ≡λ^2 - 2x_1p, y_3 ≡λ(x_1 - x_3) - y_1p. §.§ Genetic Algorithm In recent times, there has been significant interest among researchers in evolutionary algorithms (EAs), which have been recognized as valuable in numerous applications (see, e.g., <cit.> and references within). One of the most widely recognized types of EAs is genetic algorithms (GAs), which are search heuristics. Several applications have been devised using genetic algorithms <cit.>. Fundamentally, a GA can be divided into four primary stages <cit.>, which we describe below. * Initial population: During the creation of the population, an initial set of individuals or solutions is generated. These individuals are designed to form the starting point for the genetic algorithm. They represent possible solutions to the problem being addressed and are encoded in a format that allows the algorithm to work with and manipulate them. * Selection: During the selection step, a portion of individuals from the population is selected as parents for the next generation. This selection is primarily determined by the fitness or quality of each individual solution. Individuals with higher fitness scores are more likely to be chosen as parents since they are considered to have superior solutions. * Crossover: During the crossover step, new solutions are generated by merging the genetic material of chosen parent individuals. The genetic material, typically represented as chromosomes or sequences of values, is swapped between parents to produce offspring. This procedure emulates the natural recombination of genes that takes place during reproduction. * Mutation: Mutation involves a spontaneous and random alteration in a specific aspect or trait of a solution. During the mutation step, slight modifications are made to the genetic composition of individual solutions. This randomness plays a crucial role in introducing fresh genetic variations within the population. A genetic algorithm explores the search space through multiple iterations, aiming to discover an optimal or nearly optimal solution for the given problem. § ELLIPTIC CURVE GENETIC ALGORITHM (ECGA) In this section, we provide a novel method namely elliptic curve genetic algorithm (ECGA) for generating pseudo-random numbers by using elliptic curves and a genetic algorithm. The effectiveness of a PRNG highly depends on its tendency to produce random and unpredictable sequences. The quality of a PRNG is significantly enhanced by two important factors, namely high entropy and a long period. 
The presence of high entropy in a sequence of numbers makes it difficult for an attacker to predict the next number, while a long period decreases the chances of repetitive patterns that could be used for exploitation. We propose a generator that consists of two important stages. Initially, we utilize points on elliptic curves to create a sequence of random numbers. Subsequently, we employ the operations of a genetic algorithm to maximize a fitness function that considers multiple objectives. This fitness function guarantees a higher degree of randomness by considering both the information entropy and the period of the sequence. By employing an appropriate initial solution based on ECs, the optimization process reduces the number of generations needed, thus decreasing computational time. Furthermore, our proposed method ECGA enhances the unpredictability and randomness of the PRNG by incorporating elliptic curves and a genetic algorithm. We provide a concise overview of each stage of the ECGA as follows. §.§ Initialization We use elliptic curves to generate an initial sequence of pseudo-random numbers. The initialization process comprises the following procedures: Step 1: Let I be a plain-image in a two-dimensional array with dimensions r× s, where each element belongs to the symbol set [0, 2^m-1] and m represents the number of bits in the pixel of an image. Here, if n_1,n_2 are two integers with n_1<n_2, [n_1,n_2]={n_1,n_1+1,…,n_2}. We consider an elliptic curve E_p,a,b defined by the equation y^2≡ x^3+ax +b p, where p, a, and b are parameters of the curve and G=(x_0,y_0) is a base point on the curve. Let us assume that H_I, H_a, H_b, and H_p denote the SHA-256 hash value of I, a, b, and p, respectively. The proposed approach begins by selecting an initial point denoted as G_0=(x_0,y_0) on E_p, a,b. Subsequently, a series of n points G_k=(x_k,y_k) are generated, where 1 ≤ k ≤ n. Step 2: Calculate the binary values for the x and y coordinates of the point G_k=(x_k,y_k), with 1≤ k≤ n. We define the function B(x_k) that takes as input the decimal representation of x_k and outputs its binary representation as a sequence of u bits: B(x_k) =x_k^1,x_k^2, … , x_k^u, where u is the number of bits needed to represent x_k in binary form. Similarly, we define the function B(y_k) that takes as input the decimal representation of y_k and outputs its binary representation as a sequence of v bits: B(y_k) =y_k^1,y_k^2, … , y_k^v, where v is the number of bits needed to represent y_k in binary form. Step 3: From Step 1, let ∈{ I, p, a, b } and H_ be the SHA-256 sets from Step 1. Namely, H_ = { h_^i : 1 ≤ i ≤ 256, h_^i∈{ 0, 1 }}. We furthermore define B^x_a,b to be a binary sequence of length 3 ℓ', where ℓ'= min(u,256), obtained by repeatedly merging the corresponding binary bits of H_a, B(x_k), and H_b in an alternating manner as follows: B^x_a,b = h_a^1, x_k^1, h_b^1, h_a^2, x_k^2, h_b^2, …, h_a^ℓ',x_k^ℓ',h_b^ℓ'. Similarly, we define B^y_I,p to be a binary sequence of length 3 ℓ”, where ℓ”= min(v,256), obtained by merging the binary bits of H_I, B(y_k), and H_p as follows: B^y_I,p = h_I^1, y_k^1, h_p^1, h_I^2, y_k^2, h_p^2, …, h_I^ℓ”,y_k^ℓ”,h_p^ℓ”. Step 4: Generate a binary sequence B^x,y_I, p, a,b of length ℓ, where ℓ = 3(ℓ' + ℓ”), by concatenating B^x_a,b and B^y_I,p as follows: B^x,y_I, p, a,b = B^x_a,b +_C B^y_I,p. Note that the concatenation operation denoted by +_C is simply the operation of appending the second sequence to the end of the first sequence. 
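The initialization stage can be sketched end to end in a few lines. The fragment below is an illustrative reconstruction rather than the reference implementation: it uses a toy curve y² = x³ + 2x + 3 over F_97 instead of the NIST curves of Section 4, it serializes I, p, a, b with str(...).encode() because the text does not fix an encoding, and the point-doubling slope includes the curve parameter a, i.e. λ = (3x₁² + a)(2y₁)⁻¹, a term that appears to have been dropped from the displayed formula in Section 2.1.

import hashlib

def point_add(P, Q, p):
    """Group law P +_g Q for distinct points with x1 != x2 (Section 2.1)."""
    (x1, y1), (x2, y2) = P, Q
    lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return x3, (lam * (x1 - x3) - y1) % p

def point_double(P, a, p):
    """Point doubling with slope (3*x1**2 + a) * (2*y1)**-1 mod p."""
    x1, y1 = P
    lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    x3 = (lam * lam - 2 * x1) % p
    return x3, (lam * (x1 - x3) - y1) % p

def sha256_bits(data):
    return bin(int(hashlib.sha256(data).hexdigest(), 16))[2:].zfill(256)

def interleave(h_left, point_bits, h_right):
    """Step 3: merge corresponding bits alternately up to l' = min(len(point_bits), 256)."""
    l = min(len(point_bits), 256)
    return "".join(h_left[i] + point_bits[i] + h_right[i] for i in range(l))

# Step 1: generate points G_k from the base point of a toy curve.
p, a, b = 97, 2, 3
G0 = (3, 6)                                   # satisfies y^2 = x^3 + 2x + 3 mod 97
G1 = point_double(G0, a, p)
G2 = point_add(G1, G0, p)

# Steps 2-4 for one point G_k = (x_k, y_k); the plain-image is a stand-in.
image_bytes = bytes(range(16))
H_I, H_p = sha256_bits(image_bytes), sha256_bits(str(p).encode())
H_a, H_b = sha256_bits(str(a).encode()), sha256_bits(str(b).encode())
x_k, y_k = G1
B_x = interleave(H_a, bin(x_k)[2:], H_b)      # B^x_{a,b}
B_y = interleave(H_I, bin(y_k)[2:], H_p)      # B^y_{I,p}
B_xy = B_x + B_y                              # Step 4: concatenation +_C
print(G1, G2, len(B_xy))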
Step 5: Let B_z denote a randomly selected binary sequence of length ℓ. We define the resultant binary sequence B^x,y_I, p, a,b,z of length ℓ as follows: B^x,y_I, p, a,b,z= 0 if ξ_i=ξ'_i ∀ i∈{ 1,2,…,ℓ} 1 if ξ_i≠ξ'_i ∀ i∈{ 1,2,…,ℓ}, where ξ_i and ξ'_i are the i-th element of the binary sequences B^x,y_I, p, a,b and B_z, respectively. Step 6: Divide the binary sequence B^x,y_I, p, a,b,z into segments of a predetermined length, m, and convert each segment into its decimal representation. Let S_d denotes the decimal representation of B^x,y_I, p, a,b,z. Therefore, S_d can be expressed as a sequence of decimal values δ_1, δ_2, ..., δ_t, where t = ⌊ℓ/m ⌋ . Note that each of the decimal values in the sequence S_d falls within the range of [0, 2^m-1]. Step 7: Repeat Step 2 through Step 6 for each k ∈{ 1, 2, …, n }. Let Δ(I,p,a,b,x,y,z,n) be the resulting sequence of length n × t. Step 8: Choose three positive integers ϕ, ψ, and φ. The proposed IDPRNG is represented by the function Ω : Δ→ [0, 2^m-1], where Ω(δ_i) ≡ϕδ_i + ψδ_i+1 + φ2^m. Thus, Ω(I,p, a,b,x,y,z,n,ϕ, ψ, φ) is the initial sequence of random numbers based on the parameters I,p,a,b,x,y,z, n,ϕ, ψ, and φ. Henceforth, we represent it by Ω_I. §.§ Fitness function A fitness function is a tool used to measure how closely a given solution matches the ideal solution. The proposed algorithm aims to find the best possible solution for a given problem. The degree of uncertainty in a PRNS is often measured using information entropy (H), which is an important measure of randomness. For a PRNS to be considered effective, it must contain a high level of uncertainty. The higher the entropy value, the stronger the generator is considered to be. Let Ω be a PRNS taking values from [0, 2^m-1]. The entropy H(Ω) of Ω is defined as: H(Ω) = - ∑_i=1^2^mP(ω_i) log_2 P(ω_i), where P(ω_i) represents the probability of an i-th element ω_i in Ω. Apart from entropy, the period of a PRNS is also a significant factor in assessing its randomness. A PRNS with a long period is generally considered good for cryptographic purposes. The period of Ω, denoted by T(Ω), is the the least positive integer T for which ω_i + T = ω_i for all i ≥ 1. The case T(Ω)=ℓ(Ω) is optimal since then Ω is considered more secure. To achieve our objective, a multi-objective optimization function is employed that seeks to maximize both the information entropy and the period of a pseudo-random number sequence Ω of length ℓ which is initially generated. The purpose of this function is to generate PRNSs that have both maximum entropy and period, to obtain the best possible results. Our optimization problem is based on the following fitness function. Maximize f(Ω) = H(Ω) + T(Ω), where 0 ≤ H(Ω) ≤ m and 1 ≤ T(Ω) ≤ℓ. §.§ Crossover Let Ω_I = (ω_0, ω_1, ..., ω_2^m-1) be the set of pseudo-random numbers generated during the initialization phase. The goal of the crossover operator is to replace elements in Ω_I with those that result in a higher fitness value. In other words, elements with lower fitness values are replaced with those having comparatively higher fitness values. More concretely, the crossover operation is carried out as follows: i) Let P_e be a random permutation of the integers in the range [1, 2^m]. ii)Let V = (v_1, v_2, ... , v_2^m) be a vector of 2^m elements randomly selected from Ω_I. iii)Define Ω_C as the sequence obtained by replacing the v_i-th element of Ω_I with the i-th element of P_e, i.e., ω_i^C=ω_P_e(i) if i = v_j  for some j ω_i otherwise. 
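To fix ideas, the multi-objective fitness and the crossover step can be sketched as follows. This is one reading of the description above, not the reference implementation: since the roles of P_e and V are stated somewhat ambiguously, the sketch writes a random permutation of all 2^m symbols into 2^m randomly chosen positions, which realizes the stated goal that every integer in [0, 2^m − 1] occurs in Ω_C; the acceptance test mirrors the rule given in the next paragraph, and the sequence length is shortened for illustration.

import numpy as np

rng = np.random.default_rng(0)
m, ell = 8, 10_000                      # bits per symbol; shortened sequence length

def entropy(seq):
    pr = np.bincount(seq, minlength=2**m) / len(seq)
    pr = pr[pr > 0]
    return -np.sum(pr * np.log2(pr))    # H(Omega)

def period(seq):
    for T in range(1, len(seq)):        # smallest T with omega_{i+T} = omega_i for all i
        if np.array_equal(seq[T:], seq[:-T]):
            return T
    return len(seq)

def fitness(seq):
    return entropy(seq) + period(seq)   # f(Omega) = H(Omega) + T(Omega)

def crossover(seq):
    child = seq.copy()
    positions = rng.choice(len(seq), size=2**m, replace=False)
    child[positions] = rng.permutation(2**m)   # every symbol now occurs at least once
    return child

omega_I = rng.integers(0, 2**m, size=ell)      # stand-in for the EC-based initial sequence
omega_C = crossover(omega_I)
if entropy(omega_C) >= entropy(omega_I) and period(omega_C) >= period(omega_I):
    omega_I = omega_C                          # acceptance rule described below
print(fitness(omega_I))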
The purpose of this operation is to increase the diversity of the sequence and improve the chances of finding a globally optimal solution. Specifically, the crossover operation ensures that all integers in the range [0, 2^m-1] are present in Ω_C. To evaluate the quality of the new sequence Ω_C, we compute its entropy H(Ω_C) and period T(Ω_C). If H(Ω_C) ≥ H(Ω_I) and T(Ω_C) ≥ T(Ω_I), then we consider Ω_C as input for the mutation operation. Otherwise, if H(Ω_C) < H(Ω_I) or T(Ω_C) < T(Ω_I), we use Ω_I as input to the mutation phase. §.§ Mutation Let Ω_M be a sequence of length ℓ obtained after crossover operation, and let r and r' be two integers randomly selected from the interval [1,ℓ]. The swapping mutation operator μ_s can be defined as: μ_s(Ω_M,r,r') = Ω_M', where Ω_M' is the sequence obtained after swapping the element at position r with the element at position r' in Ω_M. Since the entropy of a sequence is not affected by the swapping operation, therefore in the mutation phase we only compute the period of the obtained sequence. If T(Ω_M') ≥ T(Ω_M), then Ω_M' is selected for the next generation, otherwise, Ω_M is retained for the next generation. §.§ Termination The stopping criteria of any optimization algorithm depend primarily on two significant factors. The first factor is the number of generations or iterations, where if an algorithm attains the pre-defined number of generations, it terminates. The second factor is based on the optimal solution, where an algorithm terminates if it achieves the optimal solution to the fitness function. It is essential to note that the number of generations alone cannot serve as a feasible parameter to stop an algorithm since it may terminate without any improvement. As the objective is to generate highly random pseudo-random number sequences, therefore the proposed algorithm employs the optimal solution to the problem as the termination condition. In other words, the algorithm stops when the required sequence attains optimal entropy and optimal period, producing a highly random sequence that is well-suited for cryptographic applications. Thus, the PRNS obtained after the termination phase represents the optimized sequence of pseudo-random numbers characterized by optimal entropy and period. We denote the optimized PRNS by Ω_Z. § SECURITY ANALYSIS AND COMPARISON This section evaluates and compares the effectiveness of the ECGA through extensive experiments and security evaluations, which involve a variety of tests such as randomness analysis, entropy analysis, period analysis, Hurst exponent analysis, correlation analysis, key sensitivity analysis, and key space analysis. Additionally, we assess the efficiency of the ECGA in two different ways: 1) by comparing the initially generated sequences with their optimized sequences, and 2) by comparing the ECGA with the state-of-the-art generators <cit.>. For our experiments, we used MATLAB R2022b on a machine with an Intel Core m3-7Y30 @1.61 GHz and 8 GB of RAM. We generated 100 random sequences by selecting parameters at random, and the selected parameters are listed in <ref>. We used two elliptic curves recommended by NIST <cit.>, namely E_p,a,b and E_p',a',b' with prime numbers of 256 bits and 521 bits, respectively. The parameters for these elliptic curves are provided in <ref>. In addition, we used four different standard images with three distinct dimensions and five sets of integer-based triplets to generate these sequences. 
Our approach generates these 100 sequences by modifying one of the four parameters, namely (a) the elliptic curve, (b) the plain-image, (c) the size of the plain-image, and (d) the triplet (ϕ, ψ, φ) while keeping the others constant. §.§ Randomness analysis The NIST 800-22 test suite <cit.> is widely recognized as a suitable tool for evaluating the randomness of binary sequences. The suite consists of 15 tests and 174 sub-tests and requires, in general, at least 1 million bits to assess the randomness of a sequence. The suite calculates the probability of p_value for each sequence, and if p_value≥λ (or p_value < λ), the sequence is considered random (or non-random), where λ is a predefined threshold known as the significance level. For cryptographic purposes, λ is usually set between 0.001 and 0.01 <cit.>. Moreover, the proportion range of (1-λ) ± 3 √(λ (1-λ)/N) is considered acceptable, where N ≥ 1/ λ indicates the sample size (number of sequences). The NIST suite and its corresponding parameters are listed in <ref>, while a brief explanation of the suite can be found in <cit.>. To ensure the randomness of the ECGA, we tested numerous optimized random sequences generated by our generator using all the tests in the NIST 800-22 test suite. For the experiments, we used a significance level λ = 0.01, a sample size N ∈{ 100,800 }, and a sequence of length n = 10^6 bits. We converted 100 resultant sequences generated by the ECGA based on the randomly selected parameters listed in <ref> into their binary representation for the NIST analysis. Each generated sequence values lie in the range of [0, 2^8-1], resulting in a total of (8 × 100× 10^6) bits for the evaluation of the NIST test suite. We performed NIST tests on two data samples: firstly, by taking the first (1 × 10^8) bits from the total of (8 × 10^8) bits with N = 100, and then by taking the total (8 × 10^8) bits with N = 800. The results are presented in <ref>. The results indicate that for N=100, p_value≥ 0.01 for all tests except the block frequency test, for which p_value= 0.006, which is very close to the acceptable value of 0.01. Additionally, for N=100, the proportion of all the tests is greater than the lower bound of acceptable proportion, which is 0.96. Moreover, for N=800, p_value≥ 0.01 and proportion≥ 0.97 for each test included in the NIST test suite, where 0.97 is the lower bound of the acceptable proportion for a sample size of 800. As a result, the ECGA passed all tests and is capable of generating highly random sequences. We compared the results of the ECGA with the state-of-the-art generators <cit.> using the NIST test suite, the results are presented in <ref>, <ref>, and <ref> and are summarized as follows. 1) The results listed in <ref> are based on 100 different random sequences each of length 10^6 bits. The ECGA passed all 100 sequences, attaining a proportion of 1 for six different tests. On the other hand, generators <cit.>, <cit.>, and <cit.> attained proportions of 1 for three, five, and two tests, respectively. The ECGA also satisfied the acceptable proportion value 0.96 for all the tests listed in <ref>. However, the generator <cit.> failed the Random Excursions Test (RET) with a proportion of 0.72. The main objective of RET is to analyze the occurrence of a specific number of visits in a cumulative sum random walk. The purpose of conducting this test is to assess whether the number of visits to a particular state within a cycle deviates from the expected frequency for a random sequence. 
The test comprises a total of eight individual tests, each focusing on one specific state: -4, -3, -2, -1, +1, +2, +3, and +4. Thus, the ECGA not only passed all the tests but also attained the highest proportion for the maximum number of tests compared to the generators in <cit.>. 2) <ref> compares the results based on eight distinct sequences, each of length 10^6 bits. The ECGA passed the p_value criterion for each test, while generator <cit.> failed the Non-Overlapping Template Matching Test with a p_value of 0. Furthermore, our generator achieved p_value≥ 0.9 for five tests, compared to only one test for generator <cit.>. Hence, our generator outperforms <cit.>. 3) The comparison results of 30 random sequences with a total of (30 × 10^6) bits are illustrated in <ref>. Out of 41 tests, the ECGA attained a proportion value of 1 for 37 tests, while the generator <cit.> attained a proportion of 1 against 30 tests. Thus, the ECGA passed more tests with a proportion rate of 1 when compared with the generator <cit.>. Thus, the ECGA performed better than other existing PRNGs based on the NIST statistical test suite. The ECGA passed all the tests with the highest proportion rate for the maximum number of tests when compared to the generators <cit.>. Therefore, it can be concluded that the ECGA is suitable for various applications that require randomness and unpredictability. §.§ Entropy analysis The degree of uncertainty in a PRNG can be measured by its information entropy <cit.>, which is typically expressed in bits. A good PRNG should produce sequences with high entropy, meaning a greater degree of uncertainty. In this study, 100 sequences of length 10^6 with values ranging from 0 to 2^8-1 were generated and their entropy was calculated. The optimal entropy denoted by H_max, in our case, is 8. The average entropy of the 100 sequences before optimization was between 6.7702 and 7.1084, with an average of 6.9305. However, after optimization, all 100 sequences achieved their maximum entropy of 8, as demonstrated in <ref> and <ref>. These findings indicate that the ECGA can be very useful for cryptographic purposes, as it significantly increased the entropy of the generated sequences. Furthermore, the results listed in <ref> demonstrates that the ECGA outperformed several state-of-the-art PRNGs <cit.> in generating sequences with optimal entropy. §.§ Period analysis To ensure that a PRNG generates a sequence that is sufficiently random and has a long enough period for its intended use, it is crucial to perform the period test <cit.> on it. The period is a significant attribute of a sequence generated by a PRNG, as it indicates the length of the shortest repeating cycle present in the sequence, if it exists. Specifically, it is the smallest positive integer T for which the k-th element of the sequence matches the (k+T)-th element for all k ≥ 0. If the period of a sequence is equal to its length, then it is the optimal period of the sequence denoted by T_max. We evaluated the effectiveness of the ECGA by generating 100 random sequences, each containing 10^6 numbers, using the ECGA. We analyzed the strength of the sequences by measuring their period, both before and after optimization. Our goal is to determine how much the period increased after optimization. The results are presented in <ref>. Our findings indicate that, before optimization, the sequences have periods ranging from 192 to 998712, while after optimization, all sequences have an optimal period of 10^6. 
This suggests that our optimization algorithm has significantly increased the period of the random sequences, making them suitable for cryptographic applications. In other words, all the generated sequences have an optimal period and can be used for secure communication. §.§ Hurst exponent The Hurst exponent <cit.> is a statistical test, that determines the trend in data. It is denoted as H_E and falls between 0 and 1. There are three possible scenarios when calculating H_E: 1) If H_E=0.5, the data is random or independent, meaning there is no correlation between the current and previous values. 2) If 0.5<H_E≤1, the data is persistent, meaning if there is an increasing trend in the values, the next values will likely follow the increasing trend. 3) If 0≤H_E<0.5, the data is anti-persistent, meaning if there is an increasing trend in the values, the next values will likely follow the decreasing trend. A value closer to 0.5 indicates more random data. A good PRNG should have an H_E value close to 0.5. The rescaled range (R/S) analysis <cit.> is the most commonly used method to compute H_E. We have calculated the Hurst exponent (H_E) using the (R/S) method for 100 sequences, each of length 10^6. The results are presented in <ref> and shown in <ref>. The results indicate that before optimization, H_E ranged from 0.0665 to 0.2900, while after optimization, it ranged from 0.4916 to 0.5410. This demonstrates that after optimization, all generated sequences have H_E values very close to the ideal value of 0.5, indicating that the sequences are highly random. We have also shown the Hurst plots of the first five sequences in <ref>. We also compared the Hurst exponent of the ECGA with the state-of-the-art generators <cit.>. The results are presented in <ref>, which shows that the ECGA H_E values are closer to the ideal value of 0.5 compared to the generators <cit.>. Thus, the ECGA can generate highly random sequences when compared with the generators described in <cit.>. §.§ Correlation analysis The correlation coefficient <cit.> is a crucial measure for determining the level of similarity between two random sequences of the same length. Given two random sequences, Ω={ω_j}_j=1^l and Ω'={ω_j' }_j=1^l, with length l, the correlation coefficient (R) is calculated using the following formula: R(Ω, Ω') = ∑_j=1^l(ω_j-Ω)(ω_j'-Ω')√(∑_j=1^l(ω_j-Ω)^2)√(∑_j=1^l(ω_j'-Ω')^2). Here, Ω and Ω' represent the mean of Ω and Ω', respectively. The resulting R(Ω, Ω') is a value between -1 and 1. When R(Ω, Ω') is close to 0, the sequences are considered independent, whereas a value of 1 or -1 indicates a high level of dependency. We calculated R(Ω_i, Ω_j') for 100 sequences we generated, where i,j ∈{ 1, 2, ..., 100 } and i ≠ j. From our experiments, we determined that the minimum and average values of the optimized generated sequences for all pairs of i,j excluding when i=j are 0 and 0.0638, respectively. Since the average value is very close to the ideal value of 0, we conclude that the ECGA can generate highly independent sequences. §.§ Key sensitivity analysis Key sensitivity analysis enables the study of how minor changes to the input parameters or initial conditions can cause changes in the resulting output. If a PRNG has high sensitivity, even a slight change in the input can result in a significant difference in the output. This implies that a PRNG should exhibit a high level of sensitivity, even at the single-bit level <cit.>. 
To demonstrate the high sensitivity of the ECGA, we conducted an experiment in which we slightly altered the parameters of the ECGA. For our purpose, we generated two sequences, Ω and Ω', which represent the original sequence and a slightly varied sequence, respectively, to analyze the sensitivity of the ECGA. A sequence Ω is generated using: the parameters of EC E_p,a,b and the plain-image (a) which are defined in <ref>, r × s = 256 × 256, and (ϕ, ψ, φ) = (25,73,121). Subsequently, slight modifications to E_p,a,b, PI, r × s, and the triplet (ϕ,ψ,φ) resulted in the generation of four distinct sequences: Ω_E_p',a',b'', Ω_PI'', Ω_r' × s'', and Ω_(ϕ', ψ', φ')', respectively. The values of E_p',a',b' are listed in <ref>, while PI' represents the plain-image (b) shown in <ref>. The dimensions of r' × s' were set to 512 × 512, and the triplet (ϕ',ψ',φ') was set to (123,33,77). We analyzed the sensitivity of the ECGA using three different methods: 1) by graphical representation; 2) by computing the number of bit change rate (NBCR); and 3) by computing the correlation coefficient. §.§.§ Graphical representation We analyzed the sensitivity of the ECGA by visually displaying both the original sequence Ω and a slightly altered version of it, denoted as Ω'. The impact of various parameters is investigated and is presented in <ref>. Specifically, <ref>(a) depicts the effects of varying the parameters of the triplet (ϕ, ψ, φ), while <ref>(b) shows the impact of changing the parameters of EC. <ref>(c) and <ref>(d) demonstrate the effects of modifying the parameters PI and the size r × s, respectively. As illustrated in <ref>, a slight modification in any of the parameters E_p',a',b', PI', r' × s', and (ϕ', ψ', φ') resulted in a distinct sequence Ω' that differed from the original sequence Ω. Hence, it can be concluded that the ECGA is highly sensitive to input parameters. §.§.§ Number of bit change rate The number of bit change rate (NBCR) <cit.> is a common measure used to evaluate the sensitivity of a PRNG. To calculate NBCR for two randomly generated sequences Ω and Ω', we use the following equation: NBCR(Ω, Ω')= d_H(Ω,Ω')/n, where, d_H denotes the Hamming distance between Ω and Ω', and n represents the total number of bits in either sequence. An ideal value of NBCR is 50%, which means that the closer the value of NBCR is to 50%, the more sensitive the algorithm is. We have computed the NBCR of the original sequence Ω and a slightly changed sequence Ω' and the results are listed in <ref>. These results depict that the value of NBCR is very close to the optimal NBCR 50%, which indicates that the ECGA is highly sensitive to the input parameters and thus applicable for security applications. We also examined the NBCR of the proposed sequences both before and after the optimization process. The results, as shown in <ref>, indicate that NBCR values are within the intervals of [49.04, 51.19] and [49.99, 50.04] for the sequences before and after optimization, respectively. Additionally, we compared the NBCR of the ECGA with other PRNGs <cit.>, in terms of NBCR. <ref> shows that the NBCR value of the ECGA is identical to the optimal value of 50%, while the NBCR of the other PRNGs is closer to 50%. Therefore, the ECGA is more sensitive to input parameters when compared to the generators <cit.>. 
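For reference, the NBCR defined above reduces to the fraction of differing bits between the binary expansions of the two sequences. A minimal sketch on illustrative 8-bit data (not the experimental sequences) recovers the roughly 50% value expected for independent outputs:

import numpy as np

def nbcr(seq_a, seq_b):
    """d_H(Omega, Omega') / n for sequences of 8-bit symbols."""
    bits_a = np.unpackbits(np.asarray(seq_a, dtype=np.uint8)[:, None], axis=1)
    bits_b = np.unpackbits(np.asarray(seq_b, dtype=np.uint8)[:, None], axis=1)
    return np.mean(bits_a != bits_b)

rng = np.random.default_rng(1)
omega = rng.integers(0, 256, size=10**6, dtype=np.uint8)
omega_prime = rng.integers(0, 256, size=10**6, dtype=np.uint8)
print(100 * nbcr(omega, omega_prime))    # close to the optimal 50%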
§.§.§ Correlation coefficient The sensitivity of the ECGA is also evaluated by calculating the correlation coefficient R between two sequences, Ω and Ω', where Ω represents the original sequence while Ω' represents a slightly modified version of the original sequence. To ensure high sensitivity, R(Ω, Ω') should be close to 0. We analyzed the results of R(Ω, Ω_i') where i belongs to the set E_p',a',b', PI', r' × s', (ϕ', ψ', φ'), and presented in <ref>. The results show that R(Ω, Ω') values ranged between [0.0005, 0.1392] and [0.0004, 0.0017] before and after optimization, respectively. These results indicate that our designed generator is highly sensitive to its parameters, as both before and after optimization, the R values are very close to 0. Furthermore, there is a significant improvement in the R values after the optimization process. Moreover, we conducted a comparison between the sensitivity of the ECGA and the state-of-the-art generators <cit.>, using correlation coefficient as the metric. The outcomes of this comparison are shown in <ref>. The findings from the <ref> indicate that the ECGA is more sensitive than the PRNGs <cit.>. §.§ Key space analysis The evaluation of the security of a cryptographic algorithm is closely linked to the concept of key space. When a PRNG is used for cryptographic purposes, it is essential to analyze its key space, which is the set of possible keys that can be used to generate a sequence. A larger key space makes the algorithm more resistant to exhaustive attacks, thereby improving its resistance to cryptanalysis. To ensure the security of a cryptosystem, it is recommended that the key space should be at least 2^128 <cit.>. The ECGA is based on several parameters, including I, E_p,a,b, B_z, ϕ, ψ, and φ, as described in <ref>. The SHA-256 hash code of I, p, a, and b is also utilized. Additionally, the random sequence B_z has at least 256 bits, and the parameters ϕ, ψ, and φ range from 0 to 255. As a result, the key space of the ECGA is at least 2^256 × 5. In comparison with the recommended key space of 2^128, this is a very significant increase, indicating that the ECGA is capable of withstanding modern cryptanalysis due to its extremely large key space. We conducted a comparative analysis of the key space of the ECGA with that of the existing state-of-the-art generators <cit.>. The findings of our analysis are presented in <ref>. The results indicate that the ECGA has a superior key space in comparison to the PRNGs developed in <cit.>. § CONCLUSION We presented a novel method ECGA for the construction of an image-dependent pseudo-random number generator (IDPRNG) specifically for image-cryptographic applications. We addressed the limitations of traditional PRNGs by using a multi-objective genetic algorithm (MOGA) optimization method and integrating elliptic curves into our approach. The ECGA comprises two key phases. During the initial phase, we utilize pixels from the image along with the parameters of the elliptic curve to generate an initial sequence of random numbers. During the second phase, a genetic algorithm is utilized to enhance the generated sequence by maximizing a fitness function that is based on both the information entropy and period of the pseudo-random sequence. We conducted thorough experiments and security evaluations to assess the performance of the ECGA. 
These evaluations covered a wide range of tests, including randomness analysis, entropy analysis, period analysis, correlation analysis, Hurst exponent analysis, key sensitivity analysis, and key space analysis. Furthermore, we have compared the ECGA with the existing state-of-the-art generators, from which it is evident that the ECGA: exhibits superior performance relative to other state-of-the-art generators <cit.> as per the NIST statistical test suite; outperforms several state-of-the-art generators <cit.> in generating sequences with optimal entropy; produces better Hurst exponent results that are closer to the ideal value of 0.5 when compared with the generators <cit.>; is more sensitive to input parameters than the generators <cit.>; and has a superior key space in comparison to the generators <cit.>. Future work could explore further optimizations, evaluate the performance on large-scale data sets, and investigate the applicability of the IDPRNG in other domains requiring secure and unpredictable random number generation.
http://arxiv.org/abs/2307.05170v1
20230711110510
Neural Quantile Optimization for Edge-Cloud Computing
[ "Bin Du", "He Zhang", "Xiangle Cheng", "Lei Zhang" ]
cs.NI
[ "cs.NI", "cs.AI" ]
Journal of Class Files, Vol. 14, No. 8, August 2021 Shell et al.: A Sample Article Using IEEEtran.cls for IEEE Journals Neural Quantile Optimization for Edge-Cloud Computing Bin Du^1^1School of Mathematical Sciences, Peking University, Beijing 100871, China ([email protected]), He Zhang^2^2Beijing International Center for Mathematical Research, Peking University, Beijing 100871, China ([email protected]), Xiangle Cheng^3^3Huawei 2012 Network Technology Lab, Beijing 100095, China ([email protected]), Lei Zhang^4^4Beijing International Center for Mathematical Research, Center for Machine Learning Research, Center for Quantitative Biology, Peking University, Beijing 100871, China ([email protected]) August 12, 2023 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== We seek the best traffic allocation scheme for the edge-cloud computing network subject to constraints and burstable billing. First, we formulate a family of quantile-based integer programming problems for a fixed network topology with random parameters describing the traffic demands. Then, to overcome the difficulty caused by the discrete feature, we generalize the Gumbel-softmax reparameterization method to induce an unconstrained continuous optimization problem as a regularized continuation of the discrete problem. Finally, we introduce the Gumbel-softmax sampling neural network to solve optimization problems via unsupervised learning. The neural network structure reflects the edge-cloud computing topology and is trained to minimize the expectation of the cost function for unconstrained continuous optimization problems. The trained network works as an efficient traffic allocation scheme sampler, outperforming the random strategy in feasibility and cost value. Besides testing the quality of the output allocation scheme, we examine the generalization property of the network by increasing the time steps and the number of users. We also feed the solution to existing integer optimization solvers as initial conditions and verify the warm-starts can accelerate the short-time iteration process. The framework is general, and the decoupled feature of the random neural networks is adequate for practical implementations. Quantile Optimization, Cloud-Edge Traffic, Integer Programming, Gumbel-Softmax, Unsupervised Learning. § INTRODUCTION Cloud wide area networks (WANs) play a key role in enabling high-performance applications on the Internet <cit.>. Edge computing is a distributed computing technology that performs data storage and computing through devices near the data source. With edge computing technology, instead of transmitting raw data, we transmit data processed at the data source, thereby reducing the amount of data transmission, helping to reduce network latency, and providing a better user experience. Cloud computing is an Internet-based computing technology. The information of devices in the cloud-sharing network can be shared with other devices in need according to the cloud protocol. 
In recent years, with the advent of the Internet of Things(IoT) era<cit.>, the demand for decision-making based on real-time data of devices has increased rapidly, so we need to move from edge to cloud, which is the driving force for the development of cloud edge computing technology. Edge cloud computing has many application scenarios in the Internet of Things era, such as 5G, augmented reality, and vehicle-to-vehicle communications by Internet<cit.>. However, limited network bandwidth constraints on how to offload data to edge servers. Therefore, operators must consider designing better task scheduling strategies so that users can obtain better, low-latency, low-cost service guarantees. The network scheduling problem has been a research hot spot in network flow optimization in recent years. The greedy strategies are common approaches to finding allocation schemes for real-time task scheduling<cit.>. Another way to solve network scheduling is to mark the bandwidth allocation of tasks by binary variables. The network scheduling of edge computing can be modeled as a class of constrained integer programming problems with input parameters<cit.>. Characterized by their discrete search spaces, solving the constrained integer programming is NP-hard <cit.>. Many algorithms have been developed for integer programming problems, such as traditional greedy algorithm<cit.>, evolutionary algorithm <cit.>, exact algorithms represented by branch and bound and cutting plane methods<cit.>. Among them, based on the theory of precise algorithms, researchers have developed software, such as SCIP<cit.>, CPLEX<cit.>, and Gurobi<cit.>. However, for specific problems, the performance of the solver depends heavily on the initial guesses <cit.>. In practical applications, commercial solvers utilizing precise methods often require a significant amount of time to generate an initial solution, with some solutions being of suboptimal quality. To address this issue and expedite the generation of high-quality initial solutions for discrete edge-cloud traffic scheduling problems, we develop a neural network that employs sampling techniques to produce feasible solutions of superior quality rapidly. In this work, the pricing scheme of our edge-cloud traffic scheduling problem considers the 95^th percentile billing method commonly used in industry standards,<cit.> and the different traffic requirements of several edge-devices compete for limited bandwidth resource allocation. We model the allocation selection as a class of constrained integer programming problems with the traffic demand and the link capacities as input parameters. Since the objective function is a piecewise linear function of the billable bandwidth depending on the 95^th percentile of the bandwidth distribution over a monthly time period, the resulting problem belongs to the field of quantile optimization, which adds extra computational complexity to the optimization problems but benefits the stability <cit.>. We refine the WAN Egress Traffic Allocation Model proposed in <cit.> by considering more complicated network topology and constraints. In particular, we split the data traffic according to the bandwidth ratio of the selected peering link, transforming a mixed-integer programming problem into an integer programming problem. The network topology in our model has two layers, and the objective function also includes the cost generated by the link between the hub and internet service providers, which couples the edges with each other. 
Thus, the resulting constrained optimization problem also describes the competition among the traffic demands from different edges. To the authors' knowledge, implementing neural networks to help solve constrained integer or mixed integer programming is still an ongoing research area <cit.>. Motivated by the generalization property of the neural networks, these methods interpret the problem parameters as input and represent the optimizer using a neural network trained by the objective function. Even though exact solvers aim for global optimality, finding good feasible solutions fast is at least as important, especially in network scheduling problems with limited time. Thus, the neural network representation provides an end-to-end scheme for general problem parameters, which meets the requirements in edge-cloud computing. For integer and mixed integer programming problems, the discrepancy between the discrete feature of the decision variables and the gradient-based neural network training algorithms introduces extra difficulties. To resolve the issue, we consider the decision variable a random variable modeled by a random neural network. In other words, the random neural network works as a sampler, which generates candidates of the decision variables from the categorical probability distribution controlled by the network parameters. We adopt the Gumbel-Softmax reparameterization trick<cit.>, which introduces smooth approximation to non-differentiable layers of the random neural network so that the back-propagation algorithm can be implemented in training. §.§ Contributions and Organization of the Paper We make the following main contributions. * We develop models for the network scheduling problem based on the 95^th percentile billing strategy. The resulting constrained integer programming problems describe more complicated WAN systems with coupled edge traffic. * We creatively apply the Gumbel-Softmax reparameterization technique to connect discrete integer programming with continuous optimization to close the gap between the discrete domain of the decision variables and the continuous neural network output. The methodology can be applied to general integer programming problems with constraints. * We propose a sampling network to solve the integer programming problem. In particular, the neural network architecture is constructed to ensure the equality constraints are satisfied, while the inequality constraints are taken care of by the penalty terms in the soft-loss objective function in unsupervised learning. The trained network can generate feasible allocation schemes for a family of edge-cloud traffic scheduling problems of the same topology. * We test the proposed sampling network's performance in various aspects, including the generalization property. Based on the numerical results, our sampling network significantly outperforms random baseline and can accelerate commercial solvers by treating the samples as warm starts. The paper is organized as follows. Section <ref> covers the Mathematical preliminaries of our work. We review the Gumbel-Softmax reparameterization trick and introduce a general framework for approximating an integer programming problem by a series of continuous problems so that the later implementation of the random neural network is possible. The target constraint integer programming problem that models the edge-cloud network scheduling is introduced in Section <ref>. 
We discuss how we use random neural networks to generate warm starts for the target problems in Section <ref>, while Section <ref> includes the corresponding numerical results. We close the paper with a brief summary and discussion. § THE GUMBEL-SOFTMAX REPARAMETERIZATION This section introduces a general framework for transforming integer programming problems into continuous optimization problems. In particular, given an integer optimization problem, a series of unconstrained continuous optimization problems are introduced whose minimizers approximate the underlying categorical solution asymptotically. We first absorb the constraints by the method of Lagrange multipliers<cit.>. Afterward, considering the decision variable as a random variable, we rewrite the cost as a function of the categorical distribution. Finally, we apply the Gumbel-Softmax reparameterization trick<cit.> as a sampling process whose parameter gradients can be easily computed to implement the backpropagation algorithm. Consider a family of integer programming problems min_y∈Ω f(y;θ), s.t. g(y;θ) ≤ 0, h(y;θ) = 0, where θ∈Θ denotes the problem data and y corresponds to the decision variable. For any fixed parameter θ, the constrained programming problem (<ref>) poses the task of minimizing the objective function f, as a function of the decision variable y, subject to the condition that the constraints are satisfied. The domain of the decision variable, denoted by Ω, is a finite discrete set encoded as a set of d-dimensional one-hot vectors lying on the corners of the (d-1)-dimensional simplex Δ^d-1<cit.>. In particular, denote Ω = {y_1,y_2, …, y_d}, where y_i is a one-hot vector whose i^th component is 1 and all others are zero. The equality and inequality constraints are represented by vector-valued functions h and g (potentially nonlinear and non-convex) in (<ref>), respectively. We denote Ω_θ as the feasible set of the decision variable, that is, Ω_θ = {y ∈Ω | g(y;θ)≤ 0, h(y;θ) = 0}. For simplicity, we assume that Ω_θ is nonempty for all θ∈Θ, and for any fixed problem parameter θ∈Θ, the objective function f and each component of g and h in (<ref>) are smooth functions of y in Δ^d-1. Since d is the cardinality of the decision variable domain, it suffers from the curse of dimensionality in general. For example, if we consider the 0-1 knapsack problem<cit.> of n objects, then the decision variable y is encoded as a 2^n-dimensional one-hot vector, while the parameter θ stores the information of the objects' weights and values. To begin with, by viewing the constraints in (<ref>) as a form of regularization<cit.>, we formulate the soft-loss function f_soft = f(y;θ) + λ_g‖ReLU(g(y;θ))‖^2_2 + λ_h‖ h(y; θ)‖_2^2, where λ_g, λ_h>0. The composite loss in (<ref>) contains the objective and two penalty terms representing the inequality and equality constraint violations. In general, we cannot apply gradient-based methods to find the optimizer of the soft-loss function f_soft(y;θ) due to the discrete nature of Ω. We approximate the integer optimization problem with a series of continuous problems to overcome the challenge. We introduce a random variable Y on Ω with a location parameter<cit.> α = (α_1, α_2, …, α_d) ∈ (0, +∞)^d satisfying ℙ(Y = y_i) = α_i/‖α‖_1, ‖α‖_1 = ∑_i=1^d |α_i|, that is, after a normalization, α/‖α‖_1 corresponds to the probability mass vector of the random variable Y.
Here, we do not impose ‖α‖_1 = 1 on α since in Section <ref>, we connect the location parameter to the output of a random neural network, which is positive but not necessarily normalized. We have the following naive equivalence min_y∈Ω f_soft(y; θ) ⇔min_α∈ (0,+∞)^d𝔼[ f_soft(Y; θ)], where 𝔼[·] denotes the expectation with respect to (Ω, ℙ), that is, 𝔼[ f_soft(Y; θ)] = ‖α‖_1^-1∑_i=1^dα_i f_soft(y_i; θ). Thus, even though 𝔼[ f_soft(Y; θ)] is an explicit function of the continuous variable α, its evaluation requires knowing the value of f(y;θ) over the entire Ω, that is, the continuous optimization problem in (<ref>) shares the same computational complexity as the equivalent integer optimization problem. To avoid visiting the value of f_soft(y;θ) over the entire Ω, we consider an empirical estimator of the expectation in (<ref>) 𝔼[ f_soft(Y; θ)] ≈1/N∑_k=1^N f_soft(Y^(k); θ), where {Y^(k)}_k=1^N are i.i.d. samples of the random variable Y with location parameter α. We can also interpret the RHS of (<ref>) as the empirical loss function or empirical risk function<cit.>. The Gumbel-Softmax reparameterization trick<cit.> provides a way to sample Y^(k) based on the location parameter α. For τ >0, we define the concrete random variable X = (X_1, X_2, …, X_d)∈Δ^d-1, given by X_k = exp( (log(α_k) + G_k)/τ)/∑_i=1^dexp( (log(α_i) + G_i)/τ), where G_i∼Gumbel(0,1). The Gumbel(0, 1) distribution can be sampled using inverse transform sampling by drawing u_i from the uniform distribution on [0,1] and taking g_i = -log(-log(u_i)). We review two useful properties of the concrete random variables<cit.>. Let X ∼Concrete(α, τ) with location parameter α∈ (0, +∞)^d and temperature τ∈ (0, +∞), then for k= 1,2,…, d, we have
* (Rounding) ℙ(X_k > X_i, ∀ i≠k) = α_k/‖α‖_1,
* (Zero temperature) ℙ(lim_τ→ 0^+ X_k=1) = α_k/‖α‖_1.
Compared with (<ref>), Proposition <ref> confirms that, after rounding, the sample of the concrete random variable X ∼Concrete(α, τ) follows the same distribution as Y, and the asymptotic behavior of the concrete random variable as the temperature τ goes to 0^+ is the same as rounding. Thus, let {X^(k)}_k=1^N be i.i.d. samples of the concrete random variable X ∼Concrete(α, τ), and we can rewrite the approximation in (<ref>) as 𝔼[ f_soft(Y; θ)] ≈1/N∑_k=1^N f_soft([X^(k)]; θ), where [X^(k)] denotes the rounding of X^(k), that is, [X^(k)] = y_j∈Ω⇔ X^(k)_j > X^(k)_i, ∀ i≠ j. Notice that the domain Ω corresponds to the corners of Δ^d-1, while X is a random variable defined on Δ^d-1. Therefore, from a geometric perspective, the rounding procedure seeks the corner of Δ^d-1 nearest to the sample X^(k). The reparameterization in (<ref>) connects the objective function with the samples X^(k), which smoothly depend on the location parameter α according to (<ref>). We further drop the non-differentiable rounding procedure in (<ref>) and formulate the following unconstrained continuous optimization problem as the main result of this section. (Gumbel-Softmax reparameterization) Consider the integer programming problem in (<ref>) with soft-loss function (<ref>), given temperature τ>0 and i.i.d. samples of Concrete(α, τ) denoted as {X^(k)}_k=1^N, the corresponding Gumbel-Softmax reparameterization problem is min_α∈ (0, +∞)^df̂_soft^N,τ(α;θ), f̂_soft^N,τ = 1/N∑_k=1^N f_soft(X^(k); θ). Moreover, if α̂^N, τ is a minimizer of the problem (<ref>), then we call the rounding of the normalized minimizer ŷ^N,τ =[‖α̂^N, τ‖^-1_1α̂^N, τ] the Gumbel-Softmax approximated solution to the original problem in (<ref>).
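As a concrete illustration of the sampling step above, drawing from Concrete(α, τ) requires only the Gumbel inverse transform and a softmax; the following minimal NumPy sketch (function and variable names are ours, not part of our implementation) produces a sample on Δ^d-1 and its rounding to a one-hot corner.

    import numpy as np

    def sample_concrete(alpha, tau, rng=np.random.default_rng()):
        # alpha: unnormalized location parameters in (0, inf)^d; tau: temperature
        u = rng.uniform(size=len(alpha))
        g = -np.log(-np.log(u))                  # Gumbel(0, 1) via inverse transform sampling
        logits = (np.log(alpha) + g) / tau
        logits -= logits.max()                   # for numerical stability
        x = np.exp(logits)
        return x / x.sum()                       # a point on the simplex

    x = sample_concrete(np.array([2.0, 1.0, 0.5]), tau=0.5)
    y = np.eye(len(x))[np.argmax(x)]             # rounding [X] to a one-hot vector

By the rounding property above, repeating this procedure yields one-hot samples distributed according to α/‖α‖_1.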
As suggested by the zero temperature limit in Proposition <ref>, the objective function f̂_soft^N,τ in (<ref>) converges to the empirical estimator in (<ref>) in probability as τ goes to 0^+. While f̂_soft^N,τ is differentiable with respect to α, it is not identical to the empirical estimator in (<ref>) for non-zero temperature. In other words, there is a trade-off between small temperatures, where samples are close to one-hot but the standard deviation of the gradients is large, and large temperatures, where samples are smooth but the standard deviation of the gradients is small. In practice, we shall tune the temperature τ from high to a small but non-zero value <cit.>. Since the objective function of the Gumbel-Softmax reparameterization problem in (<ref>) is random, the approximated solution in (<ref>) should be viewed as a random variable as well. At first glance, the randomness of f̂_soft^N,τ is inconsistent with the deterministic nature of the original integer programming problem in (<ref>). However, in practice, the uncertainty of the approximated solution in (<ref>) is beneficial in several aspects. For example, since we are solving a series of unconstrained approximated problems based on the soft-loss function in (<ref>), the feasibility of the solution in (<ref>) is not guaranteed. Thus, the randomness allows us to generate samples of the approximated solution and select the one that produces the lowest objective function value among all the samples in the feasible domain. We have to admit that our framework suffers from the curse of dimensionality. As the decision variable of the reparameterized problem (<ref>), the variable α is of the same dimension as the one-hot vector that encodes the decision variable y in the original integer programming problem (<ref>), which grows exponentially as the problem dimension increases. The dimensionality issue in our applications makes it impossible to directly model α by a neural network. In Section <ref>, we overcome this issue by decoupling. § PROBLEM FORMULATION In this section, we formalize the offline version of the cloud network bandwidth cost model with a WAN of fixed topology in Figure <ref> containing a single hub and N edges. The traffic demands are assumed to be randomly sampled in advance. Each edge has K different traffic types and connects to EL edge links. For simplicity, we adopt a constant value of 8 for the parameter K and 4 for the parameter EL, but our method can be extended to arbitrary network sizes. Edge links are billed individually according to their percentile utilization. Utilization-based, per-megabit billing is the industry standard for paid peering and transit Internet Service Provider (ISP) contracts<cit.> and is the billing model considered in our paper. To determine the billable bandwidth from the network utilization, ISPs measure the average utilization of peering links in five-minute intervals in both inbound and outbound directions, denoted by {f̅^t}_t=1^T and {f^t}_t=1^T, respectively, where T corresponds to the total number of five-minute intervals in a single billing cycle, and the billable bandwidth of the billing cycle is z = max{ g_95({f̅^t}_t=1^T), g_95({f^t}_t=1^T) }, where g_95 denotes the 95^th percentile function, which is the same as the k-max function with k = T/20. Let E = ⋃_n=1^N E_n, E_n = {e_n,1, e_n,2,e_n,3, e_n,4}, where E_n represents the four peering links physically connected to edge-n. We use L = {ℓ_1, ℓ_2, ℓ_3, ℓ_4} to denote the four peering links between the ISPs and the Hub.
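For concreteness, the billable bandwidth of a single peering link can be computed from its five-minute utilization samples as in the following sketch, which implements the k-max convention with k = T/20 stated above (the integer division and the example numbers are our own illustrative choices).

    import numpy as np

    def billable_bandwidth(inbound, outbound):
        def g95(samples):
            ordered = np.sort(np.asarray(samples, dtype=float))[::-1]   # descending order
            k = max(1, len(ordered) // 20)       # k = T/20: the top 5% of slots are free
            return ordered[k - 1]                # k-th largest five-minute sample
        return max(g95(inbound), g95(outbound))

    T = 48
    rng = np.random.default_rng(0)
    z = billable_bandwidth(rng.uniform(0, 10, T), rng.uniform(0, 10, T))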
Given the traffic demands within a billing cycle, we aim to minimize the sum of bandwidth costs on peering links E and L. We need a series of concepts and notations to formulate the corresponding mathematical model. Decision variables. In each five-minute interval, for edge-n, the traffic allocation scheme assigns the network flow from each type of traffic demand to at least one peering link in E_n. Let Y = { y_e,k,n^t} be the set of 0-1 decision variables, where y_e,k,n^t = 1 means that at time slot t, the type-k traffic demand on edge-n is assigned to peering link e∈ E_n. The dimension of the decision variable Y is K*EL*N*T. Objective function. Let z_e, e∈ E and z_ℓ_i, ℓ_i∈ L be the billable bandwidth of all peering links defined by the 95^th percentile of the snapshots of the utilization as in (<ref>). The cost incurred on each link is the product of the peering link rate and the amount of the billable bandwidth that exceeds the basic link capacity, that is, let σ(x) = max{x, 0}, and we have f = ∑_e∈ E r_eσ ( z_e - c_e^b) + ∑_i=1^4 r_ℓ_iσ ( z_ℓ_i - c_ℓ_i^b), where c_e^b and c_ℓ_i^b denote the basic link capacities of the link e∈ E and ℓ_i∈ L, respectively. In actual usage, the customer commits to paying a fixed rent for each billing cycle to the ISP, which is not shown in the cost function in (<ref>) since it does not affect the minimization problem. The split ratio of the traffic volume. At each edge, when a traffic demand is assigned to more than one peering link, the demand is split, and the split ratio is proportional to the basic link capacities of the connected peering links. In particular, let (d̅_k,n^t, d_k,n^t) denote the averaged type-k inbound/outbound traffic demand on edge n at time slot t, then the inbound/outbound traffic carried by peering link e∈ E_n are given by x̅_e,k,n^t = y_e,k,n^tc_e^b/∑_e∈ E_n y_e,k,n^tc_e^b·d̅_k,n^t, x_e,k,n^t = y_e,k,n^tc_e^b/∑_e∈ E_n y_e,k,n^tc_e^b·d_k,n^t. The splitting in (<ref>) naturally satisfies the traffic demand constraints, that is, d̅_k,n^t = ∑_e∈ E_nx̅_e,k,n^t, d_k,n^t = ∑_e∈ E_nx_e,k,n^t. Here, the averaged traffic demand (d̅_k,n^t, d_k,n^t) and the link capacities c_e^b are part of the problem data (θ in (<ref>)), which are available under the offline setup. Summing x̅_e,k,n^t and x_e,k,n^t over the different types of traffic demands, we obtain the inbound/outbound traffic on the edge links f̅^t_e,n = ∑_k=1^8x̅^t_e, k,n, f_e,n^t = ∑_k=1^8x^t_e, k,n, ∀ e∈ E_n, which define the billable bandwidth z_e via (<ref>). As for the traffic on the ISP links, we further need to sum f̅^t_e,n and f^t_e,n over n, respectively, which leads to the billable bandwidth z_ℓ_i. The corresponding formulas are collected in (<ref>). Constraints. From the previous discussion, we have seen that the traffic allocations are subject to constraints on link capacities and traffic demand. Besides, the service level agreement (SLA), e.g., <cit.>, introduces another set of constraints, which identify whether a peering link satisfies the requirement of the traffic demand. In reality, since the environment of the WAN is not static, such constraints change over time. In the model, we simplify the SLA constraints into the concept of admissible peering links, denoted by E_k,n. As a subset of E_n, E_k,n contains the only links to which the type-k demand on edge-n can be assigned. Under this assumption, E_k,n is part of the problem data given in advance. Expressed in terms of equality constraints, we have y_e,k,n^t = 0, ∀ e∉E_k,n, which hold for all time slots t.
For the link capacities, we introduce the basic link capacities, which appear in the objective function (<ref>) and the split ratio (<ref>). We shall also introduce upper bounds for the billable bandwidth and the link utilization, denoted by c_e^m (maximum link capacity) and c_e^M (physical maximum link capacity), respectively. Unlike the physical maximum link capacity determined by the infrastructure, the ISP chooses the maximum link capacity to set the valid interval for the billing rates r_e and r_ℓ_i in the cost (<ref>). To summarize our discussion, we introduce the following integer programming problem, which models the network scheduling problem for edge-cloud computing. min_Y f s.t. ∑_e∈ E_k,n y_e,k,n^t≥ 1, ∀ e∈ E_n, ∀ n,t; f̅^t_e,n = ∑_k=1^8x̅^t_e, k,n, f_e,n^t = ∑_k=1^8x^t_e, k,n, f̅^t_e,n, f_e,n^t≤ c_e^M, ∀ e∈ E_n, ∀ n,t; X̅_ℓ_i^t = ∑_n=1^Nf̅^t_e_n,i,n, X_ℓ_i^t = ∑_n=1^Nf_e_n,i,n^t, X̅_ℓ_i^t, X_ℓ_i^t≤ c_ℓ_i^M, i = 1,2,3,4; z_e = max{g_95({f̅^t_e,n}_t=1^T), g_95({f_e,n^t}_t=1^T) }, z_ℓ_i = max{g_95({X̅_ℓ_i^t}_t=1^T), g_95({X_ℓ_i^t}_t=1^T) }, z_e≤ c_e^m , ∀ e ∈ E_n, n = 1,2,…, N; z_ℓ_i≤ c_ℓ_i^m, i = 1,2,3,4, where x̅_e,k,n^t, x_e,k,n^t are given by (<ref>) and the objective function f is defined in (<ref>). The variables in (<ref>) are summarized in Table <ref>. We want to emphasize that although the objective function f in (<ref>) is the sum of the cost on each link, the problem (<ref>) cannot be decomposed into low-dimensional sub-problems since the billable bandwidth of the ISP link z_ℓ_i non-linearly depends on the entire edge traffic. Nevertheless, we can linearize the problem (<ref>) by adding intermediate variables to the system (see Appendix <ref> for the details). Although we stick to the form in (<ref>) in the later discussion, the linear representation is useful for conventional methods. Although problem (<ref>) corresponds to the offline version of the network scheduling model in the sense that the problem data are given in advance, we should think of (<ref>) as a family of integer programming problems of the form in (<ref>) parameterized by the problem data. We classify the problem data into static and dynamic parameters as in Table <ref>. Here, the static parameters, e.g., the link capacities, are physically determined once the network's topology (Figure <ref>) is chosen. In comparison, the traffic demands, decided by the users, are random and constantly changing over time. In other words, we interpret the static parameters as global parameters among the family of problems, while the dynamic parameters distinguish the problems within the family. Later in Section <ref> and Appendix <ref>, we follow this interpretation and generate test problems by randomly sampling the dynamic parameters subject to fixed static parameter values. Another key insight is that link utilization during the top 5% of time slots does not contribute to the cost. This means that the top 5% of traffic in any billing month is free if it does not exceed the physical maximum link capacity, which inspires the approach in <cit.>. Similar to works like <cit.>, instead of searching for the best traffic allocation scheme precisely, i.e., the global minimizer of (<ref>), our goal is to find an “end-to-end” map between the problem data (Table <ref>) and the allocation scheme. Such a map should be capable of efficiently generating allocation schemes of reasonable quality for problems in the family.
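To make the model concrete, the sketch below evaluates the edge-link part of the objective f for one edge and a given 0-1 assignment, combining the split ratio, the per-slot link traffic, and the 95^th percentile billing; the array layout and names are our own illustration (the ISP-link term and the feasibility checks follow the same pattern).

    import numpy as np

    def edge_link_cost(Y, d_in, d_out, cb, r):
        # Y[k, e, t] in {0, 1}: type-k demand assigned to edge link e at slot t
        # d_in/d_out[k, t]: averaged demands; cb[e], r[e]: basic capacity and link rate
        K, E, T = Y.shape
        weighted = Y * cb[None, :, None]
        denom = weighted.sum(axis=1, keepdims=True)
        share = np.where(denom > 0, weighted / denom, 0.0)    # split ratio as in the text
        f_in = (share * d_in[:, None, :]).sum(axis=0)         # inbound traffic, shape (E, T)
        f_out = (share * d_out[:, None, :]).sum(axis=0)
        k = max(1, T // 20)
        g95 = lambda rows: np.sort(rows, axis=1)[:, ::-1][:, k - 1]
        z = np.maximum(g95(f_in), g95(f_out))                 # billable bandwidth z_e
        return float(np.sum(r * np.maximum(z - cb, 0.0)))     # r_e * sigma(z_e - c_e^b)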
§ NEURAL NETWORK ARCHITECTURE AND ALGORITHM We now introduce the Gumbel-Softmax Sampling Network (GSSN) designed for the integer programming problem. In the following sections, we sequentially discuss the data preprocessing, neural network architecture, and algorithmic components of the GSSN. In the discussion, we consider the “Unfold” and “Reshape” operations to convert matrix-valued data to vector-valued data and vice versa, respectively. For example, we say a column vector B∈ℝ^mn is an unfolding of a matrix A∈ℝ^m× n, denoted by B = Unfold(A), if B consists of all the column vectors of A in row order. Correspondingly, the inverse operation, reshaping the m × n dimensional column vector B into a matrix A of size (m,n), is denoted as A = Reshape(B,n). §.§ The Neural Network Input This subsection provides a comprehensive account of data preprocessing for the GSSN. The GSSN aims to map the local information of inbound and outbound traffic, (d̅^t_k,n,d^t_k,n), to a probability distribution on the space of traffic allocation schemes. To achieve this objective, we process all types of inbound and outbound traffic for all users simultaneously in parallel. To illustrate, we consider the k^th type of traffic (d̅^t_k,n,d^t_k,n), of user n at time t as the target and assume their set of admissible peering links as E_k,n = {e_1, e_2}. In the subsequent sections, we expound on the formation of input matrices I^t_k,n that corresponds to (d̅^t_k,n,d^t_k,n). In cases where traffic demand is not explicitly distinguished as inbound or outbound, the symbol d^t_k,n may denote traffic demand. This symbol may be substituted with d̅^t_k,n or d^t_k,n as appropriate. Every possible way of selecting {y_e_j,k,n^t}_j = 1,2,3,4 corresponds one-to-one with the non-empty subsets of E_k,n = {e_1, e_2}. Therefore, there are at most three available allocation schemes for the demand d^t_k,n . To capture the potential values of {y_e_j,k,n^t}_j = 1,2,3,4, we use the matrix YP_d_k,n^t, where YP_d_k,n^t[i,j] denotes the value of y_e_j,k,n^t for the j^th edge in the i^th option. YP_d_k,n^t = [ 1 0 0 0; 0 1 0 0; 1 1 0 0; 0 0 0 0; ⋮ ⋮ ⋮ ⋮; ], which is a 0-1 matrix of size (15,4). The first three rows of YP_d_k,n^t represent the available allocation schemes, while the elements in rows 4 to 15 are all zero. The purpose of this padding operation is to unify the input matrices of different traffic demands for different users. That is, if there are s optional edges in E_k,n corresponding to d^t_k,n, then the first 2^s-1 rows of the corresponding YP_d_k,n^t matrix represent the optional peering links, while rows 2^s to P=15 represent a matrix of all zeros. In accordance with the split ratio of traffic volume definition provided in section <ref>, we can represent the traffic split ratio matrix W_d_k,n, which corresponds to YP_d_k,n^t , as follows. Specifically, the element W_d_k,n[i,j] indicates the split ratio of traffic volume that should be adhered to on the j^th edge for the i^th allocation scheme of traffic demand d_k,n,t. W_d_k,n = [ 1 0 0 0; 0 1 0 0; c_e_1^b/∑_i=1,2c_e_i^b c_e_2^b/∑_i=1,2c_e_i^b 0 0; 0 0 0 0; ⋮ ⋮ ⋮ ⋮; ]∈ℝ^P× 4 It is worth noting that every element in the traffic split ratio matrix W_d_k,n is derived from the static parameter c_e^b. Consequently, W_d_k,n is a quantity that remains invariant with respect to variations in time t and demand d_k,n,t. This also explains why the subscript of W_d_k,n does not contain the parameter t. 
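For concreteness, the padded matrices YP and W can be generated mechanically by enumerating the non-empty subsets of the admissible link set, as in the following sketch (the helper name and its arguments are our own; rows beyond 2^s - 1 remain zero, as described above).

    import itertools
    import numpy as np

    def build_YP_W(admissible, cb, EL=4, P=15):
        # admissible: indices of the links in E_{k,n}; cb[j]: basic capacity of link j
        YP, W = np.zeros((P, EL)), np.zeros((P, EL))
        row = 0
        for size in range(1, len(admissible) + 1):
            for subset in itertools.combinations(admissible, size):
                YP[row, list(subset)] = 1.0
                total = sum(cb[j] for j in subset)
                for j in subset:
                    W[row, j] = cb[j] / total   # split proportional to basic capacities
                row += 1
        return YP, W

    YP, W = build_YP_W(admissible=[0, 1], cb=[3.0, 1.0, 2.0, 4.0])

For the two-link example above, the first three rows of YP reproduce the three allocation options, and the third row of W carries the proportional split ratios.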
To obtain the allocation of inbound and outbound traffic (d̅_k,n^t, d_k,n^t) on the selected peering links under the selection scheme, we can perform the following calculations: G_d̅_k,n^t = d̅_k,n^t * W_d_k,n; G_d_k,n^t = d_k,n^t * W_d_k,n. G_d^t_k,n[i,j] represents the amount of traffic allocated to the j^th edge in the i^th allocation scheme for the traffic demand d^t_k,n. The final input matrix I_k,n,t can be obtained through the following processing. G̅_k,n,t = Unfold(G_d̅_k,n^t), G_k,n,t = Unfold(G_d_k,n^t) I_k,n,t = [ G̅_k,n,t[1] G_k,n,t[1] c_e_1^b c_e_1^m; G̅_k,n,t[2] G_k,n,t[2] c_e_2^b c_e_2^m; G̅_k,n,t[3] G_k,n,t[3] c_e_3^b c_e_3^m; G̅_k,n,t[4] G_k,n,t[4] c_e_4^b c_e_4^m; … … … …; G̅_k,n,t[60] G_k,n,t[60] c_e_4^b c_e_4^m; ]∈ℝ^(4*P) × 4 The row index i of I_k,n,t (<ref>) corresponds to the j^th edge in the p^th scheme, satisfying the following equation: i = 4*p + j, p∈ℕ^+, j∈{0,1,2,3} Specifically, I_k,n,t[i,1] and I_k,n,t[i,2] denote the allocation of inbound/outbound traffic demand of flow d_k,n^t on the j^th edge in the p^th allocation scheme. On the other hand, I_k,n,t[i,3] and I_k,n,t[i,4] represent the capacity information (c_e_k,n,j^b,c_e_k,n,j^m) of the j^th edge. The final input matrix I for problem (<ref>) is obtained by concatenating the sub-matrices of I_k,n,t in ascending order of traffic type k, user n, and time slot t. §.§ Neural Network Architecture The GSSN architecture comprises three encoders: the link encoder, the program encoder, and the ranking autoencoder. The edge parameters of the problem are continuously encoded and compressed by one of the encoders to derive new features. These features are then reshaped into a new matrix that serves as the input for the subsequent encoder. In the end, the three encoders are responsible for encoding the traffic information of each type into a probability distribution function of the feasible inbound and outbound schemes. The activation function used by GSSN is ReLU6(x)≜min{max{0,x},6}. The Link Encoder: The link encoder comprises a fully-connected, feed-forward Neural Network(FNN) with three hidden layers and 8 ReLu6 activation functions. Upon input, data I is processed by a link encoder, yielding a column vector S of dimensions T*N*K*P. Each row of S conveys the compressed encoding characteristics of the k^th traffic type for user n at time t on each peering link under scheme p. Define the matrix S^' = Reshape(S,4). Each row of S^' represents the encoding information of the scheme p selected by the k^th type of user traffic n at time t. The Program Encoder: The program encoder employs a fully connected neural network architecture identical to the link encoder. The input matrix, denoted as S, undergoes encoding by the program encoder, compressing each 4-dimensional feature vector into a one-dimensional scalar. The output is a matrix V with dimensions (T*N*K*P,1). Subsequently, V is reshaped into V^'= Reshape(V). The Ranking AutoEncoder: The Ranking AutoEncoder constitutes a prototypical autoencoder architecture comprising an encoder and a decoder. The components of these two neural network structures exhibit symmetry, encompassing 6 layers containing 58 ReLu6 activation functions. The Ranking AutoEncoder operates to map all schemes of the k^th type of traffic of user n at time t to a specific probability distribution function. This mapping enables subsequent Gumbel-Softmax sampling. After inputting V^' into the Ranking AutoEncoder, the output is a certain probability distribution α. 
α is a matrix with dimensions (T*N*K, P), where each row represents the probability distribution over all schemes of the k^th type of traffic of user n at time t. Masked Gumbel-Softmax Sampling: Upon encoding via the three preceding encoders, the probability distribution α over the schemes is obtained. It should be noted that invalid schemes are also included as input to maintain constant dimensions of the encoder's input matrix. To prevent Gumbel-Softmax sampling from selecting invalid schemes, α is multiplied by a mask of identical dimensions. The mask assigns a probability of 0 to elements in α corresponding to invalid schemes. As discussed in Section <ref>, the range of α values is [0,∞). After applying the mask, the masked distribution is obtained. The Gumbel-Softmax then performs a maximum-value operation to select the index of the one-hot vector element equal to 1, precluding the selection of invalid schemes. For the temperature hyperparameter τ of the Gumbel-Softmax in Section <ref>, we adopt a schedule in which τ decreases linearly and monotonically as the epoch increases. At this point, we obtain the selection scheme for the inbound and outbound transmission lines for the different traffic types of user n at time t. We can then substitute into Eq. (<ref>) to calculate the loss function and perform gradient-based training. §.§ How to Use GSSN The GSSN training process is outlined in Algorithm <ref>. After training the GSSN, the information matrix I of a test problem can be input. By feeding in the matrix N_sampling times, the GSSN samples N_sampling different scheme selections. We choose the feasible scheme with the smallest objective function value among them as the final output of the GSSN algorithm. The complete GSSN network architecture can be seen in Figure <ref>. We set the initial learning rate of our neural network training to 1e-4, and the parameter updates are performed using the Adam algorithm<cit.>. As for the update of the hyperparameter τ in Gumbel-Softmax sampling, we adopt a linear annealing method based on the reference literature<cit.>. § NUMERICAL RESULTS This section provides numerical results from the GSSN. The results include comparisons between the GSSN sampling method and random sampling, performance comparisons between GSSN warmup and Gurobi solving, and the generalization performance of the GSSN. §.§ Problem Settings All experiments are conducted on an Intel Core i7 laptop with 32 GB RAM and accelerated using an 8 GB RTX 3070 GPU. We generate 200 problems with N = 10 edges and T = 48 total time slots. In this study, 100 problems are randomly selected as the training set, while the remaining 100 are designated as the test set. Initially, we independently generate all static parameters using uniform distributions. Subsequently, we independently generate the dynamic parameters (traffic demands) for different problems based on a fixed network topology determined by the aforementioned static parameters. To control the problem difficulty, we establish a certain constraint relationship between the sum of all types of traffic demands for user n at time t and the sum of the edge capacities as follows. ∑_k=1^K=8d^t_k,n≤ 2*∑_e∈ E_k,n c^b_e,n≤∑_e∈ E_k,n c^m_e,n Due to the existence of traffic splitting ratios, Eq. (<ref>) cannot guarantee that a randomly selected solution satisfies all constraints of problem (<ref>), as demonstrated in later experiments with random networks. Further information regarding the experimental setup can be found in Appendix <ref>.
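As a rough sketch of how demand instances respecting this cap can be drawn (a simplification of the full procedure in Appendix <ref>; the uniform raw demands and the rescaling step are our own illustrative choices):

    import numpy as np

    def sample_demands(cb_edge, K=8, T=48, rng=np.random.default_rng()):
        # cb_edge: basic capacities of the links connected to this edge
        cap = 2.0 * float(np.sum(cb_edge))         # per-slot budget from the inequality above
        d_in = rng.uniform(0.0, 1.0, size=(K, T))
        d_out = rng.uniform(0.0, 1.0, size=(K, T))
        for d in (d_in, d_out):
            totals = d.sum(axis=0)
            d *= np.minimum(1.0, cap / np.maximum(totals, 1e-12))   # enforce the cap per slot
        return d_in, d_out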
We employed the following two comparative methods to evaluate the quality of the warmup generated by the GSSN sampler for problem (<ref>).
* Gurobi: a commercial solver for mixed-integer programming problems. Academic version 9.1.2 is used in this study. Gurobi solves problem (<ref>) using the linearization model developed in Appendix <ref>.
* Random Sampling Network (RSN): the structure of the network is analogous to that of the GSSN, except for the users' inbound and outbound traffic link selection, which no longer adheres to the probability distribution obtained through neural network learning. Instead, a candidate solution is randomly generated from the set of admissible peering links E_k,n based on the uniform distribution over its set of all possible options.
§.§ Neural Network Training and the Sampling Distribution Training the neural network involved 100 epochs, with a batch size of 1. We take the average of the soft-loss functions over the test problems as the loss function in training. The decays of the loss function value for the training and testing problems are similar, as suggested by Figure <ref>. We observed no violation of the constraints during the training process, i.e., all GSSN samples are feasible solutions due to the mild difficulty of the problem controlled by the assumption in (<ref>). Since the GSSN utilizes learned probability distributions to randomly sample its outputs, the resulting outputs possess a certain level of randomness. To provide an intuitive illustration of this randomness, we fix a particular problem and randomly sample the GSSN output 1000 times, and Figure <ref> displays the histogram of feasible solution costs. Figure <ref> reveals that the cost values of the GSSN outputs approximately follow a Gaussian distribution. Given the relatively negligible time of sampling (in our experimental setting, approximately 0.015 seconds per sample), we can readily increase the number of samples to search for feasible solutions with lower objective values. §.§ The Comparison of WarmUp Solutions To evaluate the quality of feasible solutions generated by the GSSN for the network flow problem in (<ref>), we conduct two sets of control experiments using the Gurobi solver and the RSN algorithm, respectively. We utilize three approaches to obtain initial feasible solutions for the problem. The first method involves configuring the Gurobi solver with a maximum search time of 300 seconds and setting the Gurobi parameter "model.Params.SolutionLimit=1" to prioritize the search for a feasible solution. The other two methods use the feasible solutions obtained through RSN and GSSN sampling for the network flow problems, which we compare against the initial feasible solution obtained through the Gurobi computation. To provide an intuitive representation, a scatter plot (Figure <ref>) is generated to illustrate the solution times and the corresponding objective function values (also known as cost) for the test problems. The mean and standard deviation (std) of the objective function values and solution times for the three methods can be found in Table <ref>. Based on the analysis of Figure <ref> and Table <ref>, noticeable differences are observed among the three methods in terms of the time required to obtain feasible solutions, where RSN exhibits the shortest time, followed by GSSN, and then Gurobi by a significant margin. Regarding the computed feasible solution values, GSSN produces the smallest objective function values, followed by RSN, then Gurobi.
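For reference, the Gurobi configuration used in these comparisons amounts to a few parameter settings, and the warm-started runs in the next subsection load a sampled allocation as a MIP start; the following sketch uses the gurobipy API (building the linearized model itself is omitted, and the helper names are ours):

    import gurobipy as gp   # the model below is a gurobipy Model built from Appendix <ref>

    def first_feasible(model, time_limit=300):
        # stop as soon as one feasible solution is found, or after the time limit
        model.Params.SolutionLimit = 1
        model.Params.TimeLimit = time_limit
        model.optimize()
        return model.SolCount > 0

    def solve_with_warm_start(model, y_vars, y0, time_limit=300):
        # y0: a 0-1 allocation sampled by GSSN or RSN, loaded as a MIP start
        for var, val in zip(y_vars, y0):
            var.Start = val
        model.Params.TimeLimit = time_limit
        model.optimize()
        return model.ObjVal if model.SolCount > 0 else None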
Interestingly, approximately half of the feasible solutions obtained by Gurobi have significantly larger magnitudes, reaching up to 1e6, compared to the other half, which have magnitudes similar to those obtained by GSSN and RSN. Thus, the initial solution generated by Gurobi can be far from optimal. When comparing GSSN with RSN, it is observed that the mean of the cost obtained by RSN is around twice that of GSSN. Besides, GSSN has a smaller standard deviation, indicating a more concentrated and stable distribution of feasible solutions. §.§ Use Gurobi to Test the Solution Quality of Three Warmups In this section, we utilize the feasible solutions generated by the three methods for the network flow problems as inputs to warm up the Gurobi solver for 100 randomly generated test problems. A maximum solving time of 300 seconds is set for each problem. The cost of the problems solved by the three methods after 300 seconds is recorded. The histograms of the cost distributions obtained by the three methods as warm-ups are shown in Figure <ref>, and the corresponding statistics of the mean and standard deviation of the costs can be found in Table <ref>. From Figure <ref> and Table <ref>, we observe that the GSSN method has the smallest mean and standard deviation among the three feasible solution generation methods. This indicates that the warmup feasible solutions generated by GSSN are of superior quality and more likely to escape the attraction domain of local minima with high cost values. In contrast, some of the warmup solutions generated by RSN and Gurobi remain near local minima after 300 seconds of solving. Thus, we can use GSSN samples as short-term warmups for classical solvers like Gurobi. §.§ Generalization Test of GSSN Given our objective of evaluating the generalization capabilities of the GSSN and RSN sampling algorithms in dynamic scenarios, where rapid generation of feasible solutions and traffic allocation plans is essential, our focus is primarily on changes in problem cost values. Our experiments reveal that Gurobi's performance in finding feasible solutions deteriorates for user counts exceeding 15, with some problems requiring more than 300 seconds to find a feasible solution. Consequently, we do not employ Gurobi to obtain more accurate solutions or assess the quality of longer-duration solutions. As such, Gurobi is excluded from the subsequent experimental comparisons. The training set for GSSN is constructed by simulating a scenario involving 10 users and 48 time slots for problem <ref>. Subsequently, we aim to evaluate the generalizability of the trained GSSN model in scenarios characterized by more users and more time slots. The specific experimental configurations for assessing the model's generalization performance are provided below:
* the generalization of time slots: We fix the number of users (N = 10) and the static parameters for problem (<ref>). We generate 100 independent and identically distributed problems for each time slot size by sampling the inbound/outbound traffic demand.
* the generalization of user numbers: We fix the number of time steps at 48 for the network flow problem (<ref>) and randomly generate 100 problems for each user number. In contrast to the parameter generation method used in training data generation, the 100 problems are generated by independently and identically sampling both the static and dynamic parameters.
Next, we compute the average cost of the generated feasible solutions for these 100 problems in the GSSN and RSN models.
We plot the average costs obtained in relation to the number of time slots or users, as illustrated in Figure <ref>. We can see that the average cost of GSSN and RSN sampling increases approximately linearly as the number of time slots and users increases. However, the GSSN consistently outperforms the RSN model in generating feasible solutions of lower cost. In the generalization experiments involving time slots, both models demonstrate consistent success in sampling feasible solutions for each test problem. However, in the generalization experiments pertaining to varying user numbers, it is noteworthy that the success rate of generating feasible solutions by both the GSSN and RSN models did not reach 100%. To assess the level of difficulty in generating feasible solutions through GSSN and RSN sampling, we establish two metrics.
* the single sampling feasibility rate of the algorithm (SSFR): For a given set of N problems, each problem is sampled M times, and the number of successful samples among the N*M samples is denoted as L. The single sampling feasibility rate is then calculated as SSFR = L/(MN).
* the feasibility rate of the practical algorithm (PFR): For a given set of N problems, each problem is sampled M times. If at least one feasible solution is generated among the M samples, the problem counts as solved; the number of solved problems among the N problems is denoted as S. The feasibility rate of the practical algorithm is then calculated as PFR = S/N.
Figure <ref> shows the values of SSFR and PFR for different numbers of users. Further examination of Figure <ref> shows that as the number of users and the problem difficulty increase, the feasibility rates of both GSSN and RSN decline. However, GSSN exhibits a higher probability of generating a feasible solution with a single sample than RSN. When employing an algorithm that samples 100 times and evaluating the PFR indicator, we observe that while the feasibility rates of both GSSN and RSN decrease with increasing numbers of users in Figure <ref>, the rate of decline for GSSN is significantly slower than that of RSN. For user counts ranging from 10 to 50, the feasibility rate of GSSN remains above 70%, whereas for user counts exceeding 25, the feasibility rate of RSN falls below 70%. In practical applications involving dynamic network flow problems, the number of users typically ranges from 1000 to 5000. The associated costs for data generation and network training at this scale are substantial. Consequently, a neural network trained on smaller user counts while maintaining good generalization performance in larger user scenarios is highly valuable. § CONCLUSION AND DISCUSSION We formulated the quantile optimization problem for edge-cloud traffic scheduling as a class of constrained integer optimization problems with parameters describing the traffic demands and link capacities. For the edge-cloud computing problem, we solved the corresponding unconstrained continuous optimization problem via unsupervised learning using the GSSN, where the inputs and outputs are the problem parameters and the allocation scheme, respectively. In the numerical experiments, we tested the quality of the outputs in terms of the objective function value, warmups for classical solvers like Gurobi, and the generalization property by varying the number of time slots and users. From the numerical results, the GSSN method outperforms classical solvers and the RSN approach, and its outputs act as short-term warmups for Gurobi.
The GSSN shows a reasonable generalization property, which is essential in real-world applications. Regarding future work, the proposed problems are within the scope of a static regime, where we assume the traffic demands are known in advance. However, the dynamic regime, in which the traffic allocation depends on real-time traffic demands, is of greater practical significance. Also, despite the good numerical outcomes, using the soft-loss function as the objective function in training does not theoretically guarantee that the output is always feasible. We already observed the decay of the feasibility rate in the generalization test. Improving the feasibility rate for neural network-based methods remains a critical issue in solving complex integer programming problems, and the GSSN is no exception. In the future, we need to develop more systematic tools to ensure the feasibility of the GSSN output and seek opportunities to implement the GSSN on more challenging edge-cloud computing problems. § THE PROBLEM DATA GENERATING METHOD This appendix delineates the methodology for generating the training and test sets used in Section <ref>. Following the interpretation in Remark <ref> and for simplicity, we propose the following assumptions regarding the problem set and the parameters therein:
* the problem set is categorized by its network topology (Figure <ref>), where the problems share the same static parameter (Table <ref>) values;
* the physical maximum link capacities (c_e^M, c_ℓ^M) should be large enough such that the related constraints in (<ref>) always hold;
* for each problem, the inbound and outbound traffic demands at different time slots {d̅_k,n^t, d_k,n^t}_t=1^T are i.i.d. samples for each k and n.
Since we still require the billable bandwidth to be no greater than the maximum link capacities (c_e^m), truncating the constraints regarding the physical maximum link capacities (c_e^M) does not intrinsically change the problem complexity. Meanwhile, such truncations help us tune the parameter values so that the difficulty of finding a feasible solution by RSN is reasonable. Guided by these assumptions, we first set the static parameter values and then introduce the sampling method for the dynamic parameters, conditioned on the existing problem data. §.§ Static Parameter Values A portion of the static parameter values is determined directly as illustrated in Table <ref>. Here, Uniform(a,b) and Binomial(EL,0.5) stand for the uniform distribution on the interval [a,b] and the binomial distribution of EL Bernoulli trials, respectively. The remaining static parameters are the link capacities and link rates of the ISP links, which should be consistent with the parameter values of the edge links connected to the ISP. Since each ISP link merges all the edge-link traffic connected to it (Figure <ref>), the ISP link capacities should correspond to the total link capacities of the connected edge links. For example, the specific generation method of c_ℓ_i^b adheres to the equation below, c^b_ℓ_i∼Uniform(0.8*C^b_ℓ_i,0.9*C^b_ℓ_i), C^b_ℓ_i = ∑_n=1^N c^b_e_n,i, where i = 1,2,…,EL. The remaining ISP link capacities c^m_ℓ_i and c^M_ℓ_i are sampled in the same manner as Eq. (<ref>). In (<ref>), we introduced a pair of contraction coefficients that can be adjusted to produce problem sets of the desired difficulty.
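As a small illustration of Eq. (<ref>), the ISP-side basic capacities can be drawn directly from the aggregated edge-link capacities (a sketch; the contraction coefficients 0.8 and 0.9 follow the text, while everything else is illustrative):

    import numpy as np

    def sample_isp_basic_capacities(cb_edges, lo=0.8, hi=0.9, rng=np.random.default_rng()):
        # cb_edges[n, i]: basic capacity of edge link e_{n,i}; columns index the EL=4 ISP links
        C = np.asarray(cb_edges).sum(axis=0)    # C^b_{l_i} = sum_n c^b_{e_{n,i}}
        return rng.uniform(lo * C, hi * C)      # c^b_{l_i} ~ Uniform(0.8 C, 0.9 C)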
§.§ Dynamic Parameter Values Subsequently, subject to the fixed static parameter values, we generate samples of the dynamic parameters, i.e., the traffic demands, which lead to the training and test problem sets used in Section <ref>. We introduce the inbound and outbound traffic demand random tensors, D̅ = (d̅^t_k,n)∈ℝ^K× N × T and D = (d^t_k,n)∈ℝ^K× N × T, respectively. Motivated by the assumptions above, we establish Algorithm <ref> to efficiently generate i.i.d. samples of the elements d̅^t_k,n and d^t_k,n. To remove possible extreme traffic demands, we employ Eqs. (<ref>) and (<ref>) to ensure that the aggregate inbound and outbound traffic demands of user n at time t are controlled by the sum of the maximum link capacities of all connected edge links. Taking the inbound traffic demands as an example, we have ∑_k=1^Kd̅^t_k,n≤∑_k=1^8 0.25∑_e∈ E_k,n c^b_e = 2∑_e∈ E_k,n c^b_e≤∑_e∈ E_k,n c^m_e. In Algorithm <ref>, the computation in the t-loop and k-loop can be processed via vector operations, which improves efficiency. Figure <ref> depicts an example of inbound/outbound traffic demand sampling in a scenario with a single user. § THE LINEARIZED NETWORK SCHEDULING PROBLEM In this appendix, we introduce the linearization of the network scheduling problem in (<ref>). To resolve the nonlinearity introduced by the quantile function g_95, we introduce 0-1 intermediate variables (u̅_e,n^t, u_e,n^t), e∈ E_n, n=1,2,…, N and (u̅_ℓ_i^t, u_ℓ_i^t), i=1,2,3,4 to label the top 5% inbound/outbound traffic time slots for edge links and ISP links, respectively. For example, u̅_e,n^t=1 means the averaged inbound traffic of the current time slot belongs to the top 5% of inbound traffic within the billing period. We propose the following linearized constraints for (<ref>).
* identify the billable bandwidth of the edge links ∑_t=1^Tu̅_e,n^t≤ 0.05T, ∑_t=1^Tu_e,n^t≤ 0.05T, ∀ e∈ E_n, ∀ n, f̅^t_e,n = ∑_k=1^8x̅^t_e, k,n, f_e,n^t = ∑_k=1^8x^t_e,k,n, z_e≥f̅_e,n^t - c_e^Mu̅_e,n^t, z_e≥f_e,n^t - c_e^Mu_e,n^t, ∀ t, ∀ e∈ E_n, ∀ n;
* identify the billable bandwidth of the ISP links ∑_t=1^Tu̅_ℓ_i^t≤ 0.05T, ∑_t=1^Tu_ℓ_i^t≤ 0.05T, i=1,2,3,4; X̅_ℓ_i^t = ∑_n=1^Nf̅^t_e_n,i,n, X_ℓ_i^t = ∑_n=1^Nf_e_n,i,n^t, z_ℓ_i≥X̅_ℓ_i^t - c_ℓ_i^Mu̅_ℓ_i^t, z_ℓ_i≥X_ℓ_i^t - c_ℓ_i^Mu_ℓ_i^t, ∀ t, i=1,2,3,4;
* the link capacities constraint f̅_e,n^t≤ c_e^m· (1 - u̅_e,n^t) + c_e^M·u̅_e,n^t, f_e,n^t≤ c_e^m· (1 - u_e,n^t) + c_e^M·u_e,n^t, ∀ t, ∀ e∈ E_n, ∀ n X̅_ℓ_i^t≤ c_ℓ_i^m· (1 - u̅_ℓ_i^t) + c_ℓ_i^M·u̅_ℓ_i^t, X_ℓ_i^t≤ c_ℓ_i^m· (1 - u_ℓ_i^t) + c_ℓ_i^M·u_ℓ_i^t, ∀ t, i =1,2,3,4; z_e≤ c_e^m, ∀ e∈ E, z_ℓ_i≤ c_ℓ_i^m, i =1,2,3,4.
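To illustrate how these linearized constraints can be passed to a solver, the following gurobipy sketch adds the indicator variables and billable-bandwidth constraints for a single edge link (f̅ and f are assumed to be linear expressions already built from the y variables; the helper names are ours):

    import gurobipy as gp
    from gurobipy import GRB

    def add_percentile_constraints(m, f_in, f_out, c_m, c_M, T):
        # f_in[t], f_out[t]: linear expressions for the inbound/outbound traffic at slot t
        u_in = m.addVars(T, vtype=GRB.BINARY)    # marks the exempt top-5% slots
        u_out = m.addVars(T, vtype=GRB.BINARY)
        z = m.addVar(lb=0.0, ub=c_m)             # billable bandwidth, z <= c^m
        m.addConstr(gp.quicksum(u_in[t] for t in range(T)) <= 0.05 * T)
        m.addConstr(gp.quicksum(u_out[t] for t in range(T)) <= 0.05 * T)
        for t in range(T):
            m.addConstr(z >= f_in[t] - c_M * u_in[t])    # z covers every non-exempt slot
            m.addConstr(z >= f_out[t] - c_M * u_out[t])
            m.addConstr(f_in[t] <= c_m * (1 - u_in[t]) + c_M * u_in[t])
            m.addConstr(f_out[t] <= c_m * (1 - u_out[t]) + c_M * u_out[t])
        return z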
http://arxiv.org/abs/2307.07243v1
20230714093558
Comparative study of variations in quantum approximate optimization algorithms for the Traveling Salesman Problem
[ "Wenyang Qian", "Robert A. M. Basili", "Mary Eshaghian-Wilner", "Ashfaq Khokhar", "Glenn Luecke", "James P. Vary" ]
quant-ph
[ "quant-ph" ]
[email protected] Instituto Galego de Fisica de Altas Enerxias (IGFAE), Universidade de Santiago de Compostela, E-15782 Galicia, Spain Department of Physics and Astronomy, Iowa State University, Ames, IA 50011, USA [email protected] Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, USA [email protected] Department of Mathematics, Iowa State University, Ames, IA 50011, USA [email protected] Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, USA [email protected] Department of Mathematics, Iowa State University, Ames, IA 50011, USA [email protected] Department of Physics and Astronomy, Iowa State University, Ames, IA 50011, USA The Traveling Salesman Problem (TSP) is one of the most often-used NP-Hard problems in computer science to study the effectiveness of computing models and hardware platforms. In this regard, it is also heavily used as a vehicle to study the feasibility of the quantum computing paradigm for this class of problems. In this paper, we tackle the TSP using the quantum approximate optimization algorithm (QAOA) approach by formulating it as an optimization problem. By adopting an improved qubit encoding strategy and a layerwise learning optimization protocol, we present numerical results obtained from the gate-based digital quantum simulator, specifically targeting TSP instances with 3, 4, and 5 cities. We focus on the evaluations of three distinctive QAOA mixer designs, considering their performances in terms of numerical accuracy and optimization cost. Notably, we find a well-balanced QAOA mixer design exhibits more promising potential for gate-based simulators and realistic quantum devices in the long run, an observation further supported by our noise model simulations. Furthermore, we investigate the sensitivity of the simulations to the TSP graph. Overall, our simulation results show the digital quantum simulation of problem-inspired ansatz is a successful candidate for finding optimal TSP solutions. Comparative study of variations in quantum approximate optimization algorithms for the Traveling Salesman Problem James P. Vary August 12, 2023 ================================================================================================================= § INTRODUCTION For over a century, the Traveling Salesman Problem (TSP) <cit.> has inspired hundreds of works and dozens of algorithms, of both exact and heuristic approaches. Today, the TSP has become so quintessential in modern computing that it is commonly considered the prototypical NP-Hard combinatorial optimization problem, possessing far-reaching impact on countless applications in science, industry and society. Consequently, the TSP is frequently taken as an ideal candidate for new computational models and non-standard algorithmic approaches, including approximate approaches like simulated annealing <cit.> and self-organizing maps <cit.>, which have been widely employed to tackle the TSP. Recent advancements in quantum technologies have paved the way for various quantum computing approaches to tackle the Traveling Salesman Problem (TSP). These approaches include the quantum Held-Karp algorithm <cit.>, quantum annealing (QA) <cit.>, and the more general variational quantum algorithm <cit.> (VQA). VQA approaches have found extensive applications in diverse fields such as chemistry <cit.>, physics <cit.>, and finance <cit.>, among others. 
Although complete demonstrations of quantum advantage over classical algorithms are currently limited due to the noisy intermediate-scale quantum (NISQ) era <cit.>, exploring these quantum algorithms remains crucial as experimentation on prototype quantum hardware continues to rapidly approach what can be classically simulated by even the world’s largest supercomputers. Notably, the quantum approximate optimization algorithm (QAOA) <cit.>, a subclass of the general VQA, has been successfully applied to a number of optimization problems <cit.>, including the max-cut problem <cit.>, vehicle routing <cit.>, DNA sequencing <cit.>, protein folding <cit.>, as well as the TSP <cit.>. In comparison to the popular Hardware Efficient VQA, the QAOA takes advantage of the domain knowledge of the specific problem at hand to produce a variational ansatz with fewer parameters and a shallower depth. Furthermore, an extension of the original QAOA called the quantum alternating operator ansatz <cit.> offers a generalized approach that specializes in solving problems with hard constraints. In the NISQ era, the QAOA approach can be particularly advantageous for addressing the challenges of the Traveling Salesman Problem (TSP), owing to the QAOA's hybrid feature, hardware-friendly structure, and controlled optimization. Being a hybrid approach, the QAOA exhibits robust tolerance to systematic errors by leveraging classical computer optimizers. Its layered ansatz structure inspired by the problem Hamiltonian allows for high flexibility in the circuit depth and qubit coherence time, incorporating the capabilities offered by the quantum backends. Compared with the QA <cit.>, the QAOA also enables fine control of the optimization through its finite layers, which is particularly beneficial in the current NISQ era. However, the numerical simulation of the QAOA on the TSP, especially in the multiple layer region, is not well understood, since the non-adiabatic mechanism of the QAOA differs significantly from QA <cit.>. Therefore, it becomes imperative to explore various implementations of the QAOA to determine the optimal path for simulation. Conducting investigations of these problems on digital quantum computers or simulators is essential, as it has the potential to unveil new quantum simulation strategies for traditional optimization tasks. We distinguish the present work from the previous studies by constructing our QAOA using different ansatzes and comparing their performances in both numerical accuracy and resource cost, which addresses a crucial aspect that is often neglected in conventional studies. In this work, we study the effectiveness of three distinct designs of the QAOA in solving the TSP by adopting a layerwise learning optimization protocol <cit.> on digital quantum simulators via Qiskit <cit.>. We organize this paper as follows: In Sect. II, we introduce the TSP and its mathematical formulation as a binary constraint optimization problem. In Sect. III, we outline the QAOA methods, with particular focus on the initialization, mixer ansatz, and measurement protocol employed in this work. In Sect. IV, we present and compare the numerical results of the QAOA simulation on TSP instances with 3, 4, and 5 cities, utilizing different ansatz designs. We discuss the impact of the device noise and TSP variations on the simulation results. In Sect. V, we summarize the results and discuss plans for the future. 
§ TRAVELING SALESMAN PROBLEM In this section, we first define the TSP as an optimization problem and then improve its formulation by taking advantage of symmetry in the solution. §.§ TSP formulation as optimization problem The Traveling Salesman Problem asks for the shortest path that visits each city exactly once and returns to the starting city. In the symmetric case where the distance between any two cities is the same regardless of the traveling direction, the TSP can be reformulated as an undirected graph problem where its vertices represent cities and edge weights represent traveling distances. Mathematically, given an undirected graph G with vertices V and edges E, i.e., G = (V, E), we aim to find a Hamiltonian cycle that goes through all |V| nodes exactly once with the smallest total weights of the connecting edges on the path. In this graph formulation of the TSP, any valid cycle, be it minimum or not, can be represented by a visiting order or a permutation of integers, such as {0, 1, ..., n-1}, where the integers are the city indices starting at 0 for a total of n cities. Alternatively, the visiting order on a TSP graph can be conveniently described by a sequence of binary decision variables x_i,t, indicating whether the city-i is visited at time t <cit.>. If x_i,t=1 then the city-i is visited at t, otherwise the city is not visited by the traveling salesman. Naively, to fully describe the solution to an n-city TSP, a total of n^2 binary variables is needed in this representation. Alternatively, this “one-hot" representation of binary decision variables can be written collectively in either matrix or flattened array format for numerical implementation. For instance, a valid Hamiltonian cycle of permutation x=(0, 1, 2, 3) is translated into binary decision variables x as x =(0, 1, 2, 3) ≡[ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1 ]≡ 1000010000100001 , where the matrix row index represents each city index, and the column index represents each time instance. In this work, all three notations (permutation, matrix, and bit string array) are used interchangeably. Any Hamiltonian cycle in the TSP has a unique sequence of binary decision variables or “bit string". But the reverse is not true since a large portion of the possible bit strings may not correspond to any meaningful permutation. Specifically, we classify any bit string x into three categories or states: x is true if it is a permutation and gives the shortest path; false if it is a permutation but does not give the shortest path; and invalid if it is not a permutation. The true and false bit strings are also called valid bit strings. Any bit string can be translated to a Hamiltonian cycle if and only if it is a permutation. Clearly, invalid solutions are disallowed traveling orders for the TSP. With binary decision variables x, a true solution to an n-city TSP can be found by finding an x that minimizes the following cost function <cit.>, C_dist(x) = ∑_0≤ i,j<nω_ij∑_t=0^n-1 x_i,t x_j,t+1, where ω_ij is the distance (or edge weight in the undirected graph) between city-i and city-j.[In symmetric TSP, ω_ij = ω_ji and ω_ii=0.] Here, C_dist(x) also gives the shortest TSP distance when x is a true solution. Since the cost function itself does not forbid invalid solutions in general, additional constraint conditions must be satisfied for a valid Hamiltonian cycle, such as ∑_i=0^n-1 x_i,t = 1 for t=0,1,⋯,n-1 ∑_t=0^n-1 x_i,t = 1 for i=0,1,⋯,n-1 where Eq.
(<ref>) forbids multiple cities from being visited by the traveler at the same time, and Eq. (<ref>) forbids revisiting the same city. Alternatively, in the matrix format, these constraints are easily implemented by requiring that every row and column sums to one exactly. These two hard constraints are the necessary conditions for any valid solution, though not necessarily a true solution to a TSP. To formulate the TSP as a minimization problem, these constraint conditions are conveniently incorporated as penalty terms, such that the combined cost function C(x) becomes C(x) = C_dist(x) + λ C_penalty(x) = ∑_0≤ i,j<nω_ij∑_t=0^n-1 x_i,t x_j,t+1 + λ{∑_t=0^n-1(1-∑_i=0^n-1 x_i,t)^2 + ∑_i=0^n-1(1-∑_t=0^n-1 x_i,t)^2}, where λ is the weight factor of the penalty term, serving as a Lagrange multiplier. λ should be positive and sufficiently large. It is easy to see that a bit string x gives the minimum of C(x) if and only if x is a true solution to the given TSP. Finding a Hamiltonian cycle to the TSP is now equivalent to finding an x^* that minimizes C(x) in Eq. (<ref>), i.e., x^* = arg min_x C(x). §.§ Improved TSP by eliminating rotational symmetry Symmetry plays a vital role in many graph optimization problems, and exploiting it can help reduce the problem's complexity. In the previously introduced TSP optimization, one uses n^2 decision variables for n cities; however, solutions obtained after the optimization display “rotational" symmetry: they are physically identical up to some rotation. For example, a visiting order of permutation (0,1,2) is equivalent to (1,2,0) and (2,0,1) for a 3-city TSP. They form a natural equivalence class on the solution sets. To reduce the size of the search space (and the number of qubits to encode), a simple but significant improvement can be made by fixing the starting city <cit.>. Without loss of generality, we fix city-0 as our starting point, and the traveling salesman will return to city-0 after visiting all the other cities exactly once. Then, the improved cost functions C'_dist(x) and C'(x) become C'_dist(x) = ∑_1≤ i,j<nω_ij∑_t=1^n-2 x_i,t x_j,t+1 + ∑_i=1^n-1ω_0i(x_i,1+x_i,n-1), C'(x) = C'_dist(x) + λ C'_penalty(x) = ∑_1≤ i,j<nω_ij∑_t=1^n-2 x_i,t x_j,t+1 + ∑_i=1^n-1ω_0i(x_i,1+x_i,n-1) + λ{∑_t=1^n-1(1-∑_i=1^n-1 x_i,t)^2 + ∑_i=1^n-1(1-∑_t=1^n-1 x_i,t)^2}. In this new cost function, the decision variables x_i,t only take indices i∈{1, 2, ⋯, n-1} and t∈{1, 2, ⋯, n-1}, and thus we only need effectively (n-1)^2 decision variables for an n-city TSP after fixing the initial city. The reduction in the length of the bit string is especially advantageous because it is ultimately equivalent to reducing the number of qubits for encoding the problem on a quantum circuit. Additionally, it is important to point out that this TSP optimization formulation works for a general symmetric TSP, not relying on a flat surface, which can be generalized to many real-world applications where non-planar relations are ubiquitous, such as social networks, stock markets, material science, and so forth. An asymmetric TSP (ω_ij≠ω_ji) can also in principle be formulated similarly but is not considered within the scope of this work. There are many other ways to formulate the n-city TSP as an optimization problem <cit.>, usually requiring more than n^2 variables. Recent work <cit.> using the Hamiltonian Cycle Detection oracle leads to even fewer qubits.
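For reference, the improved cost C'(x) is inexpensive to evaluate classically for any candidate bit string, which is convenient for checking measured solutions later; the sketch below assumes the row-major flattening of the (n-1)×(n-1) matrix used in this work (the helper itself is our own illustration):

    import numpy as np

    def tsp_cost(bits, w, lam):
        # bits: length (n-1)^2 bit string with city 0 fixed as the start; w: n x n distances
        n = w.shape[0]
        x = np.asarray(bits, dtype=float).reshape(n - 1, n - 1)   # rows: city i, columns: time t
        dist = sum(w[i + 1, j + 1] * x[i, t] * x[j, t + 1]
                   for i in range(n - 1) for j in range(n - 1) for t in range(n - 2))
        dist += sum(w[0, i + 1] * (x[i, 0] + x[i, n - 2]) for i in range(n - 1))
        penalty = np.sum((1 - x.sum(axis=0)) ** 2) + np.sum((1 - x.sum(axis=1)) ** 2)
        return dist + lam * penalty

    # the visiting order (0, 1, 2, 3) corresponds to the identity matrix on cities 1-3
    w = np.array([[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]], dtype=float)
    print(tsp_cost(np.eye(3).flatten(), w, lam=10.0))   # prints the tour length 4.0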
Within the n^2-variable formulation, an alternative approach to formulating the TSP expresses the cost function in terms of adjacency matrix, C_adj(x) = ∑_0≤ i,j<nω_ij x^adj_ij, where x^adj is the adjacency/connectivity representation of a permutation.[For an example, the adjacency matrix for (0, 1, 2, 3) visiting order in matrix form is x^adj = [ 0 1 0 1; 1 0 1 0; 0 1 0 1; 1 0 1 0 ]. ] The adjacency matrix representation can be particularly useful in symmetric TSP because time degrees of freedom are automatically factored out. Penalty terms for the cost function can be conveniently included by the symmetry about the main diagonal. However, unlike our adopted construction, it is not straightforward to reduce the number of decision variables in Eq. (<ref>), and therefore we leave it for a future study. In the subsequent section, we introduce the quantum approximate optimization algorithm based on the improved TSP optimization formulation according to Eq. (<ref>). § QUANTUM APPROXIMATE OPTIMIZATION ALGORITHM (QAOA) The quantum approximate optimization algorithm (QAOA) <cit.> is a general quantum heuristic approach for solving optimization problems. In this section, we introduce the QAOA workflow in detail and its application to the TSP formulation introduced in Sect. <ref>. §.§ QAOA workflow The QAOA is deeply connected with the adiabatic quantum computation (AQC) <cit.> which is based on the adiabatic theorem. In AQC, the whole simulation process can be viewed as a time-dependent Hamiltonian evolution represented by H(t), where H(t) = (1-t/T) H_M + t/T H_P, Here, H_M represents a known ansatz and H_P is the target Hamiltonian that one aims at finding a ground state. According to the adiabatic theorem, by gradually introducing perturbation, an initial eigenstate of H(t=0)=H_M will evolve into the ground state of H(t=T)=H_P. However, in practice, simulating this process can be extremely time-consuming, and accurately estimating a suitable duration poses its own challenges. The fundamental idea behind the QAOA is to approximate this adiabatic process by parameterizing the infinitely-long time evolution into finite time steps, addressing practical considerations. In both the original QAOA <cit.> and the extended QAOA <cit.>, the hybrid quantum approach consists of three essential parts: * State initialization with initial state |s⟩. * Parameterized unitary ansatz U_p(β⃗, ), a variational ansatz of p layers for the TSP, based on two alternating Hamiltonians H_P and H_M using respective parameters β⃗ and . * Measurement and optimization of the cost expectation ⟨β⃗, |C(x)| β⃗, ⟩ for the final state | β⃗, ⟩ where an optimizer on a classical computer is used for the minimization. Putting the three parts together, we construct the complete QAOA circuit, where the final state after the evolution is |β⃗, ⟩ =U_p(β⃗, )|s⟩= (∏_i=1^p U_M(β_i) U_P(γ_i))|s⟩ = U_M(β_p) U_P(γ_p) ⋯ U_M(β_1) U_P(γ_1)|s⟩, where p is referred to as the depth (or layer number) of the QAOA. Specifically, the two alternating unitary ansatzes in each layer are: U_P(γ_i) = e^-iγ_iH_P, U_M(β_i) = e^-iβ_iH_M, where H_P is the problem Hamiltonian derived from the cost function and H_M is the mixer Hamiltonian that explores the feasible subspace. In this work, we refer to the QAOA ansatz with p layers as p-QAOA. Note and β⃗ are parameter vectors of length p to be optimized, and there is only one single parameter γ_i (β_i) for the associated unitary ansatz U_P (U_M) per layer. 
It means there are only 2 parameters per layer for the QAOA, independent of the number of qubits (i.e., problem size), which makes the approach highly scalable. These parameters or angles can also be regarded as mimicking the trotterization time steps in the QAOA to approximate the adiabatic evolution in Eq. (<ref>); nonetheless, the behavior in the finite layer limit can be drastically different. In the last few years, many variants of the QAOA approach have emerged <cit.>. One such variant is the multi-angle QAOA (ma-QAOA) <cit.>, which uses a unique angle for each element of the Hamiltonian. This approach could potentially reduce circuit depth required for solving the TSP. Another variant, the digitized-counterdiabatic QAOA (DC-QAOA) <cit.> introduces an additional problem-dependent counterdiabatic driving term in each layer to enhance the convergence rate of the optimization process. Additionally, the adaptive-QAOA (ADAPT-QAOA) <cit.>, inspired by the adaptive VQE, systematically selects the mixer ansatz based on the optimization, potentially improving the simulation outcome. Since these more advanced QAOAs generally require more than two parameters per layer and additional simulation time, we opted not to incorporate them in this initial work; however, we have plans to include these variants in a subsequent study, allowing for a more comprehensive analysis of the QAOA to the TSP. §.§ From binary decision variables to qubits To carry out the optimization on quantum computers, an efficient qubit encoding scheme is necessary to map the binary decision variable in the TSP formulation to quantum computers. Here, we use the standard boolean binary variable mapping strategy <cit.>. For an n-city TSP, we simply map x_i,t↦ (I_(i,t) - Z_(i,t))/2, where Z_(i,t) is the Pauli- matrix (see App. <ref>) at qubit location (i,t) on a two-dimensional lattice. To identify the qubit on the lattice with its realistic index in a quantum device, one may use the ideal mapping (ignoring the device connectivity) that takes (i,t) → ni+t for the original TSP formulation in Eq. (<ref>). For the improved TSP formulation according to Eq. (<ref>), since both sets of the i=0 and t=0 qubits are never used, we economically map (i,t) ↦ (n-1)(i-1)+(t-1). such that only a total of (n-1)^2 qubits is needed, from index 0 to (n-1)^2-1, for n cities. Reducing qubit number is crucial in the practical quantum simulation, and therefore we adopt the mapping strategy in Eq. (<ref>) for the improved TSP formulation throughout this work. §.§ State initialization The initial states are one of the key components in the QAOA approach. In the original QAOA <cit.>, the initial states are always set to be, |+⟩^⊗ N, where N is the total number of qubits. For a n-city TSP, with the original n^2 =N case for simplicity, it means the initial state becomes |s^n_H⟩ = ^⊗ n^2|0⟩ = |+⟩^⊗ n^2 = 1/√(2^n^2)∑_x=0^2^n^2-1|x⟩. In this way, the initial quantum state |s^n_H⟩ is a superposition of all possible basis states for the problem. While this strategy is easy to implement on a quantum device using Hadamard gates , the magnitude of each basis state in the initial state shrinks exponentially as the number of cities increases because the dimension of the search space grows as 𝒪(2^N). Recently, additional initialization strategies of a restricted quantum search space following their corresponding mixing ansatzes have been considered in the QAOA. 
In particular, the so-called W_N states <cit.> can be especially useful as it represents one-hot encoding on the quantum circuit suitable for binary decision variables. For example, a W_3 state on three qubits is written as W_3 = 1/√(3)(|100⟩ + |010⟩ + |001⟩), where each bit string always sums to one. With the property of the W state, we can construct an improved initial state to satisfy the temporal or spatial constraints of the TSP automatically, i.e., Eq. (<ref>) or Eq. (<ref>), |s^n_W⟩ = (_n|0⟩)^⊗ n= (1/√(n)∑_i=0^n-1|2^i⟩)^⊗ n, where the temporal constraint is satisfied by putting together multiple W states in parallel.[Technically, there are many ways to build the _n|0⟩ state; we followed the method in Ref. <cit.>.] With a sufficiently powerful ansatz, one may also consider a permutation initial state, ignoring all superpositions, |s^n_P⟩ = |(0, 1, ⋯, n-1)⟩, where its construction is simplest, using a few Pauli--gates. We also considered an equal superposition of all permutation states, representing the minimal Hilbert space containing all the valid solutions; however, we found it to be the most challenging to initialize on the circuit. These choices of initial states provide dramatically different initial search spaces, with dimension going from 𝒪(2^n^2), 𝒪(n^n), to 𝒪(1) respectively, along with their set relation {|s_P⟩}⊂{|s_W⟩}⊂{|s_H⟩}. Notably, both the |s_H⟩ and |s_W⟩ are a superposition of solution states, but |s_P⟩ is not. The selection of initial states plays a vital role in the QAOA, as it can reduce the number of potential candidates in the quantum evolution, albeit at the expense of an increased number of quantum gates. Lastly, these initial states will be used together with their respective mixer Hamiltonians of the QAOA, which are introduced in the next section. §.§ Variational ansatzes Variational ansatzes are essential in optimizing the quantum state to represent the true solution. The variational ansatz U_p introduced in Eq. (<ref>) consists of two parts: §.§.§ Problem Hamiltonian The problem Hamiltonian is the qubitized cost function encoding the specific TSP instance to be solved in the QAOA approach. Specifically, these problem Hamiltonians are obtained by mapping the cost functions (Eq. (<ref>) and Eq. (<ref>)) onto the quantum circuit according to the encoding strategy, Eq. (<ref>), C_dist(x), C'_dist(x) → H_dist, C_penality(x), C'_penality(x) → H_penality. where the obtained operators are a sum of the Pauli- and Pauli- operators, known as the Ising Hamiltonian <cit.>. Combining them, we obtain H_P, the problem Hamiltonian of the TSP instance, H_P = H_dist + λ H_penality = ∑_i c_i Z_i + ∑_ij c_ijZ_i Z_j. As a consequence of qubit encoding, a ground state of H_P is guaranteed to be a true solution state that minimizes the respective TSP cost function. The Ising representation of the Hamiltonian is easily translated into a quantum circuit using a sequence of quantum gates. §.§.§ Mixer Hamiltonian The mixer Hamiltonian defines how the state space is to be explored and impacts how the quantum state evolves significantly with each iteration. Based on the Trotter product formula, the mixer Hamiltonian must not commute with the problem Hamiltonian, [H_M, H_P] ≠ 0, to simulate a tottered optimization like the QAOA. Many mixer Hamiltonians have been proposed <cit.> for different problems solved via QAOA. For different mixers, appropriate initial states as the eigenstates of the mixer Hamiltonian must be used in accordance with the adiabatic theorem. 
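Before turning to the individual mixers, it may help to see how the problem Hamiltonian described above is obtained in practice: once the cost is written in QUBO form, the substitution x_k → (I - Z_k)/2 mechanically produces the Z and ZZ coefficients of the Ising operator. The sketch below is our own plain-NumPy illustration, not code from the paper, and it checks the rewriting against brute-force enumeration.

```python
import numpy as np
from itertools import product

def qubo_to_ising(Q, h):
    """Rewrite E(x) = x^T Q x + h.x with x_k in {0, 1} in Ising variables.

    Using x_k = (1 - z_k)/2, where z_k = +/-1 are the eigenvalues of Z_k,
    E = const + sum_k c[k] z_k + sum_{k<l} J[k, l] z_k z_l,
    i.e. exactly the Z / ZZ structure of the QAOA problem Hamiltonian H_P.
    """
    const = Q.sum() / 4 + np.trace(Q) / 4 + h.sum() / 2
    c = -(Q.sum(axis=0) + Q.sum(axis=1)) / 4 - h / 2
    J = np.triu(Q + Q.T, k=1) / 4          # coefficient of z_k z_l for k < l
    return const, c, J

# Consistency check on a small random QUBO.
rng = np.random.default_rng(1)
Q, h = rng.normal(size=(3, 3)), rng.normal(size=3)
const, c, J = qubo_to_ising(Q, h)
for x in product([0, 1], repeat=3):
    x = np.array(x)
    z = 1 - 2 * x                          # x = 0 -> z = +1, x = 1 -> z = -1
    assert np.isclose(x @ Q @ x + h @ x, const + c @ z + z @ J @ z)
```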
In evaluating the numerical performance of QAOA for TSP, we consider three types of mixers: X mixer, XY mixer, and Row-swap mixer (RS mixer), with details explained below. (a) The X mixer is the original mixer proposed in the QAOA that works together with a number of problems such as the Max-cut problem <cit.>. It takes s_H for its state initialization. In the n-city TSP, the X mixer is H_M_X = ∑_i=0^n-1∑_t=0^n-1 X_i,t. The X mixer strategy proves most useful for quantum annealing applications, especially on practical D-Wave Systems <cit.>. It is easy to implement on most quantum backends, only requiring 𝒪(n^2) single-qubit -gates per layer in the QAOA. (b) The XY mixer is another natural candidate for the mixing Hamiltonian, preserving the Hamming distance among the acted qubits <cit.>, which is especially suited to the one-hot encoding realized by the initial states s_W. Here, we construct the XY mixer for the n-city TSP as H_M_XY = ∑_i=0^n-1∑_t=0^n-1_(i,t), (i,t+1) where the -gate is implemented via the Pauli- and Pauli- gates on the circuit. The block-wise construction allows the conservation of probability for each city in the TSP, reinforcing the satisfaction of the temporal constraint, as in Eq. (<ref>). A generic -gate across any two points (i,t) and (j,s) on the 2D lattice is _(i,t), (j,s) = X_i,t X_j,s + Y_i,t Y_j,s, where X (Y) is the Pauli- (Pauli-) matrix. Here, one should understand Eq. (<ref>) as a cyclic iteration of the -gate. For example, X_n-1,n≡ X_n-1,0 in the n-city case; other variants such as non-cyclic and fully-connected -gates can also be used. The -gate is often interchangeably referred to as the -gate, as they both redistribute the amplitudes between two qubits while preserving the total amplitude of the quantum state. Alternatively, one could use the -gate <cit.> instead of the -gate to implement the XY mixer via _u=(i,t),v=(i,t+1)=1/2(X_uX_v+Y_uY_v+Z_uZ_v+I_uI_v). where a similar performance is produced, and therefore we choose to use the simpler -gate to implement the XY mixer throughout this work. Compared with the X mixer, the XY mixer is more expensive to implement by having 𝒪(n^2) -gates per layer. (c) The Row-swap (RS) mixer has recently been proposed in the QAOA as a means of embedding hard constraints directly into the mixer Hamiltonian <cit.>. Although the RS mixer also uses the -gate, it simultaneously swaps all non-overlapping rows of qubits (corresponding to different cities) as a whole. The RS mixer can be represented as, H_M_RS =∑_i=0^n-2∑_j=i+1^n-1∏_t=0^n-1_(i,t), (j,t), where the first two sums represent all possible swapping between city-i and city-j, and the last product denotes the simultaneous swap of all corresponding entries in the associated cities. In this way, the RS mixer is capable of exploring the entire space of valid solutions when initialized on any single valid state, i.e., s_P. However, it should be noted that the RS mixer incurs a significant computational cost during the simulation due to the involvement of many tensor products of the Pauli- or Pauli- matrices. One can mitigate this expense by relying on a set of creation and annihilation operators constructed from four-qubit gates <cit.>. Nevertheless, the H_M_RS ansatz remains computationally expensive, requiring 𝒪[(n-1)(n-2)/2] four-qubit gates per layer with each four-qubit gate itself being expensive to construct. 
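The X and XY mixers just described are simple enough to write down explicitly as lists of Pauli terms. The sketch below does this on the reduced (n-1)^2-qubit register (the sums in the equations above run over the full n^2 lattice; the adaptation to the reduced register and the qubit-ordering convention are ours), and is meant as an illustration rather than the paper's implementation.

```python
def pauli_term(n_qubits, ops):
    """Pauli label, e.g. ops={2: 'X', 5: 'Y'} -> 'IIXIIY...' (qubit 0 leftmost; our convention)."""
    label = ["I"] * n_qubits
    for q, p in ops.items():
        label[q] = p
    return "".join(label)

def qubit_index(i, t, n):
    """Reduced-register mapping (i, t) -> (n-1)(i-1) + (t-1), with i, t = 1..n-1."""
    return (n - 1) * (i - 1) + (t - 1)

def x_mixer(n):
    """X mixer: sum of single-qubit X terms over the (n-1)^2 reduced register."""
    nq = (n - 1) ** 2
    return [(pauli_term(nq, {q: "X"}), 1.0) for q in range(nq)]

def xy_mixer(n):
    """XY mixer: (XX + YY) between (i, t) and (i, t+1), cyclic in the time index."""
    nq, terms = (n - 1) ** 2, []
    for i in range(1, n):
        for t in range(1, n):
            t_next = 1 if t == n - 1 else t + 1
            u, v = qubit_index(i, t, n), qubit_index(i, t_next, n)
            terms += [(pauli_term(nq, {u: "X", v: "X"}), 1.0),
                      (pauli_term(nq, {u: "Y", v: "Y"}), 1.0)]
    return terms

# For n = 4 cities: 9 qubits, 9 X terms and 2 * 9 = 18 XY terms.
print(len(x_mixer(4)), len(xy_mixer(4)))
```

Term lists of this (label, coefficient) form can then be handed to whatever operator class a chosen framework provides (for instance, Qiskit's SparsePauliOp.from_list accepts this format), but nothing in the sketch depends on a particular backend.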
§.§ Measurement and optimization protocol Based on the unitary ansatz and their appropriate initial states, the cost expectation of the QAOA is evaluated by measurements performed on quantum devices and subsequently optimized using gradient-free optimizers such as COBYLA <cit.> and SPSA <cit.>. The optimization process continues until convergence or when the maximum iteration threshold is reached. The resulting solution to the TSP is then determined by identifying the most dominant quantum state (or binary decision variable encoded in bit string). To account for statistical fluctuations in measurements, we run each quantum simulation multiple times (typically 5-10) with different random seeds and report the result with the lowest converged expectation value. Considering that the expectation values are TSP-specific, we use the standard evaluation metric, the approximation ratio (AR), to evaluate the performance by normalizing against the ideal cost in different TSPs. The AR is calculated as AR = simulation cost/ideal cost = ⟨β⃗, |C(x)| β⃗, ⟩/C_ideal≥ 1, where a lower AR corresponds to a lower expectation cost, indicating a closer approximation to the exact solution. Classical optimizers play a vital role in the optimization and their advantages can be further utilized in the QAOA: The expectation values of individual bit strings are cached and retrieved on the classical optimizer to enable fast computation of the final cost expectation during each iteration. The option to use constraint bounds of [0, 2π) for the ansatz parameters in the case of COBYLA can also accelerate the convergence, which is the main reason we primarily focused on simulations using the COBYLA optimizer in our study, although a comprehensive analysis with other available optimizers can be explored in the future research. To optimize the QAOA, we employed the layerwise learning (LL) protocol introduced in Ref. <cit.>. In comparison to complete depth learning (CDL), LL proved to be advantageous in reducing the optimization cost, particularly as the number of qubits and circuit depth increases. It also helps mitigate the likelihood of barren plateaus (BP) <cit.>. In short, the LL is a two-part optimization protocol, as illustrated in Fig. <ref>: (A) Progressive pretraining: In the first part (Fig. <ref>), we construct the QAOA ansatz by gradually adding layers. Initially, we train and optimize over the leading few layers (typically two layers). Then, for a p-layer QAOA simulation, we freeze the parameters in the first (p-1)-th layers, obtained from previous simulations, and exclusively optimize the parameters in the p-th layer. Optimal parameters of the current layer that yield the lowest cost expectation are selected.[Note the initial values for the parameters of the p-th layer are zero. If no lower cost is found at the p-th layer compared with previous costs, we use zeros for the parameters of that layer. In this way, the cost is always non-increasing over the entire simulation.] This progressive optimization protocol proves to be efficient and leads an increasingly optimized solution as the number of layers increases. It also reduces the computational cost in parameter searching for very thick layers. We denote this protocol with the letter A and an integer to indicate the depth being optimized. (B) Randomized retraining: In the second part (Fig. <ref>), we take the pre-trained QAOA ansatz from part (A) and randomly select a larger portion of the parameters to be trained at a time. 
Typically, we free 50% of the parameters in each iteration of retraining. Although more computationally expensive, this retraining is still less costly than the CDL, and allows us to train the QAOA ansatz as a whole, mitigating the risk of getting trapped in local minima, which could occur when using the protocol of part (A) exclusively. We use the protocol name B with a number to indicate which iteration of retraining is being conducted. It should be mentioned that there are also other variations to the LL, such as sequential blockwise learning used in Ref. <cit.>, where one block/layer is optimized at a time while fixing all other blocks. Layerwise learning may also be prone to systematic layer saturations <cit.> that require special treatments, which we leave for a future study. For the numerical results presented in this work, we always use the LL optimization protocol as its computational cost and solution accuracy consistently outweigh those of CDL. In Fig. <ref>, we show an example of the layerwise learning applied to the QAOA with the X mixer, showing the optimization in both one protocol step and the full LL procedure; similar performance are also found for other mixers. § NUMERICAL RESULTS With both the TSP optimization and QAOA method introduced, we perform numerical quantum simulation on the IBM Quantum QASM simulator using . The problem and mixer Hamiltonian operators are constructed using library. For the circuit implementations of the three mixers, we use Pauli- and Pauli- gates for the X mixer and the XY mixer, and use the library for the XY mixer. We focus on quantum simulations using the layerwise learning protocol for 3-, 4-, and 5-city TSPs on a sufficiently powerful local Ubuntu machine[Ubuntu 22.04.2 LTS machine using 1 CPU core with 32.0 GB memory and Intel i9 processor of 3.50 GHz. ], and compare their performances in terms of numerical accuracy and resource costs. To obtain converged results, we always use a sufficient number of TSP instances, varying from 7 to 10 graphs depending on the number of cities, mixer, and simulated noise, each with 5-10 repeated runs of quantum simulation. §.§ Simulation accuracy We follow the LL optimization protocol introduced in Sect. <ref> and use (n-1)^2 qubits based on the improved TSP formulation (Eq. (<ref>)) for each quantum simulation with n cities. In Fig. <ref>, we present the QAOA simulation results when solving various instances of 4-city and 5-city TSPs using the X, XY, and XY mixers. The performance is evaluated with three criteria: (a) approximation ratio (AR), (b) percentage of the true solution, and (c) rank of the true solution. The two-part LL optimization is indicated by letters A and B, followed by the specific depth and iteration numbers, respectively. We use a sufficient number of layers in the QAOA simulation (4 layers for 3-city cases and 6 for 4-/5-city cases) to ensure convergence. The uncertainty bars depicted in Fig. <ref> represent the standard deviations of the respective results calculated for various TSP graph instances. A comprehensive comparison of all the results can be found in Table. <ref>, which includes the results for the 3-city TSP simulations as well. (a) Approximation ratio (AR): Expectation cost, or equivalently AR, is the primary observable measured during the quantum simulation. It directly influences the classical optimizer's ability for finding the optimal parameters. Fig. <ref> and Fig. 
<ref> demonstrate that both pretraining and retraining parts of the LL are necessary to optimize the AR for various TSP instances. Among the three types of QAOA mixers, the RS mixer achieves the lowest AR, reaching values as low as 1.01±0.01 (4-city case) and 1.18±0.14 (5-city case). On the other hand, the X mixer performs the poorest, particularly as the problem size increases, partially due to the limitations of the ansatz's expressibility. It is worth noting that even the heuristic VQE ansatz outperforms the X mixer in the 4-city case, with a lower AR around 2.19±0.37 compared to 2.33±0.83 (see Table. <ref>). With consideration of temporal constraints during construction, the XY mixer exhibits intermediate performance, with AR values of around 1.44±0.23 and 1.89±0.66 for 4- and 5-city TSPs, respectively. (b) True percentage: The percentage of the true solution is also known as the overlap between the quantum state and the expected true solution. While the true percentage is determined only after the simulation, it is desirable to have it as large as possible for the accurate extraction of the optimal solution. In Fig. <ref> and Fig. <ref>, we present the true percentages for the three mixers as the TSP problem size increases. Undeniably, RS is the dominating mixer, reaching around 96.3±3.8% and 41.1±29.5%; however, the large uncertainty suggests a highly unstable pattern in the obtained solution; see App. <ref> for an explanation. On the other hand, the X mixer gives the lowest percentages, reflecting a poor performance in accurately identifying the true solution. Lastly, the XY mixer is again holding a middle ground, with true percentages of approximately at 36.6±5.7% and 7.4±0.6% respectively. (c) Rank: The rank of the true solution specifies how many other states possessed a higher probability than the state corresponding to the true solution, and is a crucial indicator of the simulation's accuracy. Achieving a rank of 1 for the true solution signifies consistent identification of the correct solution, as it means the quantum state with the highest probability is always the true solution's state (so we want to have a rank as low as possible). The results are presented in Fig. <ref> and Fig. <ref>. In the case of the X mixer, it exhibits a significantly high rank, indicating a low likelihood of picking the correct solution among the top quantum states. On the other hand, the ranks of the XY and RS mixers are comparable, both reaching around rank-1 for 4-city TSPs and around rank-2 for 5-city TSPs. Notably, for the 5-city case, we observe lower ranks of the XY mixer in the early stages compared to the final stages, showcasing the effectiveness of the XY mixer in even shallower QAOA for certain TSP instances. Based on the observations in AR, true percentage, and rank, several conclusions can be made. First, we can see that X mixers consistently underperform in all three criteria, compared to the other two mixers. This behavior is expected because the Hadamard initialization produces a uniform superposition of all possible states, i.e., 2^16 states in the 5-city case, without any constraints on the solution. As a result, it becomes challenging for the classical optimizer to filter out the invalid and false solutions based solely on the problem Hamiltonian. Particularly when the problem size increases, the X mixer alone is not suitable for the QAOA simulation of the TSP. 
Secondly, we observe that the RS mixer stands out as the dominating mixer in terms of AR and true percentage, which makes it a reliable candidate for QAOA. In terms of ranks, the performances of the XY and RS mixers are quite similar. The strategies employed by the two mixers are very different: the RS mixer relies heavily on the expressibility of the mixer itself, while the XY mixer combines the initialization and the mixing Hamiltonian to achieve its results. By utilizing a single-bit string as the initial state, RS may potentially overlook the benefits of having superposition states in a quantum simulation. In a sense, the XY mixer takes a more balanced approach, whereas the RS mixer takes a more assertive approach; this distinction between the two mixers can have implications for the resource cost, which will be discussed in the following section. §.§ Resource evaluations Besides numerical accuracy, resource cost estimation is another crucial factor to consider in quantum simulation, as any computational resource is always finite. On the quantum computer and simulator, many factors will contribute to the performance of the simulation, including attributes of the transpiled quantum circuits, such as the number of qubits, the number of single-qubit (double-qubit) gates, and the quantum circuit depth. In a practical calculation, properties of the quantum device, such as qubit connectivity, coherent error, and incoherent noise, will also come into play. For this section, we focus on the quantum circuits of the three QAOA mixers and compare their resource costs on ideal devices; practical calculation will be discussed in the subsequent section using noisy simulation. In Table. <ref>, we compare the properties of the quantum circuits of the three mixers after transpilation for both finite and generic TSP cases. As expected, the complexity of the circuit, measured in terms of quantum gates and circuit depth, generally increases with the number of cities, resulting in a longer simulation time. Notably, the RS mixer incurs a significantly higher resource cost compared to the X and XY mixers, as reflected in the simulation time in practice. As discussed earlier, this increased cost is primarily attributed to the utilization of four-qubit gates in the RS mixer, leading to a quadratic scaling, i.e., 𝒪(n^4), of single-qubit and double-qubit gates. The abundance of double-qubit gates is anticipated to pose serious challenges in executing the simulation on a real quantum device or when employing a noise model <cit.>. Interestingly, despite requiring fewer resources, the X mixer, actually takes a longer time to run in practice compared to the XY mixer, particularly as the number of qubits increases. This observation is likely due to the computational burden of the optimizer when evaluating the expected cost for a dense superposition of bit strings. On the other hand, the XY mixer requires relatively low computational resources, scaling linearly with the circuit depth and quadratically with the number of quantum gates, which is a more economical choice for running QAOA simulations. Considering both optimization accuracy and computational cost, the XY mixer emerges as a more balanced choice for the QAOA. Nonetheless, a resource cost of 𝒪(n^2) gates and qubits for the XY mixer is still quite expensive as n increases. Notably, building the XY mixer at the pulse level <cit.> has the potential to further enhance its numerical performance. 
Lastly, it should be acknowledged that the resource costs of all mixers would be even higher when simulating on current NISQ or future fault-tolerant quantum computers. In the interest of addressing this aspect, we present noise-model simulations in the subsequent section. §.§ Robustness against noise Estimating the performance of the QAOA simulation in the presence of noise is crucial to implementations on NISQ and fault-tolerant devices in the future. In this section, we employ the NoiseModel class from Qiskit to study the sensitivity of the simulation on different noise levels. In particular, we focus on noisy QAOA simulations with XY and RS mixers for the same set of 4-city TSP problems. In Fig. <ref>, we compare the performance of various noise simulations in terms of AR, true percentage, and rank. We consider noise models with different degrees of single-qubit errors: 0.005%, 0.01%, 0.05%, and 0.1%. Besides single-qubit errors, we set the double-qubit errors to be ten times their respective single-qubit errors, which is a reasonable approximation for realistic two-qubit gates such as the CX-gate. For the current study, we have omitted other potential errors for simplicity, such as the qubit connectivity and thermal relaxation time, which can also be implemented with the noise model. From the results presented in Fig. <ref>, it is evident that the qubit errors in the noise model directly impact the quality of the simulation. As expected, QAOA simulations with larger errors perform poorly compared to smaller ones. Interestingly, there seems to be a noise threshold in the simulation results: noisy simulations with error less than or equal to 0.01% exhibit qualitatively different behavior compared to those with higher errors, as shown in Fig. <ref>, Fig. <ref>, and Fig. <ref>. Additional details of the noisy simulation are provided in Table. <ref>, where we can clearly see that the LL protocol fails to optimize the QAOA simulation at 0.1% and 0.05% noise levels. Comparing the two ansatzes, the XY mixer outperforms the RS mixer in all indicators for all noisy simulations. Surprisingly, the XY mixer achieves performance similar to the ideal simulation with errors less than or equal to 0.01%, indicating its potential resilience against simulation noise. Our result suggests that the XY mixer is a more suitable choice among the three mixers when considering noise effects in the QAOA simulation. §.§ Problem dependence It is important to investigate the problem dependence of the QAOA simulation of the TSP in preparation for the full-fledged quantum simulation. Here, we study several TSP problem dependencies, such as the topology of the TSP graphs and the penalty weight. The topology of the TSP graph could potentially have a significant impact on the performance of the quantum simulation algorithm. One characteristic we consider is the "skewness" of the TSP graphs, which represents the level of asymmetry. To measure the skewness, we analyze the distribution of the edge weights ω_ij in the graph using Fisher-Pearson's moment coefficient <cit.>. Specifically, we calculate the skewness parameter g by g = m_3/m_2^3/2, m_k=∑_0 ≤ i<j<n(ω_ij-ω)^k/|ω|, where ω is the mean of the edge weights and |ω| is total number of edges in the graph. Here, m_3 is the third moment of the edges, and m_2 is the variance, the square of the standard deviation. Intuitively, the skewness can also be computed as the average value of the cubed z-scores. 
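The Fisher-Pearson coefficient defined above is straightforward to evaluate for any TSP graph; the following short sketch (ours, using NumPy and a toy random graph) computes g = m_3/m_2^{3/2} over the upper-triangular edge weights, matching the sum over 0 ≤ i < j < n.

```python
import numpy as np

def edge_skewness(w):
    """Fisher-Pearson moment coefficient g = m3 / m2^(3/2) of the edge weights.

    w is the symmetric (n, n) distance matrix; only the n(n-1)/2
    upper-triangular edges enter the moments m_k.
    """
    edges = w[np.triu_indices_from(w, k=1)]
    dev = edges - edges.mean()
    m2, m3 = np.mean(dev ** 2), np.mean(dev ** 3)
    return m3 / m2 ** 1.5

# Toy example: four random cities (not one of the paper's TSP instances).
rng = np.random.default_rng(0)
pts = rng.exponential(scale=1.0, size=(4, 2))
w = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(round(edge_skewness(w), 3))
```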
For instance, a skewness value of 0 indicates a symmetric/normal distribution of the edge, and skewness values of greater than 1 or less than -1 typically indicate highly-skewed distributions. Negative (positive) skewness indicates a left-skewed/right-leaning (right-skewed/left-leaning) distribution. In Fig. <ref>, we present the quantum simulation using X, XY, and RS mixers on various 4-city TSP graphs with varying skewnesses. Here, we focus on the approximation ratio in the final step of layerwise learning to assess the dependence on the TSP graph’s skewness. Taking every bit string solution into account, AR represents the overall effectiveness of the simulation, which is suitable to analyze the skewness. We observe that the simulation tends to perform less effectively with right-skewed edge distributions, possibly due to the presence of more low-weight edges in positively-skewed graphs. Further investigations that include sampling uncertainties are necessary to fully study the consequences of varying graph topology for TSPs with more cities. Additionally, the penalty weight λ in the TSP cost equation (Eq. (<ref>)) is essential to examine, for it directly controls the gaps between valid and invalid solutions. By a similar analysis for the skewness, we find that the simulation performs optimally when λ is in the range of [1.0 E_G,max, 4.5 E_G,max], where E_G,max represents the maximum TSP edge weight. This analysis further supports the choice of the penalty weight used in this study. § SUMMARY AND DISCUSSIONS In this paper, we solved the symmetric TSP (Traveling Salesman Problem) as an optimization problem by using three distinct ansatzes to the QAOA (Quantum Approximate Optimization Algorithm) approach. By adopting a layered learning optimization protocol, we performed numerical quantum simulations on gate-based quantum simulators for various 3-, 4-, and 5-city TSPs. In particular, we presented and compared the performance of the three types of mixer ansatzes for the QAOA: the X mixer, the XY mixer, and the RS mixer. For the few-city TSPs studied in this work, we demonstrated that a well-balanced quantum simulation, such as using the XY mixer, is potentially more suitable in terms of both numerical accuracy and computational cost. These findings are further validified through the noise model simulations. Additionally, we highlighted other factors that may play a role in the quantum simulation, such as the TSP graph skewness and cost function penalty. Our research is a significant step towards finding a successful strategy for the TSP optimization problem using the gate-based QAOA approach, which holds a particular interest in the current NISQ paradigm. The QAOA simulation complements traditional quantum annealing methods in the infinite time region, where efficient qubit reduction techniques, improved optimization protocols, and resource-efficient mixer ansatzes investigated in this work are expected to be valuable for realistic quantum device simulations. Moving forward, we plan to extend our investigations to larger-city TSPs, employing deeper QAOA circuits on noisy quantum backends. By utilizing an adaptive shot-frugal optimizer <cit.> and implementing digitized-counterdiabatic quantum approximate optimization methods <cit.>, we aim to further enhance the accuracy and efficiency of our TSP simulations. § ACKNOWLEDGEMENTS We wish to thank M. Li, I. Chen, M. Kreshchuk, and S. Pal for valuable discussions. We acknowledge the use of IBM Quantum services for this work. 
The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. This work is supported by the US Department of Energy (DOE), Office of Science, under Grant Nos. DE-FG02-87ER40371, DE-SC0023692, and DE-SC0023707 (Quantum Horizons/NuHaQ), and the Palmer Department Chair Endowment at Iowa State University. WQ is also supported by Xunta de Galicia (Centro singular de investigacion de Galicia accreditation 2019-2022), European Union ERDF, the “Maria de Maeztu” Units of Excellence program under project CEX2020-001035-M, the Spanish Research State Agency under project PID2020-119632GB-I00, the European Research Council under project ERC-2018-ADG-835105 YoctoLHC, and the European Union's MSCA Postdoctoral Fellowships HORIZON-MSCA-2022-PF-01 under Grant Agreement No. 101109293. § PAULI GATES The Pauli gates are defined as I = [ 1 0; 0 1 ], X = [ 0 1; 1 0 ], Y = [ 0 -i; i 0 ], Z = [ 1 0; 0 -1 ], with their subscripts indicating the gate's qubit location when included. § COMPARISON OF THE THREE MIXERS ON A SINGLE TSP INSTANCE We compare the performance of the X, XY, and RS mixers using the “total percentage" plot on the same TSP graph in Fig. <ref>. As explained in the main text, the total percentage of solutions to the TSP consists of true, false, and invalid solutions obtained after the simulation. The respective rank of the true solution is also presented at the top of each step in the layerwise learning procedure. We see that the ranks are extraordinarily high for the X mixer. For the RS mixer, it can be situational whether we obtain the true solution as the most dominant one; therefore we show the results obtained from two different simulations with the RS mixers in panels (c) and (d), which explains the high standard deviations shown in Fig. <ref> and Fig. <ref>. The XY result, by contrast, is less sensitive to the simulation. More examples of the total percentage plots for the three mixers can be found in Ref. <cit.>.
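The Pauli matrices listed in the appendix also make it easy to check numerically the two-qubit identity used for the XY and RS mixers, SWAP = (XX + YY + ZZ + II)/2, and to see that the XX + YY part alone only mixes the |01⟩ and |10⟩ amplitudes. The following NumPy check is ours and is included purely as a verification aid.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

kron = np.kron
swap = 0.5 * (kron(X, X) + kron(Y, Y) + kron(Z, Z) + kron(I, I))

# SWAP exchanges the two qubits: |01> <-> |10>, while |00> and |11> are unchanged.
expected = np.array([[1, 0, 0, 0],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1]], dtype=complex)
assert np.allclose(swap, expected)

# The XY part alone is Hamming-weight preserving: it only couples |01> and |10>.
print(np.real_if_close(kron(X, X) + kron(Y, Y)))
```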
http://arxiv.org/abs/2307.04063v1
20230708235625
Symmetry energy and neutron star properties constrained by chiral effective field theory calculations
[ "Yeunhwan Lim", "Achim Schwenk" ]
nucl-th
[ "nucl-th", "astro-ph.HE", "nucl-ex" ]
[E-mail: ][email protected] Department of Physics, Yonsei University, Seoul 03722, South Korea [E-mail: ][email protected] Technische Universität Darmstadt, Department of Physics, 64289 Darmstadt, Germany ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung GmbH, 64291 Darmstadt, Germany Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany
We investigate the nuclear symmetry energy and neutron star properties using a Bayesian analysis based on constraints from different chiral effective field theory calculations using new energy density functionals that allow for large variations at high densities. Constraints at high densities are included from observations of GW170817 and NICER. In particular, we show that both NICER analyses lead to very similar posterior results for the symmetry energy and neutron star properties when folded into our equation of state framework. Using the posteriors, we provide results for the symmetry energy and the slope parameter, as well as for the proton fraction, the speed of sound, and the central density in neutron stars. Moreover, we explore correlations of neutron star radii with the pressure and the speed of sound in neutron stars. Our 95% credibility ranges for the symmetry energy S_v, the slope parameter L, and the radius of a 1.4 neutron star R_1.4 are S_v=(30.6-33.9) MeV, L=(43.7-70.0) MeV, and R_1.4=(11.6-13.2) km. Our analysis for the proton fraction shows that larger and/or heavier neutron stars are more likely to cool rapidly via the direct Urca process. Within our equation of state framework, a maximum mass of neutron stars M_ max>2.1 indicates that the speed of sound needs to exceed the conformal limit.
Symmetry energy and neutron star properties constrained by chiral effective field theory calculations Achim Schwenk ======================================================================================================
§ INTRODUCTION Understanding dense matter is a central challenge in nuclear physics and astrophysics. In nature, dense matter exists in the core of neutron stars under extreme neutron-rich conditions. The properties of neutron-rich matter around nuclear densities are described by the nuclear symmetry energy and its density dependence. While there have been impressive constraints from nuclear theory, nuclear experiments, and astrophysics (see, e.g., Refs. <cit.>), more precise determinations of the symmetry energy and its slope parameter L at saturation density, n_0 = 0.16 fm^-3, are still an open problem. From the theoretical side, the symmetry energy is best constrained by controlled calculations of the equation of state (EOS) of neutron matter based on chiral effective field theory (EFT) interactions <cit.>. This yields values for the symmetry energy S_v at saturation density and the L parameter in the range of S_v = (30-35) MeV and L = (35-70) MeV. However, to describe the EOS at all densities in neutron stars requires extensions beyond the reach of chiral EFT calculations. To this end, different extensions, such as piecewise polytropes <cit.>, speed-of-sound based parametrizations <cit.>, nonparametric Gaussian processes <cit.>, or nuclear energy-density functionals (EDFs), have been used (see, e.g., Ref. <cit.>). Recently, new EDFs for the nuclear EOS have been introduced by Huth et al. <cit.>, which have the advantage of providing high-density extrapolations that are consistent with causality and with a maximum of the speed of sound.
These functionals allow for EOS calculations for the broad ranges of conditions reached in core-collapse supernovae and neutron star mergers. In this work, we use these new EDF EOSs to constrain the symmetry energy and neutron star properties based on a prior informed by chiral EFT calculations of neutron matter. From the astrophysics side, the strongest constraint on the nuclear EOS comes from the observation of heavy two-solar-mass neutron stars <cit.>. Moreover, the heaviest well measured neutron stars, PSR J0740+6620, was recently also observed by NICER to provide constraints on its radius <cit.>. In addition, NICER observed the mass and radius of a typical-mass neutron star, PSR J0030+0451 <cit.>. The NICER analyses for both neutron stars by Riley et al. <cit.> and by Miller et al. <cit.> give different mass-radius posteriors, but agree within their uncertainties. The differences in the posteriors are reduced by including realistic assumptions for the EOS, and in this work we explicitly show that in our EDF EOS ensembles the results from both NICER analyses are very similar. In addition to the NICER constraints, we include in our Bayesian inference the tidal deformability information from GW170817 inferred by LIGO/Virgo <cit.>. Using the chiral EFT informed priors with the astro posteriors, we provide results for the symmetry energy and neutron star properties. This paper is organized as follows. In Sec. <ref> we introduce our EOS framework using the new EDFs from Huth et al. <cit.>. These are fit to a range of chiral EFT calculations of neutron matter. Building on this EOS prior, we include constraints at high densities from observations of GW170817 and NICER using a Bayesian analysis. In Sec. <ref>, we investigate the posterior distributions for the symmetry energy and the slope parameter, as well as for the proton fraction, the speed of sound, and the central density in neutron stars. Moreover, we explore correlations of neutron star radii with the pressure and the speed of sound in neutron stars. Finally, we summarize our results and conclude in Sec. <ref>. § EQUATION OF STATE FRAMEWORK The EOS describes the energy density and pressure of matter for given baryon density, composition, and temperature. Since we focus on cold neutron stars, we consider zero temperature. For a given EOS, the mass and radius of neutron stars follow by solving the Tolman-Oppenheimer-Volkoff (TOV) equations <cit.>. Our starting point will be the EOS of homogeneous matter, which we constrain by empirical ranges of the properties of symmetric nuclear matter around saturation density and by neutron matter calculations. Based on each EOS, we the calculate consistently the structure of the neutron star crust. Since neutron stars are extremely neutron rich with proton fractions ∼ 5%, the most important constraints for the EOS come from neutron matter calculations. In this work, we focus on neutron matter calculations based on chiral EFT interactions, which has the advantage that chiral EFT predicts consistent many-body interactions and enables systematic uncertainty estimates based on the EFT expansion <cit.>. Neutron matter has been calculated based on chiral two- and three-nucleon interactions using many-body perturbation theory (MBPT) <cit.>, quantum Monte Carlo (QMC) methods <cit.>, self consistent Green's function (SCGF) methods <cit.>, and coupled cluster (CC) theory <cit.>. 
These calculations are able to include all interactions up to next-to-next-to-next-to-leading order (N^3LO) <cit.> and include uncertainty estimates from the EFT truncation <cit.>. §.§ Energy density functionals To extend the EOS to high density we use nonrelativistic EDFs, which depend on the baryon number density n and proton fraction x of uniform matter. The baryonic energy density ε(n,x) is expressed as ε(n,x) = 1/2m_N τ_n(n,x) + 1/2m_N τ_p(n,x) + (1-2x)^2 f_n(n) + [1-(1-2x)^2]f_s(n) , where τ_n/2m_N and τ_p/2m_N are the neutron and proton kinetic densities, with nucleon mass m_N. It was shown that the dependence on isospin asymmetry is to a very good approximation quadratic <cit.>, with the dominant non-quadratic contributions stemming from the kinetic densities, so that Eq. (<ref>) provides a very good approximation for asymmetric nuclear matter. The functionals f_n(n) and f_s(n) can be chosen to satisfy the constraints from neutron matter calculations and symmetric nuclear matter properties, respectively. For the interaction density functionals, we take the form introduced recently by Huth et al. <cit.> f_n(n) = ∑_j=0^3 a_j n^2+j/3/d_j + n^(j+1)/3 , f_s(n) = ∑_j=0^3 b_j n^2+j/3/d_j + n^(j+1)/3 , where a_j, b_j are fit parameters and d_j = d fm^-1-j with parameter d=3 <cit.>. This corresponds to an expansion of the interaction energy density in powers of the Fermi momentum k_ F∼ n^1/3, and the denominator ensures that the interaction part becomes proportional to n^5/3 at higher densities. Note that without the denominator, the interaction part generally causes the speed of sound to exceed the speed of light beyond some baryon density. For a detailed discussion of these new functionals and the parameter choices, see Ref. <cit.>. §.§ Constraints from neutron matter calculations based on chiral effective field theory For neutron matter constraints we use the MBPT calculations from Ref. <cit.> based on different chiral NN+3N Hamiltonians, including the Hebeler+ interactions <cit.>, the NNLOsim potentials <cit.>, as well as the N^3LO 450 MeV and 500 MeV uncertainty bands <cit.> (using the NN EMN interactions <cit.>). The different neutron matter results and their uncertainties are given by the individual lines shown in Fig. <ref>. We use the individual lines to fit the a_j of the EDF for neutron matter, f_n(n) in Eq. (<ref>), based on the k_ F expansion and d=3. The b_j of the corresponding symmetric matter part, f_s(n), are determined from empirical properties. We fit to the binding energy E/A(n_0)=-16 MeV at saturation density n_0 = 0.16 fm^-3, the incompressibility K=235 MeV, with K = 9 n^2 ^2(E/A)/ n^2(n_0,x=1/2), and the skewness Q = -300 MeV, with Q = 27 n^3 ^3(E/A)/ n^3(n_0,x=1/2). These values are extracted from Skyrme EDFs and constraints for nuclear matter properties <cit.>, see also Ref. <cit.>. Since neutron star properties are not very sensitive to symmetric nuclear matter, we do not vary all nuclear matter properties, but only explore the most uncertain value of Q in the following, see Sec. <ref>. The uncertainties in our EDF EOSs are reflected in the covariance matrix of x⃗=(a⃗, b⃗) defined as C_jk = 1/∑_i w_i∑_i w_i (x_j^i -⟨ x_j ⟩)(x_k^i -⟨ x_k ⟩) , where x_j^i is the set of fit parameters (a_j, b_j) for the i-th individual EOS, ⟨ x_j ⟩ represents the average of x_j, and w_i is the weight for each EOS. Since we do not vary the symmetric nuclear matter properties, in this work C_jk is 4 × 4 matrix for the a_j from the neutron matter EOSs only. 
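The construction just described, an interaction functional in powers of k_F plus a free Fermi-gas kinetic term, with the fit parameters summarized by a mean vector and covariance matrix, can be sketched compactly in code. The snippet below is our own illustration: the a_mean and cov values are schematic placeholders tuned only to give a roughly reasonable energy per neutron near saturation density, not the values fitted in the paper, and the units (n in fm^-3, energies in MeV) are kept implicit.

```python
import numpy as np

HBARC, M_N = 197.327, 939.565          # MeV fm, neutron mass in MeV

def e_per_neutron(n, a, d=3.0):
    """Energy per particle of pure neutron matter for the k_F-expansion EDF (sketch).

    Kinetic term: free Fermi gas.  Interaction energy density:
      f_n(n) = sum_{j=0..3} a_j n^(2+j/3) / (d_j + n^((j+1)/3)),  d_j = d fm^(-1-j),
    so that f_n -> n^(5/3) at high density.
    """
    n = np.atleast_1d(np.asarray(n, dtype=float))
    kf = (3.0 * np.pi ** 2 * n) ** (1.0 / 3.0)
    e_kin = 0.6 * (HBARC * kf) ** 2 / (2.0 * M_N)
    j = np.arange(4)[:, None]
    f_n = (a[:, None] * n ** (2 + j / 3.0) / (d + n ** ((j + 1) / 3.0))).sum(axis=0)
    return e_kin + f_n / n                                   # MeV per neutron

# Ensemble sketch: draw parameter sets from a fitted mean and covariance.
# The numbers below are illustrative toy values, NOT the paper's fit results.
a_mean = np.array([-430.0, -50.0, 60.0, 0.0])
cov = np.diag([15.0, 30.0, 40.0, 30.0]) ** 2
rng = np.random.default_rng(42)
samples = rng.multivariate_normal(a_mean, cov, size=1000)
dens = np.linspace(0.02, 0.32, 31)
band = np.array([e_per_neutron(dens, a) for a in samples])
lo, hi = np.percentile(band, [2.5, 97.5], axis=0)            # 95% credibility band
print("E/N at n0 = 0.16 fm^-3:", lo[14].round(1), "-", hi[14].round(1), "MeV")
```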
In the initial set given by the 17 neutron matter EOSs, the weights are w_i=1, but once we implement the Bayesian inference below, the weights become w_i < 1. With the average ⟨ x_j ⟩ and the covariance matrix C_jk, a multivariate normal distribution can be used to generate an EOS ensemble based on our EDF EOSs. We note that the statistical uncertainties from this EOS ensemble have of course a prior sensitivity to the initial set of individual EOSs. The resulting EDF EOS ensemble based on the multivariate normal distribution is shown in Fig. <ref> with the 95% credibility region in comparison to the individual EOSs based on MBPT calculations of neutron matter. The ensemble is based on 100,000 EOSs generated using the EDF, Eqs. (<ref>) and (<ref>), from the average ⟨ x_j ⟩ and the covariance matrix C_jk based on the individual neutron matter MBPT EOSs. The agreement between the band and the individual lines in Fig. <ref> indicates that the EDF EOS ensemble employed in this work can generalize chiral EFT results within their uncertainties. Moreover, we compare the EDF EOS ensemble to the unitary gas constraint <cit.> and observe in Fig. <ref> that it is nicely fulfilled by our EOSs.
§.§ Bayesian modelling We incorporate the astrophysics constraints on the EOS by applying Bayes' theorem, so that the posterior distribution results from the combination of the prior and the likelihood, P(a⃗| D) = P(D|a⃗)P(a⃗)/∫ da⃗ P(D|a⃗)P(a⃗) . Here, P(a⃗) represents the EOS prior given by the EDF parameter space obtained from the neutron matter calculations and symmetric nuclear matter properties, and D stands for the astrophysical data, so that P(D|a⃗) is the likelihood or conditional probability of obtaining D for a given EDF with parameter set a⃗. In our study, we include the astrophysical observations of GW170817 and NICER to constrain the EOS at higher densities. For the NICER mass-radius constraints for PSR J0030+0451 and PSR J0740+6620 we consider separately either the Amsterdam analysis of Riley et al. <cit.> or the Illinois/Maryland analysis of Miller et al. <cit.>. The heaviest neutron star mass of 2.08 ± 0.07 <cit.> is thus directly implemented through the NICER M-R information of PSR J0740+6620. Folding in the NICER constraints based on our prior leads to the likelihood for the EDF parameters <cit.> P(NICER|a⃗) = ∫ dM dR P(M,R) ×δ(M-M(a⃗)) δ(R-R(a⃗)) , where M(a⃗) and R(a⃗) denote the M-R relation for a given EDF EOS with parameter set a⃗ and P(M,R) is the M-R posterior distribution for each of the two NICER sources. The integral is carried out by discretizing the M-R space, summing over all bins that are crossed by the M(a⃗)-R(a⃗) relation, and weighting those bins with the NICER posterior for each of the sources successively. In addition to NICER, we use the tidal deformability information from GW170817 inferred by LIGO/Virgo <cit.>, P(LIGO|a⃗) = ∫ dM_1 dΛ_1 dM_2 dΛ_2 P(M_1,Λ_1,M_2,Λ_2) ×δ(M_1-M_1(a⃗)) δ(Λ_1-Λ_1(a⃗)) ×δ(M_2-M_2(a⃗)) δ(Λ_2-Λ_2(a⃗)) , where P(M_1,Λ_1,M_2,Λ_2) is the posterior distribution from LIGO/Virgo. We assume that the NICER and GW170817 analyses are independent of each other, so that, combining both constraints, the likelihood is given by P(D|a⃗) = P(NICER|a⃗) P(LIGO|a⃗) . Multiplying the combined likelihood with the prior P(a⃗) and the normalization constant given by the integral in the denominator, we obtain the posterior distribution P(a⃗| D) for a given EDF EOS with parameter set a⃗.
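In practice, this weighting amounts to evaluating the published two-dimensional M-R posteriors along each EOS's mass-radius curve and multiplying the NICER and GW170817 contributions. The sketch below shows one possible discretized implementation; it is our own illustration, and all inputs (the gridded posteriors, the eos dictionary, and the gw_likelihood callable) are hypothetical stand-ins for the released data products, not interfaces used in the paper.

```python
import numpy as np

def nicer_likelihood(m_curve, r_curve, m_grid, r_grid, post_2d):
    """P(NICER | a) for one EOS: sum the pulsar's binned 2D M-R posterior
    over all (M, R) bins crossed by the EOS's mass-radius curve.

    m_curve, r_curve : M(a), R(a) sampled along the stable branch
    m_grid, r_grid   : bin edges of the posterior histogram
    post_2d          : posterior probability per bin, shape (len(m_grid)-1, len(r_grid)-1)
    """
    im = np.digitize(m_curve, m_grid) - 1
    ir = np.digitize(r_curve, r_grid) - 1
    ok = (im >= 0) & (im < post_2d.shape[0]) & (ir >= 0) & (ir < post_2d.shape[1])
    bins = set(zip(im[ok], ir[ok]))            # each crossed bin contributes once
    return sum(post_2d[i, j] for i, j in bins)

def posterior_weight(eos, nicer_posteriors, gw_likelihood):
    """w ∝ P(D|a): product of the two NICER terms and the GW170817 term."""
    w = 1.0
    for m_grid, r_grid, post_2d in nicer_posteriors:         # e.g. J0030 and J0740
        w *= nicer_likelihood(eos["M"], eos["R"], m_grid, r_grid, post_2d)
    return w * gw_likelihood(eos)   # e.g. a KDE of (M1, L1, M2, L2) evaluated on the EOS

# Toy demonstration with a synthetic Gaussian posterior (not real NICER data).
m_grid, r_grid = np.linspace(1.0, 2.4, 71), np.linspace(9.0, 15.0, 121)
mm, rr = np.meshgrid(0.5 * (m_grid[:-1] + m_grid[1:]),
                     0.5 * (r_grid[:-1] + r_grid[1:]), indexing="ij")
post = np.exp(-0.5 * ((mm - 1.44) / 0.15) ** 2 - 0.5 * ((rr - 12.7) / 1.1) ** 2)
post /= post.sum()
m_c = np.linspace(1.0, 2.1, 200)
r_c = 12.0 + 0.3 * np.sin(m_c)                               # fake M-R curve
print(nicer_likelihood(m_c, r_c, m_grid, r_grid, post))
```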
§ RESULTS Next we present our results for the properties of neutron stars and the symmetry energy based on the EOS framework developed in the previous section. This combines the information from neutron matter based on chiral EFT interactions, with empirical properties of symmetric nuclear matter, as well as astrophysical constraints from GW170817 and NICER using a family of EDFs for nucleonic matter. Since matter in neutron stars is very neutron-rich, we have focused more on the propagation of the theoretical uncertainties in our knowledge of neutron matter. An advantage of our EOS framework is that we use the same EDF to construct the crust and core EOS for neutron stars. In the following, we present our results for the neutron star mass and radius, the proton fraction, the speed of sound, and the central density in neutron stars. We also provide results for the symmetry energy and the slope parameter and explore correlations of neutron star radii with the pressure and the speed of sound in neutron stars. §.§ Mass-radius relation The mass and radius of neutron stars are obtained by solving the TOV equations for nonrotating stars. Figure <ref> shows the 95% credibility regions for the mass M and radius R generated from the multivariate normal distribution for the EDF EOSs based on an ensemble of ∼ 10^5 EOS. The top panel shows the prior distribution for the k_ F expansion using different values of d=1,3,5,7, and d=∞. The middle and lower panels show the posterior distribution including astrophysics information from GW170817 and the NICER analysis of Riley et al. <cit.> or the NICER analysis of Miller et al. <cit.>, respectively. Our results show that the posterior distributions obtained from the two different NICER analyses are very similar once the nuclear physics information is encoded in the EOS framework. Regarding the different EDF choices, we find that the d=3 distribution is similar to the case of d=5, 7, and d=∞. However, large d, and in particular d=∞ allows for the speed of sound to become acausal, c_s^2 > 1 (in units with the speed of light c=1), as the density increases, which is not the case in either neutron or symmetric matter for d=3 by construction. In addition, as d=1 makes the interaction energy density rapidly behave like n^5/3, the EOS becomes soft at rather low densities compared to the larger d values. As a result the 95% credibility regions for mass and radius only extend slight above 2. Therefore, in the following, we will show results only for the EDF EOSs with d=3. Before doing so, we also list the radius ranges of typical 1.4 and 2 neutron stars to show the rather minor sensitivity to the choice of d (see Table <ref>). In Table <ref> we give the prior and posterior ranges for the radius R_1.4 of a 1.4 neutron star at 95% (± 2σ) and 68% (± 1σ) credibility as well as the most likely radius for the EDF EOS ensembles with the k_ F expansion and different d values. For d=3, the 95% credibility prior range is R_1.4 = (9.87-13.19) km. Including the astrophysics information from GW170817 and the NICER analysis of Riley et al. <cit.> gives for 95% credibility posterior range R_1.4=(11.57-13.17) km while with the Miller et al. <cit.> analysis R_1.4=(11.65-13.23) km, or the combined range R_1.4=(11.6-13.2) km. Both NICER analyses thus give very similar posterior ranges with the result based on Miller et al. shifted to slightly larger radii. 
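All of the mass-radius regions discussed above follow from integrating the TOV equations for each sampled EOS. As a rough illustration of that step, and not the solver used in the paper, the sketch below integrates the TOV equations with a fixed-step Euler scheme in CGS units; the Γ = 2 polytrope and the value of K are placeholders standing in for the tabulated EDF equations of state.

```python
import numpy as np

G, C, MSUN = 6.674e-8, 2.998e10, 1.989e33        # CGS units

def tov_mr(eps_of_p, p_c, dr=100.0):
    """Integrate the TOV equations outward for one central pressure p_c (erg/cm^3)
    and return (M [Msun], R [km]).  eps_of_p maps pressure to energy density;
    simple fixed-step Euler, for illustration only.
    """
    r, m, p = dr, 0.0, p_c
    while p > 1e-10 * p_c:
        eps = eps_of_p(p)
        dm = 4.0 * np.pi * r ** 2 * eps / C ** 2 * dr
        dp = (-G * (eps + p) * (m + 4.0 * np.pi * r ** 3 * p / C ** 2)
              / (C ** 2 * r ** 2 * (1.0 - 2.0 * G * m / (C ** 2 * r)))) * dr
        m, p, r = m + dm, p + dp, r + dr
    return m / MSUN, r / 1e5

# Toy Gamma = 2 polytrope, P = K * eps^2, standing in for a tabulated EOS.
K = 5.0e-38                                      # cm^3/erg, illustration only
eps_of_p = lambda p: np.sqrt(p / K)
for eps_c in (5e35, 1e36, 2e36):                 # central energy densities, erg/cm^3
    print(tov_mr(eps_of_p, K * eps_c ** 2))
```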
Overall, the radius range decreases by over 50% from 3.3 km for the prior to 1.6 km for the combined posterior, mainly by disfavoring the smaller radii in the prior range. Moreover, in the prior distribution for d=3, 72% EOSs have a maximum mass of neutron stars greater than 2.0, while for the posterior distribution, 97% (98%)EOSs have a maximum mass above 2.0 using the NICER analysis of Riley et al. <cit.> (Miller et al. <cit.>). In Fig. <ref>,we show the color-coded prior and posterior distributions for the case of d=3. In both posterior distributions, the most probable radii for neutron stars between 1.0 and 1.8 vary only within 0.3 km. Moreover, the mass and radius distribution for M>2.0 is very similar between for the prior and the two posteriors, because the astrophysics information mainly removes EOSs that give low maximum mass and small radii. Table <ref> gives the prior and posterior ranges for the radius R_2.0 of a 2.0 neutron star for the EDF EOSs with d=3. The prior distribution shows a wider radius range because it does not include information of a massive neutron star. Again the two posterior ranges for R_2.0 are very similar and merely shifted by less than 100 m. In the case of d=3, the maximum mass of neutron stars among the ∼ 10^5 EOS ensemble reaches up to 2.23, while it can go up to 2.32 for d=∞. §.§ Symmetry energy and L parameter We can also extract the symmetry energy S_v and the slope parameter L from our calculations. This is shown in Fig. <ref> for the individual MBPT calculations for the different chiral NN+3N Hamiltonians from Ref. <cit.> as points, where the dashed (solid) line connects the 500 (450) MeV cutoff N^3LO results. As discussed, our EOS EDF ensembles are built from all the different chiral NN+3N results. The resulting 95% prior and posterior distributions are shown for the EDF EOS ensemble with the k_ F expansion and d=3. We find that the prior range for S_v and L is narrowed to larger values with the astrophysics constraints included. For both NICER analyses the posteriors are again very similar. The 95% distributions can be parametrized by the mean values and the covariance matrix. For the prior distribution these are given by (mean values in MeV and convariance matrix in MeV^2): ⟨ S_v, L ⟩ = (31.96, 51.70) , Σ_S_v,L = [ 0.79 6.73; 6.73 75.11 ] , while the posterior distributions for the astrophysical inferences are given for the Riley et al. <cit.> and Miller et al. <cit.> analysis, respectively, ⟨ S_v, L ⟩ = (32.23, 56.33) , Σ_S_v,L^ Riley= [ 0.66 4.56; 4.56 40.02 ] , and ⟨ S_v, L ⟩ = (32.31, 57.31) , Σ_S_v,L^ Miller = [ 0.64 4.43; 4.43 40.43 ] . We observe that the astrophysics constraints move the posterior distributions to larger S_v and L values within the prior range. Moreover, all MBPT calculations for the different chiral NN+3N Hamiltonians are still largely within the posterior range, but some of them only borderline. This points to that astrophysics prefers EOSs on the stiffer part of the neutron matter EOS band based on chiral EFT. This is consistent with the EOS findings in Ref. <cit.>. In Fig. <ref> we also show the GP-B results at N^3LO from Ref. <cit.>. Since the GP-B contours are based on the same N^3LO 500 (450) MeV results <cit.> included in our analysis, we can trace the difference between the GP-B countours and the N^3LO points to the evaluation of S_v and L for the correlated range of 95% of the calculated saturation density, while our distributions are at a fixed reference saturation density n_0=0.16 fm^-3. 
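Because the S_v-L posteriors are summarized by their means and covariance matrices, the corresponding credibility ellipses can be redrawn directly from the numbers quoted above. The following sketch (ours) does this for the posterior obtained with the Riley et al. NICER analysis; the 95% scaling uses the chi-squared quantile for two degrees of freedom.

```python
import numpy as np

def credibility_ellipse(mean, cov, level=0.95, npts=200):
    """Boundary of the 'level' credibility ellipse of a bivariate normal."""
    r2 = -2.0 * np.log(1.0 - level)          # chi^2 quantile for 2 degrees of freedom
    vals, vecs = np.linalg.eigh(np.asarray(cov, dtype=float))
    phi = np.linspace(0.0, 2.0 * np.pi, npts)
    circle = np.stack([np.cos(phi), np.sin(phi)])
    return np.asarray(mean, dtype=float)[:, None] + vecs @ (np.sqrt(r2 * vals)[:, None] * circle)

# Posterior mean and covariance quoted above (Riley et al. NICER analysis).
mean_SvL = [32.23, 56.33]                    # MeV
cov_SvL = [[0.66, 4.56], [4.56, 40.02]]      # MeV^2
ellipse = credibility_ellipse(mean_SvL, cov_SvL)
print("S_v extent:", ellipse[0].min().round(1), "-", ellipse[0].max().round(1), "MeV")
print("L   extent:", ellipse[1].min().round(1), "-", ellipse[1].max().round(1), "MeV")
```

Note that the extent of the joint 95% ellipse along each axis is slightly wider than the one-dimensional 95% intervals quoted for the individual parameters, as expected when projecting a joint credibility region onto its marginals.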
Since the L parameter scales linearly with the density, this difference in the reference density mainly affects the L value, while the range of symmetry energies is broadened due to the additional uncertainty in the calculated saturation density. Finally, we compare our 95% posterior distributions in Fig. <ref> with the recent results from Essick et al. <cit.>, which are, however, 90% contours. These are based on a different set of chiral NN+3N calculations and astrophysics constraints through a more general Gaussian process extension to high densities. Nevertheless, both contours (at the same reference saturation density n_0) are remarkably consistent. §.§ Proton fraction The ground state of neutron star matter is obtained by solving the condition for beta equilibrium, μ_n = μ_p + μ_e , where the neutron, proton, and electron chemical potentials μ_n, μ_p, and μ_e are given by μ_n = ∂ε/∂ n_n, μ_p = ∂ε/∂ n_p, μ_e = ∂ε/∂ n_e , with total energy density ε. Since the core is composed of uniform nuclear matter, Eq. (<ref>) is straightforward to evaluate for a given EDF. For the crust EOS, where matter exists in inhomogeneous form, we employ the liquid drop model (LDM) <cit.> using the same EDF to construct the EOSs of the inner and outer crust. In the inner crust, the total energy density including the electron contribution is given by <cit.> ε = u n_i f_i + σ(x_i) u d/r_N + 2π (e x_i n_i r_N)^2 u f_d(u) + (1-u) n_no f_no + ε_e , where u is the volume fraction of the nucleus in the Wigner-Seitz cell, n_i is the baryon number density of the heavy nucleus, n_no is the density of unbound neutrons, x_i is the proton fraction in the heavy nucleus, and f_i = f(n_i,x_i) and f_no = f(n_no,x_no=0) are the energy per baryon for the heavy nucleus and the unbound neutrons, respectively. σ(x_i) is the surface tension at zero temperature as a function of the proton fraction in heavy nuclei, r_N the radius of the heavy nucleus, e the electric charge, d the dimension of the nuclear pasta phase, f_d(u) the Coulomb shape function corresponding to the nuclear pasta phase, and ε_e the electron energy density. We use the surface tension from <cit.>, σ(x_i) = σ_0 (2^α+1 + q)/(x_i^-α + q + (1-x_i)^-α) , where σ_0, α, and q are parameters fit to calculations of the surface tension. In this work, we use σ_0 = 1.14 MeV fm^-2, α=3.4, and q=30, but note that the crust properties depend only weakly on the surface tension parameters, and also that the impact of the crust on the investigated neutron star properties is minor. Based on the nuclear virial theorem, the surface energy is approximately twice the Coulomb energy. Thus, we can combine the surface and Coulomb energies into a single energy contribution, which leads to a simpler equation for the energy density <cit.> ε = u n_i f_i + (243π/5 e^2 x_i^2 n_i^2 σ^2(x_i) )^1/3 𝒟(u) + (1-u) n_no f_no + ε_e , where 𝒟(u) is a continuous dimension function introduced in Ref. <cit.>. For a total baryon density n and proton fraction Y_p, and thus electron density n_e = Y_p n, the quantities u, n_i, x_i, and n_no are found by minimizing the total energy density, Eq. (<ref>), using the Lagrange multiplier method for the constraints of baryon density and charge neutrality, n = u n_i + (1-u) n_no and n_e = u n_i x_i . For the outer crust EOS, which is defined as the region without unbound neutrons, the unbound neutron density n_no is neglected. Using the LDM construction, the transitions from the outer to the inner crust and to the outer core are thus smooth, since the same EDF is employed to construct the entire neutron star EOS.
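The proton fractions presented next follow from solving the beta-equilibrium condition above with the full EDF and crust treatment. Purely as a simplified illustration of the core calculation (not our EDF solution), the condition can be closed with the standard quadratic asymmetry expansion, μ_n - μ_p ≈ 4 S(n)(1-2Y_p), and ultrarelativistic degenerate electrons, μ_e = ħc (3π^2 Y_p n)^1/3; the linear S(n) below is a rough placeholder built from the posterior means quoted earlier:

import numpy as np
from scipy.optimize import brentq

HBARC = 197.327   # MeV fm
N0 = 0.16         # fm^-3, reference saturation density

# Placeholder symmetry energy: leading-order expansion around n0 with the posterior
# means S_v ~ 32.2 MeV and L ~ 56 MeV (a rough stand-in for the EDF at high density).
SV, LPAR = 32.2, 56.0

def sym_energy(n):
    return SV + (LPAR / 3.0) * (n - N0) / N0

def proton_fraction(n):
    """Solve mu_e = mu_n - mu_p ~ 4 S(n) (1 - 2 Y_p) for Y_p in npe matter."""
    f = lambda yp: (HBARC * (3.0 * np.pi**2 * yp * n) ** (1.0 / 3.0)
                    - 4.0 * sym_energy(n) * (1.0 - 2.0 * yp))
    return brentq(f, 1e-8, 0.5)

for n in (0.16, 0.32, 0.48):
    yp = proton_fraction(n)
    flag = "above" if yp > 1.0 / 9.0 else "below"
    print(f"n = {n:.2f} fm^-3 :  Y_p = {yp:.3f}  ({flag} the direct Urca threshold of 1/9)")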
Figure <ref> shows the average proton fraction at the central density ⟨ Y_p^c ⟩ based on the EDF EOS ensemble for the k_F expansion and d=3, as well as the ratio of the standard deviation to the average, σ_Y_p^c / ⟨ Y_p^c ⟩. The average proton fraction is dominated by the core, but includes the details of the crust calculation discussed above. We note that in Fig. <ref> (and in Figs. <ref> and <ref>), the mass and radius domain is restricted to the region where the probability relative to the maximum satisfies P(M,R)/P_max ≥ 10^-2 (as in Fig. <ref>). As expected, the proton fraction increases as the mass increases, and for a given mass, it increases with radius as the EOS becomes stiffer. For the proton fraction, our EOS ensemble assumes that matter is nucleonic, which may not be valid for massive stars. However, for typical 1.4 M_⊙ neutron stars, this may not be such a large extrapolation. In addition, we plot in Fig. <ref> the threshold Y_p = 1/9 for the direct Urca process, which leads to fast-cooling neutron stars <cit.>. We find that typical neutron stars around 1.4 M_⊙ with radii around 12 km do not exceed this threshold; it is exceeded only in our largest-radius configurations. However, based on our results, we expect that massive neutron stars with M>2.1 M_⊙ would cool via the direct Urca process. Figure <ref> shows the total proton fraction Y_p^tot of the maximum mass star versus the maximum mass. The total proton fraction increases along a band as the maximum mass increases, owing to the stiffer EOS. Figure <ref> shows results for four different Q values of symmetric nuclear matter, keeping in mind that negative Q values are favored by nuclear masses, ab initio calculations, and astrophysics <cit.>. With increasing Q, the total proton fraction for a given mass decreases and the maximum mass increases, as larger Q stiffens the EOS. Naturally, the sensitivity to Q is much less pronounced for typical neutron stars. Figure <ref> shows the proton fraction at the central density Y_p^c versus the radius of a 1.4 M_⊙ star, which exhibits a tight correlation and is only very weakly dependent on Q. Larger radii thus correspond to larger proton fractions. Again, we see that stars with radii around 12 km, as expected based on the most recent astrophysical EOS inferences <cit.>, do not cool via the direct Urca process. However, for larger radii, R_1.4 > 12.6 km (for Q=-300 MeV), even typical neutron stars would be fast coolers. §.§ Central density and speed of sound Next, we study the posterior distribution for the central density and the speed of sound in neutron stars. Figure <ref> shows the average central density in units of the saturation density, ⟨ n_c/n_0 ⟩, and the ratio of the standard deviation to the average, σ_n_c / ⟨ n_c ⟩. The average central density increases with increasing mass, while it decreases as the radius increases for a given neutron star mass. This results from stiffer EOSs leading to larger radii. In our EDF EOSs, the maximal central density reaches ≈ 7 n_0, which occurs for softer EOSs in the most massive neutron stars with smaller radii. Figure <ref> shows the speed of sound squared c_s^2 = ∂ P/∂ε at the central densities in neutron stars. In our EDFs, the speed of sound increases with density, remains causal, and decreases again at high density <cit.>. As we see from Fig. <ref>, the speed of sound increases as the mass increases, so in neutron stars most matter is on the part of the EOS that has an increasing c_s^2 in our ensemble of EOSs. In Fig.
<ref>, the red dashed line represents c_s^2=1/3, which shows that even typical 1.4 M_⊙ stars exceed the conformal limit, except when they have radii larger than 13 km (see also the middle panel of Fig. <ref>). Moreover, information on the radii of massive stars with M ≳ 2.0 M_⊙ would inform us about c_s^2 at the central density (see also Fig. <ref>). This could be realized with an improved NICER radius measurement <cit.> of the 2.08 ± 0.07 M_⊙ pulsar PSR J0740+6620 <cit.>. §.§ Correlations Finally, we study the correlation of neutron star radii with the pressure and the speed of sound. In Ref. <cit.> it was suggested that the radius of a 1.4 M_⊙ neutron star follows the empirical relation R_1.4 ∼ P_2n_0^1/4, where P_2n_0 is the pressure at twice saturation density. In the top panel of Fig. <ref> we show that this correlation is indeed fulfilled in our EDF EOS ensemble within a band. For the radius in km and the pressure in MeV fm^-3, we find R_1.4 = 0.731 + 5.312 P_2n_0^1/4 for the mean line of the correlation shown in Fig. <ref>, with a correlation coefficient r_xy = 0.980. While the details of this correlation depend on the EOS model, this indicates that astrophysical observations of neutron star radii provide constraints on the pressure at twice saturation density. The middle panel of Fig. <ref> shows the distribution of R_1.4 versus the speed of sound at the central density of neutron stars. Most of the distribution follows a linear trend, but the correlation coefficient r_xy=-0.870 is weaker in this case. We also observe that c_s^2 at the central density exceeds the conformal limit c_s^2 = 1/3 in our EDF EOS ensemble for R_1.4 smaller than 12.8 km. The correlation is even weaker at lower densities when comparing R_1.4 with the L parameter in the bottom panel of Fig. <ref>, which is proportional to the pressure of pure neutron matter at saturation density. This is as expected because the central density of a 1.4 M_⊙ neutron star is ∼ 3 n_0. Nevertheless, there is a general trend that R_1.4 increases as L increases. Figure <ref> shows the correlation of the radius of a 2.0 M_⊙ neutron star with the speed of sound at the central density. The strong correlation indicates that radius measurements of massive neutron stars provide constraints on the speed of sound in dense nuclear matter. For the radius in km, we find R_2.0 = 16.493 - 7.846 c_s^2, with a correlation coefficient r_xy=-0.995. Moreover, we find within our EDF EOS ensemble that the speed of sound at the central density of 2.0 M_⊙ stars is always greater than the conformal limit. Figure <ref> shows the mass and radius prior when we impose the conformal limit on the speed of sound. The top panel shows the case where the speed of sound continues to increase up to 1/3 and maintains the conformal limit at all higher densities. The bottom panel is for the case where the speed of sound jumps to 1/3 at n=2n_0 and remains at the conformal limit at all higher densities. In both scenarios, the speed of sound is not larger than the conformal limit at any density. From Fig. <ref>, the prior probability to support 2.0 M_⊙ stars is around 10% or less, which is similar to the findings of Ref. <cit.>. Thus, the conformal limit can be consistent with 2.0 M_⊙ stars, but most of the support of our EDF EOS ensemble exceeds the conformal limit for massive neutron stars.
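The two mean-line fits above provide a quick way to translate a radius measurement into the corresponding EOS quantity. A short sketch (using only the mean relations, without the scatter of the bands, and with illustrative input radii) is:

# Mean-line fits from the text: radii in km, pressure in MeV fm^-3, c_s^2 in units of c^2.
def pressure_2n0_from_r14(r14):
    # invert R_1.4 = 0.731 + 5.312 * P_2n0**(1/4)
    return ((r14 - 0.731) / 5.312) ** 4

def cs2_from_r20(r20):
    # invert R_2.0 = 16.493 - 7.846 * c_s^2
    return (16.493 - r20) / 7.846

print(f"R_1.4 = 12.4 km  ->  P(2 n_0) ~ {pressure_2n0_from_r14(12.4):.1f} MeV fm^-3")
print(f"R_2.0 = 12.0 km  ->  c_s^2(n_c) ~ {cs2_from_r20(12.0):.2f} (conformal limit: 0.33)")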
However, when we take the maximum mass limit to be the central value for PSR J0740+6620, 2.08 M_⊙, the speed of sound needs to exceed 1/3 in our ensemble, as the maximum mass does not reach 2.08 M_⊙ in either of the cases shown in Fig. <ref>. § SUMMARY AND CONCLUSION We have explored EOS ensembles using new EDFs from Ref. <cit.> that allow for large variations at high densities. The EDF EOS ensembles were constrained by empirical properties of symmetric nuclear matter and by MBPT calculations of neutron matter based on different chiral NN+3N Hamiltonians. Starting from this prior, constraints at high densities were included from observations of GW170817 and NICER, where the heavy neutron star mass constraint is incorporated through PSR J0740+6620. All our results show that both the Riley et al. <cit.> and Miller et al. <cit.> NICER analyses lead to very similar posterior constraints for the symmetry energy and neutron star properties when folded into our EOS framework. Based on our EDF EOS ensembles, we have studied the symmetry energy and the L parameter, as well as the proton fraction, the speed of sound, and the central density in neutron stars. Our 95% posterior credibility ranges for the symmetry energy S_v, the L parameter, and the radius of a 1.4 M_⊙ neutron star R_1.4 are S_v=(30.6-33.9) MeV, L=(43.7-70.0) MeV, and R_1.4=(11.6-13.2) km. Moreover, we have shown that larger and/or heavier neutron stars have a larger proton fraction and are thus more likely to cool rapidly via the direct Urca process. As can be seen from our results for S_v and L, present astrophysics constraints prefer larger pressures within the prior ranges. To this end, we have also explored correlations of neutron star radii with the pressure and the speed of sound. The radius of 1.4 M_⊙ stars was found to correlate well with the pressure at twice saturation density, and R_2.0 was shown to correlate tightly with the speed of sound at the central density. Therefore, precise measurements of R_1.4 provide key information for density regimes at the limits of chiral EFT calculations, and radii of massive neutron stars will help to constrain the behavior of the speed of sound in dense matter. Finally, by constructing EOS ensembles with the conformal limit imposed on the speed of sound, we found that a maximum neutron star mass M_max > 2.1 M_⊙ indicates that the speed of sound needs to exceed the conformal limit. We thank Sabrina Huth for fruitful discussions. This work was supported by the Max Planck Society, the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 101020842) and by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIT) (No. 2021R1A2C2094378).
[Lattimer and Lim(2013)] J. M. Lattimer and Y. Lim, Astrophys. J. 771, 51 (2013).
[Drischler et al.(2021)] C. Drischler, J. W. Holt, and C. Wellenhofer, Annu. Rev. Nucl. Part. Sci. 71, 403 (2021).
[Huth et al.(2021)] S. Huth, C. Wellenhofer, and A. Schwenk, Phys. Rev. C 103, 025803 (2021).
[Essick et al.(2021a)] R. Essick, P. Landry, A. Schwenk, and I. Tews, Phys. Rev. C 104, 065804 (2021).
[Hebeler and Schwenk(2010)] K. Hebeler and A. Schwenk, Phys. Rev. C 82, 014314 (2010).
[Tews et al.(2013)] I. Tews, T. Krüger, K. Hebeler, and A. Schwenk, Phys. Rev. Lett. 110, 032504 (2013).
[Carbone et al.(2013)] A. Carbone, A. Polls, and A. Rios, Phys. Rev. C 88, 044302 (2013).
[Hagen et al.(2014)] G. Hagen, T. Papenbrock, A. Ekström, K. Wendt, G. Baardsen, S. Gandolfi, M. Hjorth-Jensen, and C. J. Horowitz, Phys. Rev. C 89, 014319 (2014).
[Lynn et al.(2016)] J. E. Lynn, I. Tews, J. Carlson, S. Gandolfi, A. Gezerlis, K. E. Schmidt, and A. Schwenk, Phys. Rev. Lett. 116, 062501 (2016).
[Holt and Kaiser(2017)] J. W. Holt and N. Kaiser, Phys. Rev. C 95, 034326 (2017).
[Drischler et al.(2019)] C. Drischler, K. Hebeler, and A. Schwenk, Phys. Rev. Lett. 122, 042501 (2019).
[Jiang et al.(2020)] W. G. Jiang, A. Ekström, C. Forssén, G. Hagen, G. R. Jansen, and T. Papenbrock, Phys. Rev. C 102, 054301 (2020).
[Keller et al.(2023)] J. Keller, K. Hebeler, and A. Schwenk, Phys. Rev. Lett. 130, 072701 (2023).
[Hebeler et al.(2013)] K. Hebeler, J. M. Lattimer, C. J. Pethick, and A. Schwenk, Astrophys. J. 773, 11 (2013).
[Tews et al.(2018)] I. Tews, J. Carlson, S. Gandolfi, and S. Reddy, Astrophys. J. 860, 149 (2018).
[Greif et al.(2019)] S. K. Greif, G. Raaijmakers, K. Hebeler, A. Schwenk, and A. L. Watts, Mon. Not. Roy. Astron. Soc. 485, 5363 (2019).
[Landry and Essick(2019)] P. Landry and R. Essick, Phys. Rev. D 99, 084049 (2019).
[Lim and Holt(2018)] Y. Lim and J. W. Holt, Phys. Rev. Lett. 121, 062701 (2018).
[Demorest et al.(2010)] P. Demorest, T. Pennucci, S. Ransom, M. Roberts, and J. Hessels, Nature 467, 1081 (2010).
[Antoniadis et al.(2013)] J. Antoniadis, P. C. C. Freire, N. Wex, T. M. Tauris, R. S. Lynch, M. H. van Kerkwijk, M. Kramer, C. Bassa, V. S. Dhillon, T. Driebe, et al., Science 340, 1233232 (2013).
[Fonseca et al.(2021)] E. Fonseca, H. T. Cromartie, T. T. Pennucci, P. S. Ray, A. Y. Kirichenko, S. M. Ransom, P. B. Demorest, I. H. Stairs, Z. Arzoumanian, L. Guillemot, et al., Astrophys. J. Lett. 915, L12 (2021).
[Riley et al.(2021)] T. E. Riley, A. L. Watts, P. S. Ray, S. Bogdanov, S. Guillot, S. M. Morsink, A. V. Bilous, Z. Arzoumanian, D. Choudhury, J. S. Deneva, et al., Astrophys. J. Lett. 918, L27 (2021).
[Miller et al.(2021)] M. C. Miller, F. K. Lamb, A. J. Dittmann, S. Bogdanov, Z. Arzoumanian, K. C. Gendreau, S. Guillot, W. C. G. Ho, J. M. Lattimer, M. Loewenstein, et al., Astrophys. J. Lett. 918, L28 (2021).
[Riley et al.(2019)] T. E. Riley, A. L. Watts, S. Bogdanov, P. S. Ray, R. M. Ludlam, S. Guillot, Z. Arzoumanian, C. L. Baker, A. V. Bilous, D. Chakrabarty, et al., Astrophys. J. Lett. 887, L21 (2019).
[Miller et al.(2019)] M. C. Miller, F. K. Lamb, A. J. Dittmann, S. Bogdanov, Z. Arzoumanian, K. C. Gendreau, S. Guillot, A. K. Harding, W. C. G. Ho, J. M. Lattimer, et al., Astrophys. J. Lett. 887, L24 (2019).
[Abbott et al.(2019)] B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. X 9, 011001 (2019).
[Tolman(1939)] R. C. Tolman, Phys. Rev. 55, 364 (1939).
[Oppenheimer and Volkoff(1939)] J. R. Oppenheimer and G. M. Volkoff, Phys. Rev. 55, 374 (1939).
[Hebeler et al.(2015)] K. Hebeler, J. D. Holt, J. Menéndez, and A. Schwenk, Annu. Rev. Nucl. Part. Sci. 65, 457 (2015).
[Gezerlis et al.(2013)] A. Gezerlis, I. Tews, E. Epelbaum, S. Gandolfi, K. Hebeler, A. Nogga, and A. Schwenk, Phys. Rev. Lett. 111, 032501 (2013).
[Drischler et al.(2020)] C. Drischler, R. J. Furnstahl, J. A. Melendez, and D. R. Phillips, Phys. Rev. Lett. 125, 202702 (2020).
[Drischler et al.(2016)] C. Drischler, K. Hebeler, and A. Schwenk, Phys. Rev. C 93, 054314 (2016).
[Somasundaram et al.(2021)] R. Somasundaram, C. Drischler, I. Tews, and J. Margueron, Phys. Rev. C 103, 045803 (2021).
[Tews et al.(2017)] I. Tews, J. M. Lattimer, A. Ohnishi, and E. E. Kolomeitsev, Astrophys. J. 848, 105 (2017).
[Hebeler et al.(2011)] K. Hebeler, S. K. Bogner, R. J. Furnstahl, A. Nogga, and A. Schwenk, Phys. Rev. C 83, 031301(R) (2011).
[Carlsson et al.(2016)] B. D. Carlsson, A. Ekström, C. Forssén, D. F. Strömberg, G. R. Jansen, O. Lilja, M. Lindby, B. A. Mattsson, and K. A. Wendt, Phys. Rev. X 6, 011019 (2016).
[Entem et al.(2017)] D. R. Entem, R. Machleidt, and Y. Nosyk, Phys. Rev. C 96, 024004 (2017).
[Dutra et al.(2012)] M. Dutra, J. S. Sa Martins, A. Delfino, J. R. Stone, and P. D. Stevenson, Phys. Rev. C 85, 035201 (2012).
[Lim and Holt(2019)] Y. Lim and J. W. Holt, Eur. Phys. J. A 55, 209 (2019).
[Lim et al.(2021)] Y. Lim, A. Bhattacharya, J. W. Holt, and D. Pati, Phys. Rev. C 104, L032802 (2021).
[Lim and Holt(2022)] Y. Lim and J. W. Holt, Galaxies 10, 99 (2022).
[Raaijmakers et al.(2021)] G. Raaijmakers, S. K. Greif, K. Hebeler, T. Hinderer, S. Nissanke, A. Schwenk, T. E. Riley, A. L. Watts, J. M. Lattimer, and W. C. G. Ho, Astrophys. J. Lett. 918, L29 (2021).
[Lim and Holt(2017)] Y. Lim and J. W. Holt, Phys. Rev. C 95, 065805 (2017).
[Ravenhall et al.(1983)] D. G. Ravenhall, C. J. Pethick, and J. M. Lattimer, Nucl. Phys. A 407, 571 (1983).
[Lattimer and Swesty(1991)] J. M. Lattimer and F. D. Swesty, Nucl. Phys. A 535, 331 (1991).
[Lattimer et al.(1991)] J. M. Lattimer, C. J. Pethick, M. Prakash, and P. Haensel, Phys. Rev. Lett. 66, 2701 (1991).
[Capano et al.(2020)] C. D. Capano, I. Tews, S. M. Brown, B. Margalit, S. De, S. Kumar, D. A. Brown, B. Krishnan, and S. Reddy, Nature Astronomy 4, 625 (2020).
[Al-Mamun et al.(2021)] M. Al-Mamun, A. W. Steiner, J. Nättilä, J. Lange, R. O'Shaughnessy, I. Tews, S. Gandolfi, C. Heinke, and S. Han, Phys. Rev. Lett. 126, 061101 (2021).
[Essick et al.(2021b)] R. Essick, I. Tews, P. Landry, and A. Schwenk, Phys. Rev. Lett. 127, 192701 (2021).
[Huth et al.(2022)] S. Huth, P. T. H. Pang, I. Tews, T. Dietrich, A. Le Févre, A. Schwenk, W. Trautmann, K. Agarwal, M. Bulla, M. W. Coughlin, and C. Van Den Broeck, Nature 606, 276 (2022).
[Annala et al.(2022)] E. Annala, T. Gorda, E. Katerini, A. Kurkela, J. Nättilä, V. Paschalidis, and A. Vuorinen, Phys. Rev. X 12, 011058 (2022).
[Altiparmak et al.(2022)] S. Altiparmak, C. Ecker, and L. Rezzolla, Astrophys. J. Lett. 939, L34 (2022).
[Gorda et al.(2022)] T. Gorda, O. Komoltsev, and A. Kurkela, arXiv:2204.11877 (2022).
[Lattimer and Prakash(2007)] J. M. Lattimer and M. Prakash, Phys. Rept. 442, 109 (2007).
[Bedaque and Steiner(2015)] P. Bedaque and A. W. Steiner, Phys. Rev. Lett. 114, 031103 (2015).
http://arxiv.org/abs/2307.04794v1
20230710180006
CHEX-MATE: CLUster Multi-Probes in Three Dimensions (CLUMP-3D), I. Gas Analysis Method using X-ray and Sunyaev-Zel'dovich Effect Data
[ "Junhan Kim", "Jack Sayers", "Mauro Sereno", "Iacopo Bartalucci", "Loris Chappuis", "Sabrina De Grandi", "Federico De Luca", "Marco De Petris", "Megan E. Donahue", "Dominique Eckert", "Stefano Ettori", "Massimo Gaspari", "Fabio Gastaldello", "Raphael Gavazzi", "Adriana Gavidia", "Simona Ghizzardi", "Asif Iqbal", "Scott Kay", "Lorenzo Lovisari", "Ben J. Maughan", "Pasquale Mazzotta", "Nobuhiro Okabe", "Etienne Pointecouteau", "Gabriel W. Pratt", "Mariachiara Rossetti", "Keiichi Umetsu" ]
astro-ph.CO
[ "astro-ph.CO" ]
I. Gas Analysis Method using X-ray and Sunyaev–Zel'dovich Effect Data California Institute of Technology, 1200 E. California Blvd., MC 367-17, Pasadena, CA 91125, USA [email protected] INAF, Osservatorio di Astrofisica e Scienza dello Spazio, via Piero Gobetti 93/3, 40129 Bologna, Italy INFN, Sezione di Bologna, viale Berti Pichat 6/2, 40127 Bologna, Italy INAF, IASF-Milano, via A. Corti 12, 20133, Milano, Italy Department of Astronomy, University of Geneva, Ch. d’Ecogia 16, CH-1290 Versoix, Switzerland INAF - Osservatorio Astronomico di Brera, via E. Bianchi 46, 23807 Merate (LC), Italy Università degli studi di Roma ‘Tor Vergata’, Via della ricerca scientifica 1, 00133 Roma, Italy INFN, Sezione di Roma ‘Tor Vergata’, Via della Ricerca Scientifica 1, 00133 Roma, Italy Dipartimento di Fisica, Sapienza Università di Roma, Piazzale Aldo Moro 5, I-00185, Roma, Italy Department of Physics and Astronomy, Michigan State University, 567 Wilson Road, East Lansing, MI 48864, USA Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA Laboratoire d’Astrophysique de Marseille, Aix-Marseille Univ, CNRS, CNES, Marseille, France Institut d’Astrophysique de Paris, UMR 7095, CNRS & Sorbonne Université, 98 bis boulevard Arago, F-75014 Paris, France Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, UK HH Wills Physics Laboratory, University of Bristol, Bristol, BS8 1TL, UK Physics Program, Graduate School of Advanced Science and Engineering, Hiroshima University, Hiroshima 739-8526, Japan Hiroshima Astrophysical Science Center, Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526, Japan Core Research for Energetic Universe, Hiroshima University, 1-3-1, Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526, Japan IRAP, Université de Toulouse, CNRS, CNES, UT3-UPS, (Toulouse), France Academia Sinica Institute of Astronomy and Astrophysics (ASIAA), No. 1, Section 4, Roosevelt Road, Taipei 10617, Taiwan Galaxy clusters are the products of structure formation through myriad physical processes that affect their growth and evolution throughout cosmic history. As a result, the matter distribution within galaxy clusters, or their shape, is influenced by cosmology and astrophysical processes, in particular the accretion of new material due to gravity. We introduce an analysis method to investigate the three-dimensional triaxial shapes of galaxy clusters from the Cluster HEritage project with XMM-Newton – Mass Assembly and Thermodynamics at the Endpoint of structure formation (CHEX-MATE). In this work, the first paper of a CHEX-MATE triaxial analysis series, we focus on utilizing X-ray data from XMM-Newton and Sunyaev-Zel'dovich (SZ) effect maps from Planck and the Atacama Cosmology Telescope (ACT) to obtain a three-dimensional triaxial description of the intracluster medium (ICM) gas. We present the forward-modeling formalism of our technique, which projects a triaxial ellipsoidal model for the gas density and pressure to compare directly with the observed two-dimensional distributions in X-rays and the SZ effect. A Markov chain Monte Carlo method is used to estimate the posterior distributions of the model parameters. Using mock X-ray and SZ observations of a smooth model, we demonstrate that the method can reliably recover the true parameter values.
In addition, we apply the analysis to reconstruct the gas shape from the observed data of one CHEX-MATE galaxy cluster, PSZ2 G313.33+61.13 (Abell 1689), to illustrate the technique. The inferred parameters are in agreement with previous analyses for that cluster, and our results indicate that the geometrical properties, including the axial ratios of the ICM distribution, are constrained to within a few percent. With much better precision than previous studies, we thus further establish that Abell 1689 is significantly elongated along the line of sight, resulting in its exceptional gravitational lensing properties. Junhan Kim1 Jack Sayers1 Mauro Sereno2,3 Iacopo Bartalucci4 Loris Chappuis5 Sabrina De Grandi6 Federico De Luca7,8 Marco De Petris9 Megan E. Donahue10 Dominique Eckert5 Stefano Ettori2,3 Massimo Gaspari11 Fabio Gastaldello4 Raphael Gavazzi12,13 Adriana Gavidia1 Simona Ghizzardi4 Asif Iqbal14 Scott Kay15 Lorenzo Lovisari2 Ben J. Maughan16 Pasquale Mazzotta7,8 Nobuhiro Okabe17,18,19 Etienne Pointecouteau20 Gabriel W. Pratt14 Mariachiara Rossetti4 Keiichi Umetsu21 § INTRODUCTION Galaxy clusters are useful probes of structure formation, astrophysical processes such as shocks and feedback from active galactic nuclei, and cosmology <cit.>. For instance, they are fundamental to the science goals of numerous ongoing and upcoming large survey projects such as eROSITA <cit.>, Euclid <cit.>, and the Rubin Observatory <cit.>. In order to maximize the scientific reach of such programs, particularly with regard to cosmological parameter constraints, it is crucial to accurately characterize the ensemble-average physical properties of galaxy clusters along with the intrinsic scatter relative to these averages <cit.>. One such example is the scaling relations used to connect global galaxy cluster observables to the underlying halo mass <cit.>. While these scaling relations are generally sensitive to a range of astrophysical processes <cit.>, some observables, including the gravitational weak lensing measurements often used to determine absolute mass, have deviations from average relations that are dominated by projection effects related to asphericity and orientation <cit.>. CHEX-MATE <cit.>[<http://xmm-heritage.oas.inaf.it/>] is an effort to provide a more accurate understanding of the population of galaxy clusters at low z and high mass, particularly in the context of cosmology and mass calibration, including the shape of their matter distributions and the effects of the baryonic physics on their global properties. The project is based on a 3 Msec XMM-Newton program to observe 118 galaxy clusters, containing two equal-sized sub-samples selected from the Planck all-sky SZ[Throughout this paper, we use the abbreviation `SZ' to represent the thermal Sunyaev-Zel'dovich effect.] effect survey.
The CHEX-MATE Tier-1 and Tier-2 samples each include 61 galaxy clusters, with four overlapping clusters, and represent, respectively, a volume-limited (0.05 < z ≤ 0.2) sample in the local universe and a mass-limited (M_500≥ 7.25 × 10^14 M_⊙)[The parameter M_500 denotes the mass enclosed within a radius (R_500) where the mean overdensity is 500 times the critical density at a specific redshift, and we use the M_500 and R_500 values from <cit.>.] sample of the most massive objects in the universe. The X-ray observing program has recently been completed, and initial results from the analyses of these data along with publicly available SZ data have already been published <cit.>. We utilize triaxial modeling techniques <cit.> to investigate the three-dimensional mass distribution within the CHEX-MATE galaxy clusters to infer their intrinsic properties. This approach is motivated by two reasons: (1) Three-dimensional triaxial shapes provide a better approximation of galaxy clusters than spherical models, and the parameters, such as mass, obtained from such an analysis have lower levels of bias and intrinsic scatter <cit.>; (2) A correlation between the triaxial shape of the dark matter (DM) halo and its formation history has been established in simulations <cit.>, suggesting that triaxial shape measurements can provide a powerful probe of cosmology independent of other techniques currently in use. For instance, some lensing-based shape measurements have found good agreement with ΛCDM predictions <cit.>, while a recent multi-probe triaxial analysis suggests a ≃ 2σ discrepancy between the observed and predicted minor-to-major axial ratio distributions <cit.>. This discrepancy could indicate that clusters formed more recently than predicted. Alternatively, elevated merger rates <cit.>, a reduced influence of baryons on the dark matter <cit.>, or enhanced feedback <cit.> could also explain the observed cluster shapes. CHEX-MATE offers a uniform selection of galaxy clusters with consistent measurements of ICM density and temperature. This clean, well-characterized selection with a large sample size (∼80 clusters excluding major mergers) will enable a robust cosmological measurement of the triaxial shape distribution. For our analysis, we adopted the CLUster Multi-Probes in Three Dimensions <cit.> project and implemented significant updates to the modeling package. CLUMP-3D incorporates multiwavelength data from X-ray (surface brightness and spectroscopic temperature), mm-wave (SZ surface brightness), and optical (gravitational lensing) observations, which are projected observables. It then assumes triaxial distributions for the ICM gas and matter density profiles. Taking advantage of the different dependencies of the X-ray and SZ signals on the gas density and temperature, these data probe the line-of-sight extent of the ICM, while gravitational lensing data probe the projected matter distribution. In particular, the X-ray emission observed from the ICM is proportional to the line-of-sight integral of the squared electron density (n_e) multiplied by the X-ray cooling function, Λ, represented as S_X ∝∫ n^2_e Λ dl. Meanwhile, the detected SZ signal is proportional to the line-of-sight integral of the product of electron density and temperature (T_e), denoted as B_SZ∝∫ n_e T_e dl.
Given that the ICM temperature (T_X) can be spectroscopically measured using X-ray observations, the line-of-sight elongation (l) can subsequently be determined through the combination of these three measurements as l ∼ (B^2_SZ Λ) / (S_X T^2_X). Assuming co-alignment of the triaxial axes of the ICM and dark matter distributions, while still allowing for different axial ratios for the two quantities, our multi-probe analysis can thus constrain the three-dimensional shapes of galaxy clusters. CLUMP-3D was introduced in <cit.>, where the authors inferred the triaxial matter and gas distribution of the galaxy cluster MACS J1206.2-0847. The technique built upon similar methods developed to constrain cluster morphology <cit.>. It was then applied to measure the shapes of the Cluster Lensing And Supernova survey with Hubble <cit.> clusters, to probe the ensemble-average three-dimensional geometry <cit.> as well as the radial profile of the non-thermal pressure fraction <cit.>. These results demonstrated the potential of the three-dimensional triaxial shape measurement technique, but they were relatively imprecise due to the sample size, data quality, and systematics related to cluster selection. Thus, the much larger CHEX-MATE galaxy cluster sample, with a well-understood selection function and more uniform and higher-quality X-ray data, will provide improved statistics and more robust constraints on the shape measurements. In this paper, we demonstrate several improvements to the original CLUMP-3D formalism while modeling the ICM distributions observed by XMM-Newton and Planck, together with ground-based SZ data from ACT. As detailed below, we have implemented a fully two-dimensional analysis of the X-ray temperature <cit.> and SZ data, whereas the original CLUMP-3D only treated the X-ray surface brightness (SB) in two dimensions while using one-dimensional azimuthally averaged profiles of both the X-ray spectroscopic temperatures and the SZ effect data. In addition, we now model the ICM gas density and pressure instead of its density and temperature. This allows us to fit the data with fewer parameters, thus accelerating the model fitting process. Additionally, we fully rewrote the code in Python to facilitate a future public release of the package. In Sec. <ref>, we summarize the triaxial analysis formalism and describe the model fitting method. In Sec. <ref>, we introduce the X-ray and SZ data from our program and apply the technique to a CHEX-MATE galaxy cluster. In a subsequent paper, we will include gravitational lensing constraints in a manner that also builds upon, and improves, the existing CLUMP-3D technique. With these X-ray, SZ effect, and gravitational lensing data, we will be able to model the triaxial distributions of both the ICM and the dark matter. Throughout this study, we adopt a ΛCDM cosmology characterized by H_0 = 70 km s^-1 Mpc^-1, Ω_m = 0.3, and Ω_Λ = 0.7. E(z) represents the ratio of the Hubble constant at redshift z to its present value, H_0, and h_70 = H_0 / (100 km s^-1 Mpc^-1)/0.7. § TRIAXIAL ANALYSIS: FORMALISM AND THE MODEL FIT While the mathematical description of a triaxial geometry for astronomical objects and their physical profiles has been introduced in previous studies <cit.>, these works lack consistency in their notation. To prevent confusion, we present our mathematical formalism for triaxial modeling in this section. We then describe our model fitting procedure and the implementation of the fitting algorithm in our software package.
As this paper focuses on the analysis method to study ICM distributions, we do not include the gravitational lensing data in our fits. Future works in this series will expand our formalism to include total mass density profiles constrained by gravitational lensing measurements. For instance, in the case of a Navarro–Frenk–White <cit.> profile the gravitational lensing analysis requires two additional parameters–total mass (M_200) and concentration (c_200)–assuming that the gas and matter distributions are co-aligned along the ellipsoidal axes. This assumption is well supported by a 2D weak-lensing and X-ray analysis of 20 high-mass galaxy clusters <cit.>, as well as by cosmological hydrodynamical simulations <cit.>. §.§ Geometry and Projection To connect the intrinsic cluster geometry to the projected properties observed in the plane of the sky, we assume a triaxial ellipsoidal model for the gas distribution, where the thermodynamic profiles of the ICM are represented as a function of ζ, the ellipsoidal radius. In the intrinsic coordinate system of the ellipsoid (x_1, x_2, x_3), it is defined as: ζ^2 = x^2_1/q^2_1 + x^2_2/q^2_2 + x^2_3, where q_1 and q_2 are minor-to-major and intermediate-to-major axial ratios, respectively (0 < q_1 ≤ q_2 ≤ 1). Given a semi-major axis of the ellipsoid l_s, the volume of the ellipsoid is (4 π / 3) l^3_s q_1 q_2. The ellipsoid becomes a prolate shape if q_1 = q_2 ≤ 1 and an oblate shape if q_1 ≤ q_2 = 1. Figure <ref> illustrates the geometry of the ellipsoid and the involved coordinate systems. It is essential to note that the axes defining the ICM model may not align with the observer's frame. To relate the ellipsoid's intrinsic coordinate system (x^ int_1, x^ int_2, x^ int_3) to the observer's coordinate frame (x^ obs_1, x^ obs_2, x^ obs_3), we employ three Euler angles. These angles describe the relationship between the two coordinate systems: (1) the angle between x^ int_3, aligned with the major axis of the ellipsoid, and x^ obs_3, which lies along the observer's line-of-sight (θ), (2) the angle between x^ int_1 and the line of nodes (φ), and (3) the angle between x^ obs_1 and the line of nodes (ψ). The line of nodes is the intersection of the x^ int_1-x^ int_2 plane and the x^ obs_1-x^ obs_2 plane, and it is aligned with the vector x^ int_3 × x^ obs_3. We can derive the geometric properties of the projected ellipse from the intrinsic parameters of the ellipsoid when it is projected onto the plane from any direction. These properties encompass the semi-major axis of the projected ellipse l_p, its ellipticity ϵ, the orientation of the ellipse in the plane of the sky θ_ϵ, and the elongation parameter e_∥. The projected profiles are expressed as a function of ξ, the elliptical radius of the ellipse in the plane of the sky. The ellipticity of the projected ellipse (ϵ) is ϵ = 1-q_p, where q_p is the minor-to-major axial ratio of the observed projected isophote (q_p≤ 1), which is the inverse of e_p used in <cit.>, and q_p = √(j + l - √((j - l)^2 + 4k^2)/j + l + √((j - l)^2 + 4k^2)), where j = 1/2[(1/q^2_1 + 1/q^2_2) - sin^2θcos^2 ψ(q^2_1 + q^2_2 - 2 )/q^2_1 q^2_2 + (1/q^2_1 - 1/q^2_2) {cos 2φ( cos^2θcos^2ψ - sin^2ψ) - cosθsin2φsin2ψ}], k = 1/4 q^2_1 q^2_2[2 cosθ(q^2_1 - q^2_2 ) cos 2ψsin 2φ + {sin^2θ(q^2_1 + q^2_2 - 2 ) + (1+cos^2θ) (q^2_1 - q^2_2 ) cos 2φ}sin2ψ], l = 1/2[(1/q^2_1 + 1/q^2_2) - sin^2θsin^2 ψ(q^2_1 + q^2_2 - 2 )/q^2_1 q^2_2 + (1/q^2_1 - 1/q^2_2) {cos 2φ( cos^2θsin^2ψ - cos^2ψ) + cosθsin2φsin2ψ}]. 
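As a concrete illustration, the relations above for j, k, and l and the projected axial ratio q_p can be transcribed directly into code. The stand-alone sketch below (a simplified function, not the actual analysis package, with illustrative input values) evaluates q_p and the ellipticity ϵ = 1 - q_p for a given intrinsic shape and orientation:

import numpy as np

def projected_axial_ratio(q1, q2, theta, phi, psi):
    """Minor-to-major axial ratio q_p of the projected isophote (angles in radians,
    q1 <= q2 <= 1), following the j, k, l expressions given above."""
    st2, ct = np.sin(theta)**2, np.cos(theta)
    cpsi2, spsi2 = np.cos(psi)**2, np.sin(psi)**2
    c2phi, s2phi = np.cos(2*phi), np.sin(2*phi)
    c2psi, s2psi = np.cos(2*psi), np.sin(2*psi)
    inv_sum = 1/q1**2 + 1/q2**2
    inv_dif = 1/q1**2 - 1/q2**2
    j = 0.5 * (inv_sum - st2 * cpsi2 * (q1**2 + q2**2 - 2) / (q1**2 * q2**2)
               + inv_dif * (c2phi * (ct**2 * cpsi2 - spsi2) - ct * s2phi * s2psi))
    k = (2 * ct * (q1**2 - q2**2) * c2psi * s2phi
         + (st2 * (q1**2 + q2**2 - 2) + (1 + ct**2) * (q1**2 - q2**2) * c2phi) * s2psi) \
        / (4 * q1**2 * q2**2)
    l = 0.5 * (inv_sum - st2 * spsi2 * (q1**2 + q2**2 - 2) / (q1**2 * q2**2)
               + inv_dif * (c2phi * (ct**2 * spsi2 - cpsi2) + ct * s2phi * s2psi))
    root = np.sqrt((j - l)**2 + 4 * k**2)
    return np.sqrt((j + l - root) / (j + l + root))

# Illustrative example: a triaxial ellipsoid with q1 = 0.6, q2 = 0.75 viewed at
# theta = 60 deg, phi = 30 deg, psi = 45 deg.
q_p = projected_axial_ratio(0.6, 0.75, np.radians(60), np.radians(30), np.radians(45))
print(f"q_p = {q_p:.3f}, ellipticity = {1 - q_p:.3f}")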
It is worth noting that the expressions for j, k, and l in <cit.> and <cit.> differ from those presented above, as they assumed ψ = 0, using only two angles to align the major ellipsoidal axis with the observer's line-of-sight. However, a coordinate transformation requiring ψ is necessary to align the remaining axes. The orientation angle in the plane of the sky of the projected ellipse is θ_ϵ = tan^-1[(l-j + √((j - l)^2 + 4k^2))/(2k)], and the elongation parameter of the ellipsoid is e_∥≡l_los/l_p = √(q_p/q_1 q_2) f^-3/4, where f = sin^2 θ[ ( sinφ/q_1)^2 + ( cosφ/q_2)^2 ] + cos^2 θ. The elongation parameter, e_∥, represents the ratio of the size of the ellipsoid along the observer's line-of-sight to the major axis of the projected ellipse in the sky plane, providing a measure of the three-dimensional geometry of the triaxial ellipsoid model of the ICM. In the gas analysis presented in <cit.>, the orientation angle (θ_ϵ, denoted ϵ in that work) was determined from the X-ray map, while the elongation parameter (defined there as the inverse of e_∥) was estimated from the combined X-ray and SZ analysis. Later, <cit.> simultaneously constrained the individual Euler angles by treating the axial ratios and three angles as free parameters. Then, the semi-major axis of the projected ellipse becomes l_p = l_s/e_∥√(f) = l_s √(q_1 q_2/q_p) f^1/4, and the projected length scales l_s and l_los are related by the elongation parameter, that is, l_los = l_s / √(f). In the plane of the sky, an elliptical radius ξ becomes ξ^2 = (x^2_1 + x^2_2/q^2_p) (l_s/l_p)^2 <cit.>. [Assuming that the ellipse is expressed as x^2_1/a^2+ x^2_2/b^2 = 1, q_p is the minor-to-major axial ratio (b/a), and the elliptical radius, which is the corresponding major axis length, becomes √(x^2_1 + x^2_2/q^2_p) because x^2_1 + a^2/b^2 x^2_2 = a^2. ] Finally, the three-dimensional volume density can be projected onto the sky plane by utilizing the geometric parameters, F_ 2D (ξ; l_p, p_i) = 2/√(f)∫_ξ^∞ F_ 3D (ζ; l_s, p_i) ζ/√(ζ^2 - ξ^2) dζ, F_ 2D (x_ξ; l_p, p_i) = 2 l_p e_∥∫_x_ξ^∞ F_ 3D (x_ζ; l_s, p_i) x_ζ/√(x^2_ζ - x^2_ξ) dx_ζ, where x_ζ = ζ / l_s, x_ξ = ξ/l_p, and p_i are the parameters describing the intrinsic density profile <cit.>. Using this projection, we calculate the SZ and X-ray maps on the sky plane from the three-dimensional ellipsoidal distribution of the ICM profiles and fit the model to the data. We describe the analytic profiles (F_ 3D) for the physical quantities related to the direct observables (F_ 2D) in the next section. §.§ Electron Density and Pressure Profiles We use smooth analytic functions of the electron density and pressure profiles to describe the thermodynamics and spatial distribution of the ICM, and then use these functions to compute observable quantities, such as the SZ effect map, the X-ray SB map, and the X-ray temperature map. The model lacks the ability to effectively constrain small-scale structures that deviate from its assumptions. However, the three-dimensional description of the profiles provides a better approximation compared to spherical models. After accounting for instrumental effects, such as the point spread function (PSF), these model maps are then compared to the observed data. The original CLUMP-3D package, as detailed in <cit.>, instead assumed smooth analytic functions for the gas density and temperature <cit.>.
However, because the presence (or not) of a cool core alters the overall shape of the temperature profile <cit.>, the analytic function needs to be sufficiently flexible to allow for either a decrease or an increase in temperature at small radii. Pressure profiles are more regular in their global shape <cit.>, and therefore a simpler function with fewer free parameters can be used to describe the ICM. Thus, our overall model can be more easily constrained than the one used by <cit.>. Table <ref> lists the model parameters used in our gas analysis, including the geometric parameters described in the previous section. The electron density profile is described as n_e (ζ) = n_0 (ζ/ζ_c)^-η_e[1 + (ζ/ζ_c)^2 ]^-3β_e/2+η_e/2[1 + (ζ/ζ_t)^3 ]^-γ_e/3, where n_0 is the central electron density, ζ_c is the core radius, and ζ_t is the tidal radius (ζ_t > ζ_c). The parameters (β_e, η_e, γ_e) represent the power-law exponents of the electron density distribution for the intermediate, inner, and external slopes of the profile, respectively <cit.>. The electron pressure profile is modeled using a generalized NFW (gNFW) profile <cit.>. It is described as P_e(x)/P_500 = P_0/(c_500x)^γ_p [1 + (c_500x)^α_p]^(β_p - γ_p)/α_p, where x = ζ/R_500, (γ_p, α_p, β_p) describe the power-law exponents for the central (r ≪ r_ s), intermediate (r ∼ r_ s=R_500 / c_500), and outer (r ≫ r_ s) regions, and the characteristic pressure is P_500 = 1.65 × 10^-3 E(z)^8/3×[ M_500/3 × 10^14 h^-1_70 M_⊙]^2/3 h^2_70 keV cm^-3. The expressions for P_500 provided in <cit.> and <cit.> represent the gas pressure and the electron pressure, respectively. We opt to use the electron pressure formulation from the latter. In order to convert the electron pressure, P_e, into the gas pressure, it is necessary to incorporate both the mean molecular weight and the mean molecular weight per free electron into the calculation. As noted by <cit.>, strong degeneracies between the pressure profile parameters generally prevent meaningful constraints when all are varied <cit.>. For our baseline fits, we thus fix the values of c_500 and γ_p to 1.4 and 0.3, as in <cit.>. In addition, because β_p characterizes the pressure profile in the outer regions, it may not be well constrained depending on the map size chosen for the actual fit. For the demonstration of our approach using actual CHEX-MATE data in Sec. <ref>, we restrict the map size of the X-ray and SZ observational data to within R_500 to mask out potential spurious signals at large radii that do not originate from the target cluster, and therefore an external constraint on the value of β_p is required. In such cases, we use a value that depends on the mass and redshift, given by β_p = 5.495 ( M_500/10^15 M_⊙)^0.15(1 + z )^0.02. This relation is derived from a combined X-ray and SZ analysis of galaxy clusters with a redshift range of 0.05 ≤ z ≤ 0.60 and a mass range of 4 × 10^14≤ M_500≤ 30 × 10^14M_⊙ <cit.>. This fit is thus valid for the mass and redshift ranges of the CHEX-MATE clusters, with Tier-1 covering 0.05<z<0.2 and 2×10^14 < M_500 < 9×10^14M_⊙, and Tier-2 encompassing z<0.6 and M_500 > 7.25×10^14M_⊙. §.§ Sunyaev-Zel'dovich Effect and X-ray Observables In this section, we summarize the observables associated with the SZ effect and the X-ray emissivity, and explain their relationship to the electron density and pressure profiles introduced earlier. The SZ effect is characterized by the Compton-y parameter, which is proportional to the integrated electron pressure along the line-of-sight.
y ≡σ_ T/m_e c^2∫_∥ P_e dl = σ_ T k_ B/m_e c^2∫_∥ n_e T_e dl, where σ_T is the Thomson cross-section, k_B is the Boltzmann constant, n_e is the electron number density, and T_e is the electron temperature. The X-ray observations are primarily sensitive to the surface brightness of the ICM due to thermal Bremsstrahlung, SB = 1/4 π (1+z)^3∫_∥ n^2_e Λ_ eff (T_e, Z) dl <cit.>, where the cooling function Λ_ eff (T_e, Z) quantifies the thermal radiation emitted from a fully ionized plasma due to collisions, taking into account the relative abundance of each chemical element. It can be calculated using software such as <cit.>. We use a pre-calculated table and interpolate the value in the temperature (T_e)–metallicity (Z) space during the model computation. To calculate the emissivity, the instrument response within the chosen energy band [0.7–1.2] keV and the Galactic hydrogen column density must be taken into account, as explained in <cit.>, which describes the details of the data analysis used to produce the SB maps. In our software, we perform the calculation using the Python package pyproffit[<https://pyproffit.readthedocs.io/en/latest/intro.html>] <cit.>. The XMM-Newton data can also be used to derive projected temperature maps of the ICM via spectroscopic fits <cit.>. Within our model, we approximate this spectroscopic temperature based on the formalism of <cit.> as follows: T_ sp = ∫ W T_e dV/∫ W dV,  W = n^2_e/T^3/4_e, which is valid for Bremsstrahlung (T_e ≥ 3 keV). The SZ and X-ray observables (Eqs. <ref> and <ref>) are modeled as projections of the three-dimensional profiles parameterized by the ellipsoidal radius ζ (or x_ζ). The three-dimensional volume density of the models, F_ 3D (x_ζ; l_s, p_i), can be written analytically, and the two-dimensional maps are calculated following Eq. <ref>. The model Compton-y parameter is y_ model (x_ξ; l_ p, p_i) = (2 l_ p e_∥) (σ_ T/m_e c^2) ∫_x_ξ^∞ P_e (x_ζ) x_ζ/√(x^2_ζ - x^2_ξ) dx_ζ, where P_e(x_ζ) = P_0 P_500/(c_500x_ζl_s/R_500)^γ_p[1 + (c_500 x_ζl_s/R_500)^α_p]^(β_p - γ_p)/α_p keV cm^-3. This integration can be computationally expensive, depending on the size of the map. To expedite the calculation, we create a linearly spaced sample of the (normalized) elliptical radius x_ξ and interpolate the integration results while generating a model map. We apply the same technique in the X-ray observable calculation. Lastly, we convolve the model map with the appropriate PSF shape (e.g., a 7 arcmin FWHM Gaussian in the case of Planck and a 1.6 arcmin FWHM Gaussian in the case of ACT; see Fig. <ref>). Similarly, the X-ray SB (Eq. <ref>) model becomes SB_ model (x_ξ; l_ p, p_i) = ( 2 l_ p e_∥) 1/4 π (1+z)^3∫_x_ξ^∞ n^2_e (x_ζ) Λ_ eff(T_e(x_ζ), Z (x_ζ) ) x_ζ/√(x^2_ζ - x^2_ξ) dx_ζ, where n_e (x_ζ) = n_0 (x_ζl_s/ζ_c)^-η_e[1 + (x_ζl_s/ζ_c)^2 ]^-3β_e/2+η_e/2[1 + (x_ζl_s/ζ_t)^3 ]^-γ_e/3, and the electron temperature is T_e(x_ζ) = P_e (x_ζ)/n_e(x_ζ) k_B. We use a radius-dependent metallicity profile Z (x_ζ) obtained from the X-COP galaxy cluster samples <cit.> for calculating the cooling function. Upon generating the model, instrumental responses are incorporated to facilitate a direct comparison between the model and the data. For the X-ray maps, the sky background in the [0.7–1.2] keV band (2.49 × 10^-4 cts/s/arcmin^2) is considered. Specifically, we adopted the sky and particle background measured by the European Photon Imaging Camera <cit.> M2 CCD in the [0.5-2] keV band and converted it for the [0.7–1.2] keV band. After adding the sky background, the vignetting is applied.
Subsequently, the resulting map is convolved with a Gaussian profile to account for the PSF. The nominal PSF of XMM-Newton can be closely represented by a Gaussian function with a 6 arcsec FWHM[<https://xmm-tools.cosmos.esa.int/external/xmm_user_support/documentation/uhb/onaxisxraypsf.html>]. However, the actual FWHM of the PSF depends on the angle relative to the optical axis, and combining images from different cameras could potentially deteriorate the final PSF. Therefore, we follow the convention of <cit.> and assume the Gaussian has a FWHM of 10 arcsec. The line-of-sight integration of the observed quantities described above is performed to a depth of 10 Mpc in radius. To summarize, the observational data used in our analysis include two-dimensional images of the SZ signal, X-ray SB, and X-ray temperature. We then use our triaxial model to generate analogous images based on the model parameters delineated in Table <ref>. The observed and model-generated images can then be directly compared to facilitate our fitting process, and the method employed for this fitting procedure is elaborated upon in the following section. §.§ Fitting Formalism The χ^2 statistic is used to define the likelihood of the model. We use emcee <cit.>, a Python-based affine-invariant ensemble Markov chain Monte Carlo <cit.> package, for the model fitting process. By performing MCMC sampling <cit.>, we determine the posterior distribution of the parameters that describe the triaxial model. When conducting a model fit with the data, we occasionally need to adjust the scale parameter of the stretch move within the affine-invariant ensemble sampling algorithm implemented in the package to enhance performance <cit.>. We define the χ^2 functions for our analysis below, which are based on two-dimensional maps of the SZ and X-ray data rather than the original one-dimensional radial profiles used in the CLUMP-3D method presented in <cit.>. The χ^2 function for the two-dimensional SZ map is χ^2_ SZ = ∑_i,j=1^N_ y[y_i - ŷ_i] ( C^-1_ SZ)_ij[y_j - ŷ_j], where ŷ_i is the model Compton-y within a pixel, and y_i is the observed value. To deal with the correlated noise in the SZ data, we use the inverse of the uncertainty covariance matrix (C^-1_ SZ). Similarly, the χ^2 function for the X-ray temperature map is χ^2_ T = ∑_i=1^N_ T(T_ sp, i - T̂_ sp, i/δ T_ sp,i)^2, where T̂_ sp, i is the model spectroscopic temperature within a pixel, and T_ sp, i is the observed value with uncertainty δ T_ sp,i. For the X-ray SB, we employ a dual approach: we use a two-dimensional model fit within the circular region that encloses 80% of the emission and a one-dimensional analysis for the outside region, where the background and the source emission are comparable and the signal-to-noise ratio is relatively low. In the exterior region, we compute azimuthal medians in annular bins to mitigate biases in measuring the X-ray SB caused by gas clumping, as suggested by <cit.>. While our current analysis solely uses the two-dimensional map of the X-ray temperature, in future work we intend to implement an approach that is fully consistent with our treatment of the X-ray SB to also mitigate local deviations from homogeneity in the X-ray temperature data <cit.>. The combined likelihood then becomes χ^2_ SB = χ^2_ SB,1D + χ^2_ SB,2D where χ^2_ SB,1D = ∑_i=1^N_ SB,1D(S_ X,1D, i - Ŝ_ X,1D, i/δ S_X,1D,i)^2, and χ^2_ SB,2D = ∑_i=1^N_ SB,2D(S_ X,2D, i - Ŝ_ X,2D, i/δ S_ X,2D,i)^2, where Ŝ_ X, i is the model SB, and S_ X, i and δ_S,i are obtained from the observational data.
We currently employ SB measurements and the corresponding error for our 2D analysis assuming Gaussian statistics. This should be a valid assumption, as we define regions with sufficiently large photon counts (i.e., ≥ 20). However, the formally correct approach is to use the Cash statistic, which accounts for Poisson fluctuations in the photon counts <cit.>. Fits using the Cash statistic for photon counting in the low count regime will be explored in future works. Finally, the total χ^2 statistic becomes χ^2_ X+SZ = χ^2_ SZ + χ^2_ T + χ^2_ SB, and the MCMC is used to sample χ^2_ X+SZ within the parameter space near the best fit. §.§ Parameter Estimation with Mock Data To validate the accuracy of our model fitting algorithm, we conduct a full analysis using mock observations of a galaxy cluster described by our model from known input parameter values. Using the model parameters outlined in Table <ref>, we generate model SZ, X-ray SB and temperature maps, incorporating the instrument PSF response. For this test, we assume the mock cluster has the following characteristics: z=0.18, M_500= 8.8 × 10^14M_⊙, R_500= 7.4. Additionally, we take the following values for the geometric configuration and electron density and pressure parameters, with (q_ ICM,1, q_ ICM,2, cosθ, φ, ψ) = (0.6, 0.75, 0.8, -25, 60), (n_0, ζ_ c, ζ_t, β_e, η_e, γ_e) = (0.002, 175, 1.5, 0.6, 0.3, 1.8), P_0, α_p = (10.0, 1.0). In this case, e_∥ = 1.02. The model maps generated with the input parameters are presented in Fig. <ref>. For each pixel based on its coordinates within the observed map, we calculate the observables projected onto the two-dimensional sky plane (Sec. <ref>). Then, instrumental effects, including the PSF response, are applied. As we discuss in the next section, our baseline analysis of the observed data uses the combined and ACT SZ effect map <cit.>, and we assume a PSF with a FWHM of 1.6. To ensure adequate angular sampling of the PSF, we require a maximum pixel size equal to the FWHM divided by three. In addition, we incorporated noise into each mock observation. Using the error maps for the observed data, we randomly sampled Gaussian noise distributions for the SZ, X-ray SB, and X-ray temperature maps, respectively. Figure <ref> shows the posterior distribution of the parameters from our fit to this mock observation. The posterior distributions indicate that we can accurately recover most of the varied parameter values within the expected deviations due to noise fluctuations. Thus, our fitting methodology is able to reliably determine the underlying shape and thermodynamics of the observed mock galaxy cluster. The use of both SZ and X-ray data in our analysis allows us to measure the three-dimensional geometry of the ICM distribution by constraining the elongation parameter (Sec. <ref>), since the two observational probes redundantly measure the thermodynamic properties of the gas along the line-of-sight. However, it should be noted that there may be degeneracies in determining cluster shape through this multi-probe approach depending on the relative orientation of the geometry, especially in inferring the geometric parameters of the 3D structure, as discussed in <cit.>. These degeneracies can cause bias in the recovered shape parameters along with multimodality in the posterior distributions. A further exploration of these degeneracies, in particular as they pertain to the observational data available for the CHEX-MATE sample, will be included in a subsequent paper in this series. 
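A minimal sketch of the mock-observation step just described, shown for an SZ-like map (illustrative function and array names, not the pipeline; the FWHM and pixel size follow the values quoted above, and the noise level is an arbitrary uniform placeholder):
import numpy as np
from scipy.ndimage import gaussian_filter

def make_mock(model_map, rms_map, fwhm_arcmin=1.6, pix_arcmin=1.6 / 3.0, seed=0):
    # convolve the noiseless model with a Gaussian PSF, then add Gaussian noise per pixel
    sigma_pix = (fwhm_arcmin / pix_arcmin) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    smoothed = gaussian_filter(model_map, sigma=sigma_pix)
    rng = np.random.default_rng(seed)
    return smoothed + rng.normal(size=model_map.shape) * rms_map

# stand-in arrays; in the analysis, model_map comes from the triaxial model and rms_map from
# the observed error maps
y_mock = make_mock(np.zeros((60, 60)), np.full((60, 60), 9e-6))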
§ APPLICATION TO CHEX-MATE DATA In this section, we introduce the X-ray and SZ data collected from our program. We apply the triaxial analysis technique to analyze a CHEX-MATE galaxy cluster PSZ2 G313.33+61.13 (Abell 1689), and the cluster serves as an illustrative example to demonstrate the method. §.§ Data Table <ref> summarizes the SZ and X-ray data from CHEX-MATE available for our multiwavelength analysis of the ICM distribution. The foundation of our analysis is the 3 Msec observing program CHEX-MATE <cit.>, from which we have obtained two-dimensional X-ray SB and temperature maps produced using the Voronoi tessellation method <cit.>. The details of the image production are reported in <cit.> and <cit.>, and here we report briefly the main analysis steps. The observations of the clusters were obtained using the EPIC instrument <cit.>. To create the X-ray SB map, photon-count images in the [0.7-1.2] keV range were extracted from the data acquired using the MOS1, MOS2, and pn cameras on the instrument. The energy band was selected to optimize the contrast between the emission from the source and the background <cit.>. The images from all three cameras were combined to maximize the statistical significance while accounting for the energy-band responses. Additionally, the X-ray maps are instrumental-background subtracted and corrected for the exposure. Point sources are removed from the analysis <cit.> by masking them with circular regions that appear as empty circular regions in the X-ray maps in Fig. <ref>. Furthermore, they are spatially binned to have at least 20 counts per bin using the Voronoi technique. X-ray temperature maps <cit.> were prepared in a similar manner for the data obtained in the [0.3-7] keV band, with background modeling <cit.> and spectral fitting performed. The fitting procedure to ascertain the temperature was done utilizing the XSPEC <cit.>, which was employed to minimize the modified Cash statistics <cit.> with the assumption of <cit.> metallicity. Subsequently, Voronoi-binned maps were generated to achieve a high signal-to-noise ratio (∼30) for each cell. SZ maps are available for all of the CHEX-MATE galaxy clusters by definition <cit.>. From these data we have generated a custom y-map using the Modified Internal Linear Component Algorithm <cit.> with an improved angular resolution of 7 FWHM compared to the one publicly released by with an angular resolution of 10 FWHM <cit.>. Also, ground-based SZ observations from cosmic microwave background (CMB) surveys, including the ACT and the South Pole Telescope <cit.>[<https://pole.uchicago.edu/public/data/sptsz_ymap/>], as well as the Caltech Submillimeter Observatory (CSO)'s Bolocam galaxy cluster archive[<https://irsa.ipac.caltech.edu/data/Planck/release_2/ancillary-data/bolocam/bolocam.html>] <cit.>, provide higher angular resolution data for a subset of CHEX-MATE clusters. Some of these ground-based data are currently publicly accessible, while others are slated for future release. In this demonstration paper, we make use of the ACT SZ component-separated maps. The recent data release 4 (DR4) from the ACT provides component-separated maps, one of which is the SZ <cit.>. These maps were generated by analyzing data from a 2,100 square degree area of the sky, captured using the ACTPol receiver <cit.> at 98 and 150 GHz. This data offers more than four times finer angular resolution compared to the map. Then, the maps were jointly analyzed and combined with data. 
Rather than using the noise estimate provided with these data, which is quantified as a two-dimensional power spectral density, we instead follow an approach based on the recent analysis of similar joint ACT and Planck maps in <cit.>. Specifically, we randomly sample 10,000 maps, ensuring that their size aligns with that of the input SZ data, in the corresponding ACT region (for instance, the region designated as `BN' for the cluster Abell 1689 analyzed in the next section). Then, we compute the covariance using these maps to estimate the noise covariance matrix. The resulting noise rms for the y-map is approximately ∼ 9 × 10^-6 per 0.5 arcmin square pixel, and the diagonal elements of the noise covariance matrix are shown along with the y-map in Fig. <ref>. §.§ PSZ2 G313.33+61.13 (Abell 1689) Using the datasets described above, we demonstrate our fitting method for PSZ2 G313.33+61.13 (Abell 1689), which is a Tier-2 cluster in the CHEX-MATE sample located at z=0.1832 with an SZ-estimated mass of M_500 = 8.77 × 10^14M_⊙. We note the lensing mass measurement of the cluster is ∼70% higher than the hydrostatic mass estimate; see <cit.>. We conducted a triaxial fit aligning the model center with the X-ray peak <cit.>. For a morphologically regular cluster, like Abell 1689, any deviations or offsets between the SZ and X-ray measurements are expected to have minimal impact on the overall model fit. The Planck + ACT SZ y-maps, along with the X-ray SB and temperature maps, are shown in Fig. <ref>. Maps of the rms noise for each observable are also included, and indicate that the cluster is imaged at high signal to noise. This particular cluster was chosen for our demonstration because its triaxial shape has been well studied in the literature <cit.>. For example, <cit.> performed a gas-only analysis using radial profiles of the X-ray and SZ observations from Chandra, WMAP, along with various ground-based SZ facilities, and constrained the shape and orientation of the cluster's triaxial model with q_ ICM,1 = 0.70 ± 0.15, q_ ICM,2 = 0.81 ± 0.16, and cosθ = 0.70 ± 0.29. A subsequent study by <cit.> presented a combined multiwavelength analysis that included lensing data, with the inferred ICM distribution being q_ ICM,1 = 0.60 ± 0.14, q_ ICM,2 = 0.70 ± 0.16. Their derived value of cosθ, obtained from the combined lensing and X-ray/SZ analysis, was found to be 0.93 ± 0.06. The large cosθ suggests that the major axis of the triaxial ellipsoid (x^ int_3 in Fig. <ref>) is closely aligned with the observer's line of sight. Figure <ref> shows the posterior of the model parameters that describe our triaxial fit of PSZ2 G313.33+61.13, using the data from Planck, ACT, and XMM-Newton. We find axial ratios of q_ ICM,1 = 0.65 ± 0.02 and q_ ICM,2 = 0.79 ± 0.02. These values are consistent with previous results, but an order of magnitude more precise (Table <ref>). Our fits indicate the major axis of Abell 1689 is almost perfectly aligned with the line of sight, with cosθ≥ 0.96 at 90% confidence. While previous works also indicated such an alignment, a much wider range of orientations was allowed in those fits. We note that our analysis only includes statistical uncertainties on the fit, and the uncertainty due to data calibration is not taken into account here. Also, the elongation parameter (Eq. <ref>), which is the ratio of the size of the ellipsoid along the observed line-of-sight to the major axis of the projected ellipse in the sky plane, quantifies the 3D geometry of the triaxial ellipsoid model of the ICM.
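For reference, summaries of the kind just quoted can be read directly off the posterior samples; a few-line sketch with a synthetic stand-in chain (the column ordering and numbers below are illustrative, not the real chain):
import numpy as np

rng = np.random.default_rng(1)
chain = rng.normal([0.65, 0.79, 0.97], [0.02, 0.02, 0.01], size=(20000, 3))  # synthetic stand-in samples
q1, q2, cos_theta = chain[:, 0], chain[:, 1], chain[:, 2]
for name, s in [("q_ICM,1", q1), ("q_ICM,2", q2)]:
    lo, med, hi = np.percentile(s, [16, 50, 84])
    print(f"{name} = {med:.2f} -{med - lo:.2f}/+{hi - med:.2f}")
print(f"cos(theta) >= {np.percentile(cos_theta, 10):.2f} at 90% confidence")  # one-sided lower bound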
We thus present constraints on e_∥ rather than on φ and ψ. The inferred e_∥ is well constrained in the fit to a value of 1.24 ± 0.03 and is consistent with the gas analysis result of <cit.>, who found e_ = 0.66 ± 0.21, which corresponds to 1.15 ≤ e_∥≤ 2.22. Figure <ref> shows the reconstructed SZ, X-ray SB and temperature maps of PSZ2 G313.33+61.13, incorporating the instrument response, generated using the recovered parameters from Fig. <ref>. The difference map, which is created by taking the input data and subtracting the reconstructed model from it, reveals that the majority of the pixels exhibit relative errors that are spread within a range of ± 4σ (Fig. <ref>). The residuals for the SZ, X-ray SB, and X-ray temperatures are distributed around zero. Their respective standard deviations are equivalent to 1.5σ, 0.6σ, and 1.1σ when fitted by a Gaussian. For comparison, we performed an additional X-ray + SZ fit using only the SZ data, without incorporating the ground-based ACT data. We obtain posteriors that significantly deviate from our baseline fit with ACT data. We attribute this to the coarse angular resolution of which prevents it from resolving morphological features given the angular size of Abell 1689 at z=0.1832. To test this, we generated two sets of mock observations using the recovered parameters from our baseline fit to the observed data from both and ACT (along with ). One mock was based on the properties of the y-map from + ACT, while the other mimicked the y-map with only SZ data, including the appropriate noise and PSF shape for each case. Our fit to the mock multiwavelength data with the + ACT y-map yields recovered parameters closely aligned with the input model, suggesting these data can accurately recover the input ICM shape. In contrast, the second mock observation based on the -only y-map produces a set of parameters significantly deviating from the input. This suggests that the SZ data from alone are insufficient to reliably fit our triaxial model, at least for a galaxy cluster with this specific shape at this specific redshift. This confirms that our fit to observed data using the -only y-map are likely biased. In a subsequent paper we will explore this issue in more detail, to better understand which types of galaxy clusters can (or cannot) be reliably reconstructed with the data available for CHEX-MATE. Furthermore, in order to evaluate how the much higher overall signal to noise of the X-ray SB compared to the SZ and X-ray temperature impacts the results, we carried out an additional fit using the reduced χ^2 for each of the three observables in order to weight them equally in the fit. The results of this fit indicate that there is only a minimal shift in the values of the derived geometric parameters based on this equal weighting of the three observables. Specifically, in the reduced χ^2 fit, q_ ICM,1 has a value of 0.70 ± 0.04, q_ ICM,2 is 0.78 ± 0.05, and e_∥ stands at 1.22 ± 0.07. We also attempted to account for fluctuations in the calibration uncertainty, which can be especially important for the temperature profile <cit.>. We conducted model fits by introducing an additional ∼10% uncertainty of the temperature, but observed little changes in the parameters, with posteriors displaying similar levels of variation. As we will illustrate in subsequent studies, the derived geometric parameters of the ICM distribution, such as the elongation that quantifies the 3D geometry, can be applied in conjunction with gravitational lensing measurements. 
For these fits, we will work under the assumption that the triaxial axes of the ICM and dark matter are coaligned, but with axial ratios that are allowed to vary. The lensing analysis becomes crucial for discerning the triaxial shapes of dark matter, circumventing the need to rely on hydrostatic equilibrium or simulation-based corrections. Consequently, a comprehensive multi-probe analysis facilitates a characterization of the total matter distribution, which is essential for precise lensing-based mass calibrations <cit.>, along with allowing for a determination of the distribution of non-thermal pressure support <cit.>. § CONCLUSIONS We have improved a multi-probe analysis package to fit the three-dimensional ellipsoidal shapes of CHEX-MATE galaxy clusters. This package builds upon CLUMP-3D <cit.>, which was employed to analyze the triaxial shapes of CLASH clusters <cit.>. Specifically, we have made the following improvements: 1) we model 2D distributions of the SZ and X-ray temperature data, in contrast to the 1D azimuthally averaged profiles in these quantities used by <cit.>, 2) we parametrize electron density and pressure rather than density and temperature, reducing the number of parameters and speeding up the fit, and 3) we have ported the code to Python to facilitate a future public release. For the two-dimensional map analyses, we have added the capability to include publicly available SZ data from ground-based CMB surveys such as ACT, in addition to the default SZ maps. We verified the triaxial analysis method through mock data analysis and applied it to the actual CHEX-MATE galaxy cluster, PSZ2 G313.33+61.13 (Abell 1689). The analysis effectively constrains the model geometry, in particular, at the few percent level for the axial ratios. Our results are consistent with previous analyses of Abell 1689 available in the literature. Specifically, we find axial ratios of q_ ICM,1=0.65 ± 0.02, q_ ICM,2=0.79 ± 0.02, and elongation parameter e_∥ = 1.24 ± 0.03. Compared to the similar gas-only analysis using X-ray and SZ data presented in <cit.>, the axial ratios and elongation parameters in our study demonstrate a substantial improvement, with uncertainties an order of magnitude lower. This marked improvement is attributable to multiple factors: our use of deeper new data not available to <cit.>; our use of an SB image rather than a shallower Chandra SB image; our use of much higher quality SZ data from and ACT rather than from WMAP and SZA/OVRO/BIMA; and our improved analysis formalism making use of fully 2D images for all of the observables rather than a projected elliptically averaged profile of X-ray SB and temperature along with a single aperture photometric measurement of the SZ signal. Our results indicate that Abell 1689 has axial ratios typical of what is expected for the general population of galaxy clusters <cit.>, but a remarkably close alignment between the major axis and the line of sight. This alignment has resulted in exceptional lensing properties of Abell 1689, such as an abundance of strong lensing features <cit.>, one of the largest Einstein radii observed <cit.>, and an extremely large concentration for its mass when fitted to a spherically symmetric model <cit.>. We thus conclude that there is nothing unusual about the triaxial shape of Abell 1689, other than its orientation. In addition, the estimated axial ratios of the cluster yield a triaxiality parameter t=0.66 <cit.>. 
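As a quick arithmetic check of the quoted value (the convention assumed here, with q_ ICM,1 the minor-to-major and q_ ICM,2 the intermediate-to-major axial ratio, is t = (1 - q_ ICM,2^2)/(1 - q_ ICM,1^2); the definition used in the text is the one of its cited reference):
q1, q2 = 0.65, 0.79           # fitted axial ratios from the previous section
t = (1.0 - q2**2) / (1.0 - q1**2)
print(round(t, 2))            # 0.65, consistent with the quoted t = 0.66 given rounding of the ratios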
While the incorporation of lensing data is necessary for a direct quantitative comparison with DM axial ratios, the calculated t classifies this halo as being close to the `prolate' population that comprises ∼80% of the total cluster fraction in the DM only simulations <cit.>. The integration of lensing data for a comprehensive multi-wavelength analysis, as well as the public release of the software and data products, will be addressed in subsequent papers of this series. J.K. and J.S. were supported by NASA Astrophysics Data Analysis Program (ADAP) Grant 80NSSC21K1571. J.K. is supported by a Robert A. Millikan Fellowship from the California Institute of Technology (Caltech). M.S. acknowledges financial contribution from contract ASI-INAF n.2017-14-H.0. and from contract INAF mainstream project 1.05.01.86.10. M.E.D. acknowledges partial support from the NASA ADAP, primary award to SAO with a subaward to MSU, SV9-89010. S.E., F.G., and M.R. acknowledge the financial contribution from the contracts ASI-INAF Athena 2019-27-HH.0, “Attività di Studio per la comunità scientifica di Astrofisica delle Alte Energie e Fisica Astroparticellare” (Accordo Attuativo ASI-INAF n. 2017-14-H.0), and from the European Union’s Horizon 2020 Programme under the AHEAD2020 project (grant agreement n. 871158). This research was supported by the International Space Science Institute (ISSI) in Bern, through ISSI International Team project #565 (Multi-Wavelength Studies of the Culmination of Structure Formation in the Universe). A.I., E.P., and G.W.P. acknowledge support from CNES, the French space agency. K.U. acknowledges support from the National Science and Technology Council of Taiwan (grant 109-2112-M-001-018-MY3) and from the Academia Sinica (grants AS-IA-107-M01 and AS-IA-112-M04). B.J.M. acknowledges support from STFC grant ST/V000454/1.
http://arxiv.org/abs/2307.05819v1
20230711215547
Linear and nonlinear transport equations with coordinate-wise increasing velocity fields
[ "Pierre-Louis Lions", "Benjamin Seeger" ]
math.AP
[ "math.AP" ]
Université Paris-Dauphine and Collège de France [email protected]@ceremade.dauphine.fr University of Texas at Austin mailto:[email protected]@math.utexas.edu Partially supported by the National Science Foundation under award number DMS-1840314 Linear and nonlinear transport equations with coordinate-wise increasing velocity fields Benjamin Seeger August 12, 2023 ======================================================================================== We consider linear and nonlinear transport equations with irregular velocity fields, motivated by models coming from mean field games. The velocity fields are assumed to increase in each coordinate, and the divergence therefore fails to be absolutely continuous with respect to the Lebesgue measure in general. For such velocity fields, the well-posedness of first- and second-order linear transport equations in Lebesgue spaces is established, as well as the existence and uniqueness of regular ODE and SDE Lagrangian flows. These results are then applied to the study of certain nonconservative, nonlinear systems of transport type, which are used to model mean field games in a finite state space. A notion of weak solution is identified for which a unique minimal and maximal solution exist, which do not coincide in general. A selection-by-noise result is established for a relevant example to demonstrate that different types of noise can select any of the admissible solutions in the vanishing noise limit. § INTRODUCTION This paper has two main purposes. First, we develop a well-posedness theory for first- and second-order linear transport equations for a new class of irregular velocity fields, as well as the corresponding ODE and SDE regular Lagrangian flows. We then apply the results to the study of certain nonlinear transport systems, motivated in particular by applications to mean field games (MFG) with a finite state space. For a fixed, finite time horizon T >0, we study both the terminal value problem (TVP) for the nonconservative equation -_t u - b(t,x) ·∇ u = 0 in (0,T) ×^d, u(T,·) = u_0 as well as the dual, initial value problem (IVP) for the conservative equation _t f + ÷(b(t,x) f) = 0 in (0,T) ×^d, f(0,·) = f_0, under the assumption that b is coordinate-by-coordinate (semi)-increasing: _x_j b^i(t,·) ≥ -C(t) δ_ij for all t ∈ [0,T], i,j = 1,2,…, d, and some C ∈ L^1_+([0,T]). The divergence of such a vector field is bounded from below, which means that, formally, the flow _t ϕ_t,s(x) = b(t, ϕ_t,s(x)), t ∈ [s,T], ϕ_s,s(x) = x will not concentrate at null sets. This indicates that the two problems (<ref>) and (<ref>) are amenable to a solution theory in Lebesgue spaces. On the other hand, the measure ÷ b is not in general absolutely continuous with respect to Lebesgue measure, and this leads to the formation of vacuum for t > s. It is therefore the case that the existing theory of renormalized solutions, initiated by DiPerna and the first author <cit.> for Sobolev velocity fields and extended to the case b ∈ BV_ and ÷ b ∈ L^ by Ambrosio <cit.>, does not cover our present situation. In particular, the two problems (<ref>) and (<ref>) cannot be covered with a unified theory, due to the fact that (<ref>) cannot be understood in the distributional sense if ÷ b is not absolutely continuous with respect to Lebesgue measure and u ∈ L^1_. Nevertheless, we exploit the dual relationship between the two problems, and provide a link to the forward, regular Lagrangian flow (<ref>). 
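For orientation, a one-dimensional example (an illustrative aside, not taken from the text) already displays the phenomena just described. The velocity field b(t,x) = sign(x) is increasing in x, so it satisfies the standing assumption with C_1 ≡ 0, yet its distributional divergence ÷ b = 2 δ_0 is purely singular. The forward flow is ϕ_t,s(x) = x + (t-s) sign(x) for x ≠ 0: it preserves the Lebesgue measure of sets away from the origin but opens the vacuum region (-(t-s), t-s), so no concentration occurs. Run in the compressive (backward) direction, every trajectory starting in [-(t-s), t-s] reaches the origin, and an interval of positive measure collapses onto a null set; this is precisely the asymmetry between the nonconservative and conservative problems alluded to above.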
Analogous results are also proved for the degenerate, second order equations -_t u - b(t,x) ·∇ u - [ a(t,x) ∇^2 u]= c(t,x) in (0,T) ×^d, u(T,·) = u_T and _t f + ÷(b(t,x) f) - ∇^2 ·(a(t,x)f) = 0 in (0,T) ×^d, f(0,·) = f_0, where b satisfies (<ref>), a(t,x) = 1/2σ(t,x)σ(t,x)^T is a nonnegative, symmetric matrix, and σ∈^d × m; and the SDE flow d_t Φ_t,s(x) = b(t,Φ_t,s(x))dt + σ(t,Φ_t,s(x))dW_t, where W is an m-dimensional Brownian motion. We then turn to the study of nonlinear, nonconservative systems of transport type that take the form -_t u - f(t,x,u) ·∇ u = g(t,x,u) in (0,T) ×^d, u(T,·) = u_T. Here, u, f, and g are vector-valued, with f ∈^d and g,u ∈^m for some integers m,d ≥ 1, f and g are local functions of (t,x,u) ∈ [0,T] ×^d×^m, and the equation reads, for each coordinate i = 1,2,…, m, -_t u^i - ∑_j=1^d f^j(t,x,u) _x_j u^i = g^i(t,x,u). The primary motivation for the consideration of (<ref>) comes from the study of mean field games (MFG). These are models for large populations of interacting rational agents, which strategize in order to optimize an outcome, based on the collective behavior of the remaining population, while subject to environmental influences. The master equation for mean field games with a (finite) discrete state space takes the general form of the system (<ref>) with d = m, as described in <cit.>; see also <cit.>. Alternatively, systems of the form (<ref>) arise upon exploiting dimension reduction techniques for continuum-state MFG models in which the various data depend on the probability distribution of players through a finite number of observables, i.e. μ↦ u( t, ∫Φ dμ) for probability measures μ and some given continuous ^d-valued function Φ. This connection is explored by the authors and Lasry in <cit.>. We note also that the special case where d = m, f(t,x,u) = -u, and g(t,x,u) = 0 leads to the system -_t u + u ·∇ u = 0, which arises in certain models describing the flow of compressible gasses at low density with negligible pressure <cit.>. The nonlinear equation (<ref>) can formally be connected to a system of characteristic ODEs on ^d ×^m. In order to draw the analogy to MFG PDE systems and the master equation in a continuum state space (see for instance <cit.>), it is convenient to represent the characteristics as the forward backward system -_s U_s,t(x) = g(s,X_s,t(x),U_s,t(x)) U_T,t(x) = u_T( X_T,t(x)), _s X_s,t(x) = f(s,X_s,t(x), U_s,t(x)) X_t,t(x) = x. If f, g, and u_T are smooth and the interval [t,T] is sufficiently small, then (<ref>) can be uniquely solved, and the unique, smooth solution u of (<ref>) is given by u(t,x) := U_t,t(x). The argument fails for arbitrarily long time intervals, in view of the coupling between X and the terminal condition for U.[When written in forward form, the breakdown manifests as a failure to invert the characteristics describing the state variable x ∈^d, which may cross in finite time.] The monotone regime is explored in <cit.>, i.e., d = m and (-g,f): ^d ×^d →^d ×^d and u_T: ^d →^d are smooth and monotone (one of which is strictly monotone). In that case, (<ref>), and therefore (<ref>), can be uniquely solved on any time interval. This regime is exactly analogous to the monotonicity condition of Lasry and the first author for MFG systems with continuum state space <cit.>, and, as in that setting, strong regularity and stability results can be established for (<ref>). The monotone regime also allows for a well-posed notion of global weak solutions of (<ref>), even when f, g, and u_T fail to be smooth <cit.>. 
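To illustrate the short-time solvability of the characteristic system just mentioned, consider the Burgers-type case f = -u, g = 0 in one dimension: U is then constant along each characteristic and X_T = x - (T-t)U, so u(t,x) = U solves the scalar fixed-point equation U = u_T(x - (T-t)U), which a simple Picard iteration handles whenever (T-t) times the Lipschitz constant of u_T is less than one. A minimal numerical sketch (purely illustrative; the terminal condition is an arbitrary smooth decreasing choice):
import numpy as np

def u_T(x):                         # smooth, decreasing terminal condition (illustrative)
    return -np.arctan(x)

def u_via_characteristics(x, t, T, n_iter=100):
    U = u_T(x)                      # initial guess
    for _ in range(n_iter):
        U = u_T(x - (T - t) * U)    # Picard iteration on U = u_T(X_T)
    return U                        # = u(t, x) while the iteration contracts

print([round(u_via_characteristics(x, t=0.0, T=0.5), 3) for x in np.linspace(-3.0, 3.0, 7)])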
If there exist functions H: ^d ×^d → and v_T: ^d → such that (g,f) = (∇_x, ∇_p) H and u_T = ∇ v_T (the so-called “potential regime”), then, formally, one expects u(t,·) = ∇ v(t,·), where v solves the Hamilton-Jacobi equation _t v + H(t,x,∇_x v) = 0 in (0,T) ×^d, v(T,·) = v_T, for which global, weak solutions can be understood with the theory of viscosity solutions <cit.>. Weak solutions of (<ref>) can then be indirectly understood as the distributional derivative of v, an approach which is taken in <cit.>. In the special case where d = m = 1, (<ref>) can be studied with the theory of entropy solutions of scalar conservation laws <cit.>. In this paper, using the theory developed here for linear equations, we identify new regimes of assumptions on f, g, and u_T for which a notion of weak solution can be identified for any dimensions d,m ≥ 1. Under a certain ordering structure, the existence of a unique maximal and minimal solution is established, which do not coincide in general. This nonuniqueness is further explored from the viewpoint of stochastic selection, and we prove, for a specific but informative example, that any of the family of solutions can be distinguished by a certain vanishing noise limit, indicating that the choice of a mean field game equilibrium is very sensitive to the manner in which low-level, systemic noise is introduced to the model. §.§ Summary of main results We list the main results of the paper here, in an informal setting. More precise statements and discussions can be found within the body of the paper. The assumption that b satisfy the (semi)-increasing condition (<ref>) implies that b ∈ BV_. We emphasize again, however, that the measure ÷ b will in general have a singular part with respect to Lebesgue measure, and we therefore cannot appeal to the existing results on renormalized solutions to transport equations with irregular velocity fields. We do not give a full account of the vast literature for such problems, but refer the reader to the thorough surveys <cit.> and the references therein. Our approach to the first-order transport problem is to study the well-posedness of the regular Lagrangian flow for (<ref>) directly, rather than using PDE methods. The assumption (<ref>) on b allows for a comparison principle with respect to the partial order x,y ∈^d, x ≤ y ⇔ x_i ≤ y_i for all i = 1,2,…, d. A careful regularization procedure then leads to the existence of a minimal and maximal flow, which coincide a.e., and we have the following result (see Section <ref> below for more precise statements): Assume b satisfies (<ref>). Then, for a.e. x ∈^d, there exists a unique absolutely continuous solution of (<ref>), and there exists a constant C > 0 such that, for all 0 ≤ s ≤ t ≤ T, | ϕ^-1_t,s(A) | ≤ C|A| for all measurable A ⊂^d. If (b^)_ > 0 is a family of smooth approximations of b and (ϕ^)_ > 0 are the corresponding flows, then, as → 0, ϕ^→ϕ in C_t L^p_x, and L^p_x,W^1,1_t for all p ∈ [1,). We also obtain analogous results for the SDE (<ref>).Degenerate linear parabolic equations and SDEs with irregular data have been studied in a number of works that generalize the DiPerna-Lions theory and the Ambrosio superposition principle; see <cit.>. A common source of difficulty involves the dependence of σ on the spatial variable, even if it is smooth. 
This is the case, for instance, when b ∈ BV_ and ÷ b ∈ L^ treated by Figalli <cit.>, or when b satisfied a one-sided Lipschitz condition from below, as considered by the authors in <cit.>; in both settings, the results are constrained to σ(t,x) = σ(t) constant in ^d. In our present setting, we can relax the spatial dependence, and we assume that σ is Lipschitz and satisfies σ^ik(t,x) = σ^ik(t, x_i), for all (t,x) ∈ [0,T] ×^d, i = 1,2,…, d, k = 1,…,m. Then, in Section <ref>, we turn to the study of the linear transport equations (<ref>)-(<ref>), as well as the second-order equations (<ref>)-(<ref>), which can be related to the ODE and SDE flows in Section <ref>. The flow (<ref>) (resp. (<ref>)) gives rise to continuous solution operators on L^p_ (resp. L^p), p ∈ (1,) for the dual problems (<ref>)-(<ref>) (resp. (<ref>)-(<ref>)). The resulting solutions are stable under regularizations of b or vanishing viscosity limits in the spaces C_t L^p_x, with the convergence being strong for the nonconservative equations (<ref>)/(<ref>) and weak for the conservative equations (<ref>)/(<ref>). The solution operator for the nonconservative equation (<ref>) cannot be made sense of in the sense of distributions, because the measure ÷ b can have a singular part. We nevertheless provide a PDE-based characterization for solutions that are increasing or decreasing with respect to the partial order (<ref>). In order to give meaning to the ill-defined product b ·∇ u in this context, we introduce mollifications of a one-sided nature that lead to commutator errors which are shown to possess a sign; this is to be compared with the renormalization theory initiated in <cit.>, in which the commutator errors are shown to converge to zero with the convolution parameter. This leads to a notion of sub and supersolution for (<ref>), which are proved to satisfy a comparison principle. The solution operator for (<ref>) can thus be alternatively characterized in terms of these regularizations, and, moreover, the regular Lagrangian flow (<ref>) can be recovered as the (vector)-valued solution of the terminal value problem (<ref>) with u_T(x) = x. These two viewpoints on the transport equation and ODE flow are instrumental in our understanding of the nonlinear equations to follow. The continuity equation (<ref>) (or the Fokker-Planck equation (<ref>) in the second-order case) can then be related to the nonconservative equation through duality. Importantly, arbitrary distributional solutions of (<ref>) are not unique in general. We prove that, if f_0 ≥ 0, then there exist unique distributional solutions of (<ref>) and (<ref>) which coincide with the duality solution. Moreover, this result is proved independently of the superposition principle; instead, we use the duality with the nonconservative equation, and the characterization of its solutions in terms of one-sided regularizations. A consequence of the uniqueness of nonnegative distributional solutions of (<ref>) is that, if f and |f| satisfy (<ref>) in the sense of distributions, then f is the “good” (duality) solution (see Corollary <ref> below). We do not know whether this property characterizes the duality solution, or, in other words, whether the duality solution satisfies the renormalization property in general. This should be compared with <cit.>, where the authors resolve the same questions for half-Lipschitz velocity fields. The paper concludes in Section <ref> with the study of the nonlinear equation (<ref>) and the associated system (<ref>). 
We operate under the assumption that the discontinuous nonlinearities f and g satisfy, for some C ∈ L^1_+([0,T]), { for all i,j = 1,2,…, d and k,ℓ = 1,2,…, m, _x_i f^j(t,·,·) ≥ - C(t) δ_ij, _u_k g^ℓ(t,·,·) ≥ -C(t) δ_kℓ, _x_i g^ℓ≤ 0, and _u_k f^j ≤ 0. . Observe that (<ref>) is satisfied with C ≡ 0 by the particular example of the Burgers-like equation (<ref>). We develop a theory for solutions of (<ref>) that are decreasing with respect to the partial order (<ref>). The first observation is that the decreasing property is propagated, formally, by the solution operator. On the other hand, shocks form in finite time, and so, even if u_T, f, and g are smooth, u(t,·) will develop discontinuities for some t < T in general, requiring a notion of weak solution. We next note that, under the above assumptions, if u is decreasing, then the velocity field b(t,x) := f(t,x,u(t,x)) satisfies (<ref>). Solutions of the nonlinear equation (<ref>) can then be understood as fixed points for the linear problem (<ref>), and at the same time through the system of forward-backward characteristics (<ref>), using the theory for the regular Lagrangian flows in Section <ref>. Assume f and g satisfy (<ref>) and u_T is decreasing. Then there exist a maximal and minimal decreasing solution u^+ and u^- of (<ref>) in the fixed point sense. If u is any other such solution, then u^- ≤ u ≤ u^+. Continuous decreasing solutions of (<ref>) satisfy a comparison principle, which in particular implies that u^+ is below every continuous supersolution and u^- is above every continuous subsolution. In general, u^- and u^+ do not coincide, which must then be a consequence of the formation of discontinuities. This nonuniqueness of solutions is closely related to the question of multiplying distributions of limited regularity. Indeed, in the equation (<ref>), the product f(t,x,u) ·∇ u cannot be defined in a stable (with respect to regularizations) way in view of the fact that, in general, u ∈ BV_ and ∇ u is a locally finite measure. The well-posedness of both strong and weak solutions of MFG master equations in the continuum state space setting has been explored under various sets of monotonicity assumptions <cit.>. The approach in our setting, which involves appealing to Tarski's fixed point theorem for increasing functions on lattices <cit.>, has also been taken in the continuum state space setting, where maximal and minimal solutions were found under related assumptions; see for instance <cit.>. The partial order used in <cit.> comes from the notion of stochastic dominance for probability measures. We note that, for the equation (<ref>) posed on an infinite dimensional Hilbert space of L^2 random variables, the partial order (<ref>) is related to the analogous notion of stochastic dominance for random variables, and we aim to study the infinite dimensional version of (<ref>) in future work. We explore the nonuniqueness issue in more detail for the Burgers equation (<ref>) in one dimension, where the decreasing terminal value has a single discontinuity at 0: -_t u + u _x u = 0 in (0,T) ×, u(T,x) = { x ≤ 0}. It turns out that (<ref>) admits infinitely many fixed point solutions, consisting of a shock traveling with variable speed between 0 and 1. Of course, (<ref>) can be reframed as a scalar conservation law, whose unique entropy solution is the shock-wave solution with speed 1/2. We note that the notion of entropy solution does not extend to the nonconservative equations (<ref>) or (<ref>). 
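A short consistency check (an illustrative aside) makes the nonuniqueness explicit at the level of the characteristic system. For a candidate u(t,x) equal to 1 for x ≤ c(t) and 0 otherwise, with c(T) = 0, the characteristics of the Burgers example above have U constant and X moving with speed -U: a point assigned the value 1 travels left at unit speed and must satisfy X_T = x - (T-t) ≤ 0, while a point assigned the value 0 is stationary and must satisfy X_T = x > 0. Both requirements hold for every x exactly when 0 ≤ c(t) ≤ T - t, which is guaranteed once c(T) = 0 and -c' takes values in [0,1]; the entropy solution corresponds to the particular choice -c' ≡ 1/2.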
We characterize the family of fixed point solutions (<ref>) as limits under distinct types of regularizations of the equation (<ref>). For any c ∈ W^2,1([0,T]) satisfying c(T) = 0 and -c' ∈ (0,1), there exists θ_c ∈ L^1([0,T]) such that, if u^_T is smooth, u^_T _(-,0) in L^1_, and u^ is the unique classical solution of -_t u^ + u^_x u^ = ( _x^2 u^ + θ_c(t) |_x u^|^2 ), u^(T,·) = u^_T, then, for all 1 ≤ p <, as → 0, u^→{ x ≤ c(t)} strongly in C([0,T], L^p_). We interpret Theorem <ref> as a selection-by-noise result for the nonunique problem (<ref>). Indeed, the result can be reformulated on the level of the system (<ref>), which, for (<ref>), becomes the forward-backward system of SDEs - d_s U^_s,t(x) = Z^_s,t(x)dW_s - 1/2θ_c(s) Z^_s,t(x)^2 ds, U^_T,t(x) = u^_T(X^_T,t(x)), d_s X^_s,t(x) = - U^_s,t(x)ds + √(2) dW_s, X^_t,t(x) = x. Various selection methods have been proposed to study mean field games models that do not admit unique solutions, and we refer in particular to <cit.> for problems involving stochastic selection. Our result is distinguished by the consideration of several different descriptions of small noise, each of which selects a different solution for the deterministic problem in the vanishing noise limit. §.§ A note on velocity fields with a one-sided Lipschitz condition Let us remark that the regime in which b satisfies (<ref>) shares many similarities with the setting in which b is half-Lipschitz from below[Indeed, (<ref>) and (<ref>) are identical in one spatial dimension.], that is, (b(t,x) - b(t,y)) · (x-y) ≥ -C(t)|x-y|^2 for (t,x,y) ∈ [0,T] ×^2d and some C ∈ L^1_+([0,T]). A key commonality in both settings is that ÷ b is bounded from below, but not necessarily absolutely continuous with respect to Lebesgue measure. Transport equations and flows for velocity fields satisfying (<ref>) have been studied from a variety of different viewpoints <cit.>, and in <cit.>, the authors obtain very similar results to those described above regarding the existence, uniqueness, and stability of the regular Lagrangian flow forward in time, as well as proving well-posedness and studying properties and characterizations of solutions to the problems (<ref>) and (<ref>) in Lebesgue spaces. A key difference between the two regimes is the behavior of the flow for (<ref>) in the compressive direction, that is, backward in time. For velocity fields satisfying the half-Lipschitz condition (<ref>), the backward ODE is uniquely solvable for all x ∈^d. Moreover, the resulting backward flow is Lipschitz continuous, and it can be identified as the left-inverse to the forward, regular Lagrangian flow; see <cit.> for more precise statements, as well as new characterizations of the time-reversed versions of (<ref>) and (<ref>). On the other hand, when b satisfies (<ref>), the backward problem (<ref>) is not in general unique for every x ∈^d, nor is it true that a globally Lipschitz flow can always be found. We note that, even for examples where the backward flow has a unique solution for Lebesgue-a.e. x ∈^d, neither the stability nor the solvability of the time-reversed versions of (<ref>)-(<ref>) in Lebesgue spaces can be expected to hold, because, in general, any backward flow solution to (<ref>) will concentrate on sets of Lebesgue-measure zero. For a detailed discussion and examples, see subsections <ref> and <ref> below. §.§ Notation Given a bounded function ϕ:^d →, ϕ_* and ϕ^* denote the lower and uppersemicontinuous envelopes, and, if ϕ is ^m, the same notation is used coordinate-by-coordinate. 
We often denote arbitrary functions spaces on ^d as X(^d) = X when there is no ambiguity over the domain. For p ∈ [1,], L^p_ and L^p_ denote the space of p-integrable functions with respectively the weak and weak-⋆ topologies. L^p_ denotes the space of of locally p-integrable functions with the topology of local L^p-convergence, and L^p_, and L^p_, are understood accordingly. The notation 1 denotes the vector (1,1,…, 1) in Euclidean space, the dimension being clear from context. Given two sets A and B, A B := (A\ B) ∪ (B \ A). § PRELIMINARY RESULTS This section contains a collection of results regarding vector-valued notions of increasing/decreasing, as well as a vector-valued maximum principle. §.§ Properties of increasing functions We first introduce a partial order on ^d that is used throughout the paper. For x,y ∈^d, we will write x ≤ y if x_i ≤ y_i for all i = 1,2,…, d. Given a,b ∈^d, a ≤ b, we denote by [a,b] the cube ∏_i=1^d [a_i, b_i]. A function ϕ: ^d →^m, d,m ∈, is said to be increasing if ϕ(x) ≤ϕ(y) whenever x ≤ y. Equivalently, ϕ is increasing if, for each i = 1,2,…,m, ϕ^i is increasing in the x_j-coordinate for each j = 1,2,…, d. Assume that ϕ: ^d →^m is increasing. Then ϕ∈ BV_(^d), and, for each i = 1,2…, m, lim inf_y → xϕ^i(y) = lim inf_y → x, y ⪈ xϕ^i(y) and lim sup_y → xϕ^i(y) = lim sup_y → x, y ⪇ xϕ^i(y). For all j = 1,2,…, d and i = 1,2,…, m, the distribution _j ϕ^i is nonnegative, and is therefore a locally finite measure. For any sequence y_n → x, if y_n ≰ x, then y_n may be replaced with a value y_n' such that y_n' ≤ x, y_n' ≤ y_n, and ϕ^i(y_n') ≤ϕ^i(x). A similar argument holds if y_n ≱ x for all n, and the claim follows. Lemma <ref> implies that each component of an increasing function ϕ: ^d →^m has limits from both the “left” and “right.” We will call an increasing function ϕ / if, for i = 1,2,…, m, ϕ^i(x) = lim sup_y → xϕ^i(y) = lim sup_y → x, y ⪈ xϕ^i(y). Given a nonnegative measure μ, the repartition function ϕ(x) := ∫_(-,x_1)∫_(-,x_2)⋯∫_(-,x_d) dμ is an example of a / increasing function, but such functions do not cover the full range of increasing functions if d ≥ 2; indeed, they are distinguished by the fact that mixed derivatives ∏_j=1^k _x_ℓ_jϕ for any distinct set (ℓ_j)_j=1^k ⊂{1,2,…, d} are still measures. Consider a smooth surface Γ⊂^d that partitions ^d into two open sets, that is, ^d = U_- ∪Γ∪ U_+, Γ = U_+ = U_-, U_+ ∩ U_- = ∅. Let n be the normal vector to Γ that, at all points of Γ, points inward to U_+. If n ≥ 0 everywhere on Γ, in the sense of (<ref>), then ϕ = _U_+ ∪Γ is a / increasing function that is not a repartition function. §.§ ABV functions In one dimension, functions of bounded variation can be written as a difference of non-decreasing functions. With respect to the partial order (<ref>), the generalization of this notion is a strict subspace of BV. Given - < a_i ≤ b_i < for i = 1,2,… d and a function ϕ: Q := ∏_i=1^d [a_i,b_i] →, we say ϕ∈ ABV(Q) if ϕ_ABV(Q) := sup_γϕ∘γ_BV([0,1]) < , where the supremum is taken over all curves γ: [0,1]^d → Q such that γ_i: [0,1] → [a_i,b_i] is increasing for all i = 1,2,…, d. We will say ϕ∈ ABV = ABV(^d) if ϕ_ABV(Q) < for all boxes Q ⊂^d. For example, C^0,1⊂ ABV. It is straightforward to see that ABV = BV when d = 1. Several generalizations of the notion of finite variation to multiple dimensions, besides the space BV, exist in the literature, and the one in Definition <ref> is due to Arzelà <cit.>. It is a strictly smaller subspace than BV (as we show below). 
This notion of variation, along with several others, seems not to have had the same ubiquity in the theory of PDEs as the usual notion of BV, but is particularly relevant in this paper. More details about ABV functions, and many other notions of variation in multiple dimension, can be found in <cit.>. Let ϕ: ^d →. Then ϕ∈ ABV if and only if ϕ = ϕ_1 - ϕ_2 for two increasing functions ϕ_1,ϕ_2:^d →. Let a, b ∈^d, a ≤ b. If ϕ: ^d → and γ: [0,1] → [a,b] are increasing, then ϕ∘γ is increasing, and thus ϕ∘γ_BV([0,1]) = ϕ(b) - ϕ(a). It follows that ϕ∈ ABV, and, by linearity, differences of increasing functions belong to ABV. Now assume ϕ∈ ABV([a,b]), and, for x ∈ [a,b], set ϕ_1(x) = 1/2( ϕ_1_ABV([a,x]) + ϕ(x)) and ϕ_2(x) = 1/2( ϕ_1_ABV([a,x]) - ϕ(x) ). Then ϕ = ϕ_1 - ϕ_2, and ϕ_1 and ϕ_2 are increasing. If ϕ∈ ABV, then ϕ is almost everywhere continuous and differentiable. It suffices to prove the claim about differentiability, and, by Lemma <ref>, we may assume without loss of generality that ϕ is increasing. We argue using the characterization by Stepanoff <cit.> of a.e.-differentiability, that is, we prove that, for almost every x ∈^d, lim sup_y → x |ϕ(y) - ϕ(x)|/|y-x| < , or, equivalently, lim sup_y → xϕ(y) - ϕ(x)/|y-x| < and lim inf_y → xϕ(y) - ϕ(x)/|y-x| > -. Denote by Q_r := [0,r]^d, and note that, because ϕ is increasing, lim sup_y → xϕ(y) - ϕ(x)/|y-x| = inf_r > 0sup_y ∈ x + Q_rϕ(y) - ϕ(x)/|y-x| = inf_r > 0ϕ(x + r 1) - ϕ(x)/√(d) r. The function [0,) ∋ r →ϕ(x+r 1) is increasing, and, thus, is differentiable almost everywhere in [0,). The finiteness of the above expression for almost every x ∈^d is then a consequence of Fubini's theorem. The argument for the lim inf is identical. In view of Lemmas <ref> and <ref>, an increasing function is equal almost everywhere to a / function, and we assume for the rest of the paper that increasing functions are /. §.§ One-sided regularizations It will convenient at several times in the paper to specify regularizations of discontinuous functions that enjoy certain ordering properties. We specify two such regularization procedures here, each of which has merit in different situations. We first discuss the inf- and sup-convolutions given, for some measurable function ϕ, by ϕ^(x) = sup_y ∈^d{ϕ(x-y) - |y|/} and ϕ_(x) = inf_y ∈^d{ϕ(x-y) + |y|/}. Then the following are either well-known properties or easy to check. Assume that ϕ is measurable and, for some M > 0 and all x ∈^d, |ϕ(x)| ≤ M(1 + |x|), and let ϕ_ and ϕ^ be defined by (<ref>), which are finite as long as < M^-1. Then * For all ∈ (0, M^-1), ϕ_ and ϕ^ are Lipschitz with constant ^-1, and ϕ_≤ϕ≤ϕ^. * For all x ∈^d, as → 0, ϕ^(x) ↘ϕ^*(x) and ϕ_(x) ↗ϕ_*(x) * If, moreover, ϕ is increasing, then so are ϕ_ and ϕ^. If ϕ is increasing, then the sup or inf can be restricted to “one side” of x. More precisely, given y ∈^d, define ŷ = - (|y_1|, |y_2|, …, |y_d|), Then ŷ≤ y and |ŷ| = |y|, and so, because ϕ is increasing, ϕ(x-y) - |y|/≤ϕ(x - ŷ) - |ŷ|/. It follows that ϕ^(x) = sup_y ≥ 0 {ϕ(x +y) - |y|/}, and similarly ϕ_(x) = inf_y ≤ 0{ϕ(x+y) + |y|/}. The second example involves convolving with mollifying functions that are weighted to one side. Let ρ be a smooth, positive function with support contained in [-1,1]^d and ∫ρ = 1, and define ρ_(z) = 1/^dρ( z - 1/) and ρ^(z) = 1/^dρ( z + 1/). Assume ϕ: ^d → is locally bounded, and set ϕ^ := ρ^ * ϕ and ϕ_ := ρ_ * ϕ. Then, as → 0, ϕ^ and ϕ_ converge to ϕ in L^p_ for any p ∈ [1,). Moreover, if ϕ is increasing, then * ϕ^ and ϕ_ are increasing, and ϕ_≤ϕ≤ϕ^. 
* As → 0, ϕ^↘ϕ^* and ϕ_↗ϕ_*. * For all > 0 and x ∈^d, ϕ^(x - 2 1) = ϕ_(x). The convergence in L^p_ is standard. For the rest of the proof, we assume ϕ is increasing. We see immediately that ϕ^ and ϕ_ are increasing. Observe that ρ_⊂ (0,2)^d and ρ^⊂ (-2,0)^d. As a consequence, because ϕ is increasing, ϕ^(x) = ∫_^dϕ(x-y) ρ^(y)dy = ∫_(-2,0)^dϕ(x-y) ρ^(y)dy ≥ϕ^*(x), and similarly ϕ_(x) ≤ϕ_*(x), so that part (<ref>) is established. Now assume that 0 < < δ. Using again that ϕ is increasing, we find ϕ^δ(x) = 1/δ^d∫_(-2δ,0)^dϕ(x - y) ρ( y/δ + 1 ) dy ≥1/δ^d∫_(-2δ,0)^dϕ(x - /δy) ρ( y/δ + 1 ) dy = 1/^d∫_(-2,0)^dϕ(x - y) ρ( y/ + 1 ) dy = ϕ^(x), and similarly ϕ_δ≤ϕ_. The convergence statements in part (<ref>) then easily follow. Finally, part (<ref>) is seen upon computing ϕ^(x - 2 1) = 1/^d∫_^dϕ(y) ρ( x - y - 1/) dy = ϕ_(x). An analogue of part (<ref>) in Lemma <ref> can also be seen for the sup- and inf-convolutions ϕ^ and ϕ_ in (<ref>), namely, for all R > 0, there exists C_R > 0 depending on the linear growth of ϕ such that, for all x ∈ B_R, ϕ^(x - C_R 1) ≤ϕ_(x). The advantage of the one-sided mollifications ϕ * ρ^ and ϕ* ρ_ is that the constant C_R can be replaced by a uniform constant that does not depend on the growth of ϕ, which will be convenient when we consider ϕ depending on an additional time parameter in an L^1 way. On the other hand, the sup- and inf-convolutions are very flexible one-sided regularizations even when ϕ is not itself increasing. §.§ A maximum principle The following multi-dimensional maximum principle is used at various points in the paper. For some M ∈, assume that { (a^i)_i = 1^M ⊂ C([0,T] ×^d, S^d), (b^i)_i=1^M ⊂ C([0,T] ×^d, ^d), and (c^i)_i=1^M, (d^i)_i=1^M, (λ^i_j)_i,j=1^M, (μ_k ℓ^i)_i,k,ℓ=1^M, are uniformly bounded. and { a^i(t,x) ≥ 0, d^i(t,x) ≥ 0, λ_j^i(t,x) ≥ 0, and μ^i_k ℓ(t,x) ≤ 0 for all (t,x) ∈ [0,T] ×^d and i,j,k,ℓ∈{1,2,…,M}. . Let (V^i)_i=1^M ⊂ C^1([0,T] ×^d) be bounded on [0,T] ×^d and satisfy the system _t V^i - [ a^i D^2 V^i] + b^i · D V^i + c^i V^i + d^i = ∑_j iλ^i_j V^j + ∑_k, ℓ iμ^i_kℓ V^k V^ℓ in (0,T) ×^d. Suppose that, for all x ∈^d and i = 1,2,…,M, V^i(0,x) ≤ 0. Then, for all (t,x) ∈ (0,T] ×^d and i = 1,2,…,M, V^i(t,x) ≤ 0. We prove the result first under the additional assumption that inf_(t,x) ∈ [0,T] ×^d d^i > 0, sup_x ∈^d V^i(0,x) < 0, and, for all t ∈ [0,T] and i ∈{1,2,…,M}, x ↦ V^i(t,x) attains a global maximum on ^d. In that case, define t_0 := inf{ t ∈ (0,T] : max_x ∈^d, i ∈{1,2,…,M} V^i(t,x) = 0 }, so that t_0 > 0. Let x_0 ∈^d and i ∈{1,2,…,M} be such that the maximum is achieved. Note then that D V^i(t_0,x_0) = 0, D^2 V^i(t_0,x_0) ≤ 0, _t V^i(t_0,x_0) ≥ 0, and, for all j ∈{1,2,…,M} and x ∈^d, V^j(t_0,x) ≤ V^i(t_0,x_0) ≤ 0. We thus obtain d^i(t_0,x_0) ≤∑_j iλ^i_j V^j(t_0,x_0) + ∑_k,ℓ iμ^i_k ℓ V^k(t_0,x_0) V^ℓ(t_0,x_0) ≤ 0, which is a contradiction in view of the assumption on d^i. In this case, we indeed conclude that V^i(t,x) < 0 for all (t,x,i) ∈ [0,T] ×^d ×{1,2,…,M}. We now turn to the general case. For x ∈^d, define ν(x) := √(1 + |x|^2), and note that D ν and D^2 ν are globally bounded on ^d. Then standard arguments yield that, for all i ∈{1,2,…,M}, β > 0, and t ∈ [0,T], if S_β is set of maximum points of x ↦ V^i(t,x) - βν(x), then lim_β→ 0βsup_x ∈ S_βν(x) = 0. It follows that there exists a smooth bounded function ν_β: ^d → such that sup_β > 0( Dν_β + D^2ν_β)< , lim_β→ 0βν_β = 0, and max_(t,x,i) ∈ [0,T] ×^d ×{1,2,…,M}( V^i(t,x) - βν(x)) = max_(t,x,i) ∈ [0,T] ×^d ×{1,2,…,M}( V^i(t,x) - βν_β(x)). 
Fix δ > 0 and a constant C > 0 to be determined. For (t,x) ∈ [0,T] ×^d, i ∈{1,2,…,M}, set Ṽ^i(t,x) := V^i(t,x) - β v_β(x) - δ e^Ct. Then, for all i ∈{1,2,…,M}, Ṽ^i(0,·) ≤ -δ, and, for all t ∈ [0,T], Ṽ^i(t,·) attains a global maximum over ^d. Moreover, in (0,T) ×^d, (Ṽ^i)_i=1^M solves the system _t Ṽ^i - [ a^i D^2 Ṽ^i] + b^i(t,x) · D Ṽ^i(t,x) + c^i Ṽ^i + d̃^i = ∑_j iλ^i_j Ṽ^j + ∑_k, ℓ iμ^i_kℓṼ^k Ṽ^ℓ, where d̃^i(t,x) = d^i(t,x) + Cδ e^Ct - β[a^i D^2ν_β] + β b^i · Dν_β + βν_β(x) ( c^i - ∑_j iλ^i_j - ∑_k,ℓ iμ^i_k ℓ V^k ) + δ e^Ct( c^i - ∑_j iλ^i_j - ∑_k,ℓ iμ^i_k ℓV^k). From the boundedness of the coefficients and the V^i's, and from the nonnegativity of d^i, it follows that there exists C̃ > 0 depending only on the bounds of the coefficients and the V^i such that, as β→ 0, d̃^i(t,x) ≥ (C - C̃) δ e^Ct - o(1). Taking C > C̃ and letting β be sufficiently small, in relation to δ, we then see that d̃^i > 0. From the first step, we conclude that, if β is sufficiently small, then, for all (t,x) ∈ [0,T] ×^d and i ∈{1,2,…,M}, V^i(t,x) < βν_β(x) + δ e^Ct. Sending first β→ 0 and then δ→ 0 yields the result. § THE REGULAR LAGRANGIAN FLOW The object of this section is to study the forward-in-time flow _t ϕ_t,s(x) = b(t,ϕ_t,s(x)) for s ∈ [0,T], t ∈ [s,T], ϕ_s,s(x) = x for a vector field b:[0,T] ×^d →^d satisfying { for some C_0, C_1 ∈ L^1_+([0,T]), |b(t,y)| ≤ C_0(t) (1 + |y|) for a.e. (t,y) ∈ [0,T] ×^d, and x ↦ b(t,x) + C_1(t) x is increasing for a.e. t ∈ [0,T]. . The second condition reads equivalently as _x_j b^i(t,·) ≥ -C_1(t) δ_ij for all i,j = 1,2,…, d. Moving forward, for convenience, we define the positive, increasing, absolutely continuous functions ω_0(t) = ∫_0^t C_0(s)ds, ω_1(t) = ∫_0^t C_1(s)ds, t ∈ [0,T]. Implicit in the assumption (<ref>) is a choice of basis on ^d. Indeed, the results of the paper continue to hold if b is replaced by A b(t,A^Tx) for a d × d orthogonal matrix A. The precise interpretation of the problem (<ref>), wherein b is discontinuous, is made sense of as a differential inclusion. Namely, at every discontinuity x of b(t,·), we have b_*(t,x) ⪇ b^*(t,x), and so the natural formulation is _t ϕ_t,s(x) ∈ [b_*(t,·), b^*(t,·)](ϕ_t,s(x)) for s ∈ [0,T], t ≥ s, ϕ_s,s(x) = x. In order to make notation less cumbersome, for a function ϕ: [0,T] ×^d →, we will always denote by ϕ_* and ϕ^* the lower and upper semicontinuous envelopes of ϕ in the space variable only; that is, for (t,x) ∈ [0,T] ×^d, ϕ_*(t,x) = ϕ(t,·)_*(x) and ϕ^*(t,x) = ϕ(t,·)^*(x). If ϕ: ^d →^m for m > 1, then ϕ^* and ϕ_* are taken to be the coordinate-by-coordinate lower and upper-semicontinuous envelopes, so, for instance, ϕ_* = (ϕ^1_*, ϕ^2_*, …, ϕ^d_*); equivalently, ϕ_*(x) = sup_r > 0inf_|x-y| ≤ rϕ(y) with respect to the partial order (<ref>) on ^m. The differential inclusion (<ref>), wherein b(t,x) is replaced with the smallest box [α,β] = ∏_i=1^d [α_i,β_i] containing all limit points of b(t,y) as y → x, is a slightly weaker formulation than the standard Filippov regularization <cit.>, where boxes are replaced with general convex sets. Before developing the general theory, it is useful to record the following a priori ODE bounds, which are an easy consequence of Grönwall's lemma. Assume, for some C_0 ∈ L^1_+([0,T]), that b: [0,T] ×^d ×^d satisfies _x ∈^d|b(t,x)|/1 + |x|≤ C_0(t) for a.e. t ∈ [0,T]. Then there exists a constant C > 0, depending only on T > 0, such that |X(t) - X(s)| ≤ C(1 + |x|) ∫_s^t C_0(r)dr. 
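For completeness, the short argument behind this bound (standard, so only sketched here): if X solves the differential inclusion with X(s) = x, then 1 + |X(t)| ≤ 1 + |x| + ∫_s^t C_0(r) (1 + |X(r)|) dr, so Grönwall's inequality gives 1 + |X(t)| ≤ (1 + |x|) e^ω_0(T); inserting this bound back into the equation yields |X(t) - X(s)| ≤∫_s^t C_0(r) (1 + |X(r)|) dr ≤ e^ω_0(T) (1 + |x|) ∫_s^t C_0(r) dr, so one may take C = e^ω_0(T).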
§.§ A comparison principle The following comparison result for ODEs is at the heart of much of the analysis of this paper. It leads to the existence and uniqueness of the regular Lagrangian flow, as well as stable notions of solutions to the transport and continuity equations with velocity fields satisfying (<ref>). Assume that B: [0,T] ×^d →^d satisfies { { t ↦∇ B(t,·)_L^(B_R)}∈ L^1([0,T]) for all R > 0, and _x_i B^j ≥ 0 for all i j. . Let X,Y ∈ C^0,1([0,T], ^d) be such that, with respect to the partial order (<ref>), X(0) ≤ Y(0) and Ẋ(t) ≤ B(t,X(t)) and Ẏ(t) ≥ B(t,Y(t)) for a.e. t ∈ [0,T]. Then X(t) ≤ Y(t) for all t ∈ [0,T]. The continuity of X and Y implies that there exists R > 0 such that |X(t,x)| ∨ |Y(t,x)| ≤ R for all (t,x) ∈ [0,T] ×^d. We may thus assume without loss of generality that ∫_0^T ∇ B(t,·)_ dt <. Define Δ = X - Y and, for i,j = 1,2,…, d, a_ij(t) = ∫_0^1 _x_j B^i(t, τ X(t) + (1-τ) Y(t))dτ. Observe that ∫_0^T a_ij(t,·)_ dt < for all i,j, and a_ij≥ 0 whenever i j. Then, for i = 1,2,…, d, Δ^i(0) ≤ 0 and Δ̇^i(t) ≤ a_ii(t) Δ^i(t) + ∑_i j a_ij(t) Δ^j(t) for t ∈ [0,T]. Fix δ > 0, set ψ_1(t) = ∫_0^t max_i=1,2,…, d a_ii(s)ds and ψ_2(t) = ∫_0^t max_i=1,2,…, d∑_j i a_ij(s)ds, and define t_0 := inf{ t ∈ (0,T] : there exists i ∈{1,2,… ,d} such that e^-ψ_1(t)Δ^i(t) - δ e^ψ_2(t) > 0 }. We have t_0 > 0. Assume by contradiction that t_0 < T, and let i be such that the maximum is attained. Then, for all j and s ∈ [0,t_0], e^- ψ_1(s)Δ^j(s) ≤ e^-ψ_2(t_0)Δ^i(t_0) = δ e^ψ_2(t_0). We compute d/dt e^- ψ_1(t)Δ^i(t) ≤∑_j i a_ji(t) e^-ψ_1(t)Δ^j(t), and so δ e^ψ_2(t_0) = e^-ψ_1(t_0)Δ^i(t_0) ≤∫_0^t_0∑_j i a_ji(s) e^-ψ_1(s)Δ^j(s)ds ≤δ∫_0^t_0∑_j i a_ji(s) e^ψ_2(s)ds ≤δ ( e^ψ_2(t_0) - 1 ), which is a contradiction. It follows that Δ^i(t) < δ e^ψ_1(t) + ψ_2(t) for all t ∈ [0,T] and i = 1,2,…, d. Sending δ→ 0 yields the result. §.§ Maximal and minimal flows The comparison principle from the previous subsection is now used to establish the existence of maximal and minimal (with respect to the order (<ref>)) semicontinuous solutions of the differential inclusion (<ref>) for vector fields satisfying _x_i b^j ≥ 0 for i j. Assume that b: [0,T] ×^d →^d satisfies ∫_0^T _y ∈^d|b(t,y)|/1 + |y| dt < and _x_i b^j ≥ 0 for all i j. Then there exist solutions ϕ^+ and ϕ^- of (<ref>) that are absolutely continuous in time such that * For all 0 ≤ s ≤ t ≤ T, ϕ^+_t,s and ϕ^-_t,s are increasing in the sense of (<ref>), ϕ^+_t,s is right-continuous, and ϕ^-_t,s is left-continuous. * If ϕ is any other solution of (<ref>), then ϕ^- ≤ϕ≤ϕ^+. * If 0 ≤ r ≤ s ≤ t ≤ T, then ϕ^+_s,t∘ϕ^+_r,s = ϕ^+_r,t and ϕ^-_s,t∘ϕ^-_r,s = ϕ^-_r,t * For all 0 ≤ s ≤ t ≤ T, (ϕ^+_t,s)_* ≥ϕ^-_t,s. When d = 1, the condition on the derivatives of b becomes vacuous, and we recover the existence of a unique minimal and maximal solution under the sole assumption that b is locally bounded. For > 0, let b^ and b_ denote the coordinate-by-coordinate sup- and inf-convolutions as in (<ref>). By Lemma <ref>, ∇ b^ and ∇ b_ are bounded on [0,T] ×^d, and so, for all (s,x) ∈ [0,T] ×^d, there exist unique solutions ϕ^±, of _t ϕ^+,_t,s(x) = b^(t, ϕ^+,_t,s(x)) and _t ϕ^-,_t,s(x) = b_(t, ϕ^-,_t,s(x)) in [s,T], ϕ^+,_s,s(x) = ϕ^-,_s,s(x) = x. By Lemma <ref>, for every (s,x) ∈ [0,T] ×^d, ϕ^+,_·,s(x) and ϕ^-,_·,s are bounded and continuous uniformly in . The comparison result, Lemma <ref>, immediately gives ϕ^-,≤ϕ^+,, and, if x ≤ y, ϕ^-,_t,s(x) ≤ϕ^-,_t,s(y) and ϕ^+,_t,s(x) ≤ϕ^+,_t,s(y). Fix 0 < < δ. Then Lemma <ref>(<ref>) implies that b^≤ b^δ and b_≥ b_δ. 
It follows that _t ϕ^+,_t,s≤ b^δ(t, ϕ^+,_t,s), and so Lemma <ref> yields ϕ^+,≤ϕ^+,δ. A similar argument gives ϕ^-,≥ϕ^-,δ. It now follows that that there exist bounded and Lipschitz functions ϕ^+_·,s(x) and ϕ^-_·,s(x) on [s,T], which are increasing in x, such that, as → 0, ϕ^+,_t,s(x) ↘ϕ^+_t,s(x) and ϕ^-,_t,s(x) ↗ϕ^-_t,s(x) uniformly for t ∈ [s,T]. Fix 0 ≤ s ≤ t ≤ T and x ∈^d. If h ≥ 0, then, for arbitrary > 0, ϕ^+_t,s(x) ≤ϕ^+_t,s(x + h) ≤ϕ^+,_t,s(x+h). Sending first h ↘ 0 and then → 0 gives lim_h ↘ 0ϕ^+_t,s(x+h) = ϕ^+_t,s(x), so that ϕ^+_t,s is right-continuous. A similar argument shows that ϕ^-_t,s(x) is left-continuous, and this completes the proof of part (<ref>). We next prove part (<ref>). If ϕ is any other solution of (<ref>), then, for all > 0, b_(t, ϕ_t,s(x)) ≤_t ϕ_t,s(x) ≤ b^(t,ϕ_t,s(x)) for t ∈ [s,T] and ϕ_s,s(x) = x. Appealing once more to Lemma <ref> gives ϕ^-,_t,s(x) ≤ϕ_t,s(x) ≤ϕ^+,_t,s(x), and sending → 0 gives the result. We now prove the flow property stated in part (<ref>). Suppose x ∈^d and 0 ≤ r ≤ s ≤ t ≤ T. Then, for > 0, because ϕ^+,_s,t is increasing, ϕ^+,_s,t( ϕ^+_r,s(x) ) ≤ϕ^+,_s,t( ϕ^+,_r,s(x) ) = ϕ^+,_r,t(x). Taking → 0 yields ϕ^+_s,t( ϕ^+_r,s(x) ) ≤ϕ^+_r,t(x). For the opposite inequality, observe that [s,T] ∋ t ↦ϕ^+_r,t(x) solves (<ref>) with value ϕ^+_r,s(x) at time t = s. It follows from part (<ref>) that ϕ^+_r,t(x) ≤ϕ^+_s,t( ϕ^+_r,s(x) ). The argument for the flow property for ϕ^- is analogous. Let x ∈^d. Sending y ↗ x in the inequality ϕ^-_t,s(y) ≤ϕ^+_t,s(y), using the fact that ϕ^- and ϕ^+ are increasing and ϕ^- is left-continuous, yields ϕ^-_t,s(x) ≤ (ϕ^+_t,s)_*(x). This concludes the proof of part (<ref>). §.§ Uniqueness and stability of the regular Lagrangian flow We now use the full assumption (<ref>), and in particular, the lower bounds on _x_i b^i for i = 1,2,…, d that were not needed to prove the existence of the maximal and minimal solution of (<ref>). In particular, we prove that ϕ^+ and ϕ^- are equal almost everywhere, giving rise to a unique regular Lagrangian flow. Assume that b satisfies (<ref>). If ϕ^+ and ϕ^- are the maximal and minimal flow from Proposition <ref>, then, for all 0 ≤ s ≤ t ≤ T, (ϕ^+_t,s)_* = ϕ^-_t,s. Moreover, for all (s,x) ∈ [0,T] ×^d and a.e. t ∈ [s,T], _t ϕ^+_t,s(x) = b^*(t,ϕ^+_t,s(x)) and _t ϕ^-_t,s(x) = b_*(t,ϕ^-_t,s(x)). Let ϕ̃^±_t,s(x) = e^ω_1(t)ϕ^±_t,s(e^-ω_1(t) x ) and b̃(t,x) = e^ω_1 (t) b(t, e^-ω_1(t) x ) + C_1(t) x, where ω_1 is as in (<ref>). Then b̃ satisfies (<ref>) with C_1 = 0 and with a possibly different C_0 ∈ L^1_+, and ϕ̃^± are the corresponding maximal and minimal flows. We may therefore assume without loss of generality that b(t,·) is increasing for t ∈ [0,T]. We first prove the statement involving the lower-semicontinuous envelope of ϕ^+_t,s, and in view of Proposition <ref>(<ref>), we need only prove the opposite inequality (ϕ^+_t,s)_* ≤ϕ^-_t,s. For t ∈ [0,T], define b^(t,·) = b(t,·) * ρ^ and b_(t,·) = b(t,·) * ρ_, where the one-sided mollifiers ρ^ and ρ_ are defined in (<ref>), and let ϕ^+, and ϕ^-, be the corresponding flows. Arguing exactly as in the proof of Proposition <ref>, appealing to Lemma <ref>, as → 0, ϕ^+,_·,s (x)↘ϕ^+_·,s(x) and ϕ^-,_·,s(x) ↗ϕ^-_·,s(x) uniformly on [s,T], for any s ∈ [0,T] and x ∈^d. Fix 0 < < δ and define X^δ,(t) = ϕ^+,_t,s(x - 2δ 1) + 2δ 1. Then, in view of Lemma <ref>(<ref>), X^δ, satisfies Ẋ^δ,(t) = b^(t, X^δ,(t) - 2δ 1) ≤ b_(t, X^δ,(t) ) in [s,T] and X^δ,(s) = x. 
We may thus appeal to Lemma <ref>, and find that, for all t ∈ [s,T], ϕ^+,_t,s(x - 2δ 1) ≤ϕ^-,_t,s(x) - 2 δ 1. Sending first → 0 and then δ→ 0 gives (ϕ^+_t,s)_* ≤ϕ^-_t,s, as desired. We now prove the final statement. For 0 < < δ, we have the inequalities, for t ∈ [s,T] and x ∈^d, b^*(t, ϕ^+_t,s(x)) ≤ b^*(t, ϕ^+,_t,s(x)) ≤ b^(t, ϕ^+,_t,s(x)) ≤ b^δ(t, ϕ^+,_t,s(x)). Since b^δ is Lipschitz continuous in the space variable, lim_→ 0 b^δ(t, ϕ^+,_t,s(x)) = b^δ(t, ϕ^+_t,s(x)). Therefore, b^*(t, ϕ^+_t,s(x)) ≤lim inf_→ 0 b^(t, ϕ^+,_t,s(x)) ≤lim sup_→ 0 b^(t, ϕ^+,_t,s(x)) ≤ b^δ(t, ϕ^+_t,s(x)), and so, sending δ→ 0, it follows that lim_→ 0 b^(t, ϕ^+,_t,s(x)) = b^*(t, ϕ^+_t,s(x)). For arbitrary > 0, 0 ≤ s ≤ t ≤ T, and x ∈^d, ϕ^,+_t,s(x) = x + ∫_s^t b^(r, ϕ^,+_r,s(x))dr. Sending → 0 and appealing to dominated convergence gives ϕ^+_t,s(x) = x + ∫_s^t b^*(r, ϕ^+_r,s(x))dr, and a similar argument gives ϕ^-_t,s(x) = x + ∫_s^t b_*(r, ϕ^-_r,s(x))dr. Recalling that increasing functions are continuous almost-everywhere, it now follows from Proposition <ref> that ϕ^+_t,s and ϕ^-_t,s are equal almost everywhere, given 0 ≤ s ≤ t ≤ T. We thus finally arrive at the almost-everywhere unique solvability of the ODE (<ref>), and the identification of the unique regular Lagrangian flow. For every s ∈ [0,T] and for almost every x ∈^d, there exists a unique absolutely continuous solution [s,T] ∋ t ↦ϕ_t,s(x) of the differential inclusion (<ref>). Moreover, ϕ_t,s satisfies the regular Lagrangian property: the map C_c(^d) ∋ f ↦ f ∘ϕ_t,s extends continuously to f ∈ L^1_(^d), and there exists C > 0 depending only on T such that, for all R > 0, f ∈ L^1_, t ∈ [s,T], f ∘ϕ_t,s_L^1(B_R)≤ e^d (ω_1(t) - ω_1(s))f_L^1(B_R + C(1+R)(ω_0(t) - ω_0(s))). In particular, { [s,T] ×^d ∋ (t,x) ↦ b(t, ϕ_t,s(x)) ∈ L^1_(^d, L^1([s,T]) ) }, and for a.e. x ∈^d, [s,T] ∋ t ↦ϕ_s,t(x) is the unique absolutely continuous solution of the integral equation ϕ_t,s(x) = x + ∫_s^t b(r, ϕ_r,s(x))dr. Finally, if 0 ≤ r ≤ s ≤ t ≤ T, then the composition ϕ_s,t∘ϕ_r,s is well-defined a.e. and is equal to ϕ_r,t. Taking f = _A in the estimate above, for some A ⊂^d of finite measure, we find the regular Lagrange property | ϕ^-1_t,s(A) | ≤ e^d(ω_1(t) - ω_1(s)) |A|. Let s ∈ [0,T] be fixed and, for N ∈, define t^N_n = s + n(T-s)/N, n = 0, 1, 2, …, N. Then, by Lemma <ref> and Proposition <ref>, there exist sets B^N_n ⊂^d of full measure such that ϕ^+_t^N_n,s = ϕ^-_t^N_n,s on B^N_n. Define now the full measure set B = ⋂_N ∈⋂_n = 0^N B^N_n. Let (t,x) ∈ [s,T] × B, and, for any N ∈, let n(t) = 0,1,2,…, N be such that |t^N_n - t| is minimized. Then, in view of the uniform continuity in time of ϕ^+ and ϕ^-, for some ω_N > 0 with lim_N →ω_N = 0, |ϕ^+_t,s(x) - ϕ^-_t,s(x)| ≤ |ϕ^+_t^N_n(t),s(x) - ϕ^+_t,s(x)| + |ϕ^-_t^N_n(t),s(x) - ϕ^-_t,s(x)| ≤ω_N, and, since N was arbitrary, we find that ϕ^+_t,s(x) = ϕ^-_t,s(x). It follows that, up to a full measure set, ϕ^+_·,s and ϕ^-_·,s may be identified as absolutely continuous functions on [s,T]. For f ∈ L^1(^d) ∩ C_c(^d), let > 0 and let b^ and ϕ^+, be as in the proof of Proposition <ref>. Then _t (D_x ϕ^+,_t,s(x)) = ÷ b^(t, ϕ^+,_t,s(x)) (D_x ϕ^+,_t,s(x)) ≥ -dC_1 (t)(D_x ϕ^+,_t,s(x)) , from which it follows that (D_x ϕ^+,_t,s(x)) ≥exp( - d∫_s^t C_1(r)dr ). The change of variables formula then gives ∫_^d f(ϕ^+,_t,s(x))dx ≤exp( d∫_s^t C_1(r)dr ) ∫_^d f(x)dx. 
The set (ϕ^+,_t,s(x))_x ∈ f, > 0 is uniformly bounded in view of Lemma <ref>, and so the bounded convergence theorem implies, upon taking → 0, that ∫_^d f(ϕ_t,s(x))dx ≤exp( d∫_s^t C_1(r)dr ) ∫_^d f(x)dx. This implies that the map f ↦ f ∘ϕ_t,s extends continuously to L^1(^d). The local statement follows from the finite speed of propagation implied by the a priori estimates in Lemma <ref>. It now follows easily that b(·, ϕ_·,s) belongs to L^1_([0,T] ×^d), and the uniqueness statement for absolutely continuous solutions of the integral equation follows from Proposition <ref> and the fact that ϕ = ϕ^+ = ϕ^- a.e. Finally, the composition ϕ_t,s∘ϕ_r,s is justified because ϕ_t,s∈ L^1_, and its equality to ϕ_r,t a.e. is a consequence of the a.e. uniqueness of the ODE and the flow properties in Proposition <ref>(<ref>). We now demonstrate that any regularizations of b lead to the flow ϕ in the limit, not only the one-sided regularizations used above. Assume (b^)_ > 0 is a family satisfying (<ref>) uniformly in , and, for every , ∫_0^T ∇ b^(t,·)_ dt < . If ϕ^ is the unique flow corresponding to b^, then, for all s ∈ [0,T] and p ∈ [1,), as → 0, ϕ^→ϕ in C([s,T], L^p_(^d)) and in L^p_(^d, W^1,1([s,T])). The regular Lagrange property (<ref>) in Theorem <ref> implies that, for all p ∈ [1,) and R > 0, sup_ > 0sup_t ∈ [s,T]∫_B_R |ϕ^_t,s(x)|^p dx < . Moreover, for all t ∈ [s,T], ϕ^_t,s is increasing, and thus BV. This along with the a priori estimates in Lemma <ref> imply that (ϕ^_·,s)_ > 0 is precompact in C([s,T], L^p_(^d)), and therefore, for some subsequence _n 0, ϕ^_n_·,s converges as n → in C([s,T], L^p_(^d)) to some ψ_·,s, which then also satisfies the regular Lagrangian property (<ref>). We now use (<ref>) to deduce that, for any R > 0, there exists R' > 0 independent of such that, for a.e. r ∈ [s,T] and for any δ >, b^(r, ϕ^_r,s) - b(r, ψ_r,s) _L^p(B_R) ≤b^(r,ϕ^_r,s) - b^δ(r, ϕ^_r,s)_L^p(B_R) + b^δ(r, ϕ^_r,s) - b^δ(r, ψ_r,s_L^p(B_R) + b^δ(r, ψ_r,s) - b(r, ψ_r,s)_L^p(B_R) ≤ e^d(ω_1(t) - ω_1(s))( b^(r,·) - b^δ(r,·)_L^p(B_R') + b^δ(r, ·) - b(r,·)_L^p(B_R')) + b^δ(r, ϕ^_r,s) - b^δ(r, ψ_r,s_L^p(B_R). Taking first _n → 0 (using the Lipschitz continuity of b^δ) and then δ→ 0, we find that the right-hand side above converges to zero. By Lemma <ref>, we may use the dominated convergence theorem to deduce that lim_n →→ 0∫_s^T b^_n(r, ϕ^_n_r,s) - b(r, ψ_r,s) _L^p(B_R)dr = 0. Sending n → in the integral equation ϕ^_n_t,s(x) = x + ∫_s^t b^_n(r, ϕ^_n_r,s(x))dr thus gives, in the distributional sense, ψ_t,s(x) = x + ∫_s^t b(r,ψ_r,s(x))dr. We also have, by Minkowski's inequality, _t ϕ^_n_s,· - _t ψ_s,·_L^p(B_R, L^1([s,T])≤∫_s^T b^_n(r, ϕ^_n_r,s) - b(r, ψ_r,s) _L^p(B_R)dr 0, and thus (<ref>) is satisfied in the integral sense for a.e. x ∈^d. By Theorem <ref>, ψ = ϕ, and therefore the convergence statements hold for the full family → 0. Another means of regularization is through the addition of stochastic noise. In particular, if s ∈ [0,T], x ∈^d, > 0, and W: Ω× [0,T] →^d is a Wiener process defined over a given probability space (Ω, F , P), then there exists a unique strong solution of the SDE d_t ϕ^_t,s(x) = b(t, ϕ^_t,s(x))dt + √(2) dW_t. This is true even if b is merely locally bounded; see <cit.>. The following is then proved exactly as for Theorem <ref>. Assume b satisfies (<ref>) and let ϕ^ be the solution of (<ref>). Then, for all s ∈ [0,T], with probability one, as → 0, ϕ^_s,· converges in C([s,T], L^p_(^d)) and L^p_(^d, C([s,T])) to ϕ. 
The transformation ϕ̃_t,s(x) = ϕ_t,s(x) - √(2)(W_t - W_s) leads to the equation _t ϕ̃^_t,s(x) = b(t, ϕ̃^_t,s(x) + √(2)(W_t - W_s)). The exact same arguments as those above show that ϕ̃^_t,s is increasing for all 0 ≤ s ≤ t ≤ T, satisfies the regular Lagrange property (<ref>), and, for any fixed Brownian path W, the a priori estimates of Lemma <ref> may be applied. Arguing as in the proof of Theorem <ref>, we may then extract a subsequence _n 0 such that ϕ̃^_n, and therefore ϕ^_n, converges in C([s,T], L^p_(^d)) and L^p_(^d, C([s,T])). The same arguments as in Theorem <ref> may then be used to conclude that any such limit is the unique regular Lagrangian flow ϕ from Theorem <ref>. If b is smooth, then _s ϕ_t,s(x) = - b(s, ϕ_t,s(x)). It is then straightforward to show that all the same theory above can be developed for the terminal value problem for the flow corresponding to -b. For every t ∈ [0,T] and p ∈ [1,), ϕ_t,·∈ C([0,t], L^p_(^d)) ∩ L^p_(^d, W^1,1([0,t])), and, for almost every x ∈^d, [0,t] ∋ s ↦ϕ_t,s(x) is the unique absolutely continuous solution of ϕ_t,s(x) = x + ∫_s^t b(r, ϕ_t,r(x))dr, s ∈ [0,t]. §.§ Some remarks on the backward flow We discuss next the question of solvability of the backward flow: for fixed s ∈ [0,T], this is the terminal value problem _t ϕ_t,s(x) = b(t, ϕ_t,s(x)) t ∈ [0,s], ϕ_s,s(x) = x. Formally, the lower bound on ÷ b suggests that the backward flow should concentrate positive measure sets to null sets. As an example when d = 1, if b(t,x) = sgn(x), then the backward flow is given by ϕ_t,s(x) = x - sgn(x) (s-t) for x ∈, t < s, and so all trajectories eventually concentrate at x = 0. This situation can be generalized to multiple dimensions when b satisfies the half-Lipschitz property (b(t,x) - b(t,y)) · (x-y) ≥ -C_1(t) |x-y|^2, C_1 ∈ L^1_+([0,T]), The condition (<ref>) then implies the existence of a unique, Lipschitz, concentrating solution of the backward flow (<ref>). This condition was used by Filippov <cit.> to build unique solutions of differential inclusions; see also <cit.> and the recent work of the authors <cit.>. The situation is very different when b satisfies (<ref>). The differential inclusion corresponding to (<ref>) then takes the form -_t ϕ_t,s(x) ∈[ -b^*(t, ·), - b_*(t, ·) ]( ϕ_t,s(x) ). As it turns out, there is no unique flow, and, in some cases, no Lipschitz flow. For x = (x_1,x_2) ∈^2, consider the vector field b(x) = sgn(x_1) (1,1). Since b is independent of t, we formally write the flow ϕ_t,s as ϕ_t-s. Observe that, for all x_2 ∈, b_*(0,x_2) = -(1,1) and b^*(0,x_2) = (1,1). It then follows that the direction in which ϕ_-τ(0,x_2) moves for τ > 0 lies in the box [-1,1]^2, and so any flow that solves (<ref>) must take the form, for some c ∈ [-1,1] and all τ≥ 0, ϕ^(c)_-τ(x) = x - sgn(x_1)(1,1)τ, |x_1| ≥τ, (0,x_2 + c(τ - |x_1|)), |x_1| < τ. There is therefore no unique flow in the strip {|x_1| < τ}, even under the restriction that the flow be Lipschitz. Recall that (<ref>) is a weaker notion of solution than the classical Filippov formulation (see Remark <ref>). In Example <ref> above, this means b(0,x_2) is taken to be the smallest convex set containing all limit points of b(y) as y → (0,x_2), that is, the line segment connecting (-1,-1) and (1,1). The unique such solution is then the flow ϕ^(0). However, we can construct an example where even the Filippov flow is not unique. Taking again d = 2, we set b(x_1,x_2) = (1,1), x_1 > 0 and x_2 > 0, (1,0), x_1 < 0 and x_2 > 0, (0,1), x_1 > 0 and x_2 < 0, (-1,-1), x_1 < 0 and x_2 < 0.
There are then three possible solution flows, even with the general Filippov formulation, when the starting point (x_1,x_2) satisfies x_1 = x_2 and τ > |x_1| = |x_2|. Indeed, the nonuniqueness occurs once trajectories reach the origin. Then (<ref>) admits the three solutions ϕ^(1)_-τ(0) = -τ(1,0), ϕ^(2)_-τ(0) = -τ(0,1), and ϕ^(3)_-τ(0) = 0 for τ > 0. In fact, ϕ^(k)_-τ is not continuous for any τ > 0 and k = 1,2,3. This can be seen by considering the flows starting from (x_1,x_2) < (0,0) with x_1 ≠ x_2: for τ > 0, ϕ^(1)_-τ(x) = ϕ^(2)_-τ(x) = ϕ^(3)_-τ(x) = (x_1 + τ, x_2 + τ) if 0 < τ < -max(x_1, x_2), (x_1 - 2x_2 - τ, 0 ) if x_1 < x_2 < 0 and τ > |x_2|, (0,x_2 - 2x_1 - τ) if x_2 < x_1 < 0 and τ > |x_1|. In particular, given τ > 0 and > 0, for k = 1,2,3, ϕ^(k)_-τ(-τ/2 + , -τ/2) - ϕ^(k)_-τ(-τ/2, -τ/2+) = (τ/2 + 2) (1,-1). In <cit.>, Poupaud and Rascle explore the connection between the uniqueness (for every x ∈^d) of Filippov solutions of (<ref>) and a stable notion of measure-valued solutions to the continuity equation[Observe, from the minus sign in front of the velocity field, that this is not the same initial value problem as (<ref>).] _t f - ÷(b(t,x)f) = 0 in (0,T) ×^d and f(0,·) = f_0. As Example <ref> shows, we cannot take this approach in analyzing (<ref>), because there are three distinct measure-valued solutions when f_0 = δ_0. §.§ Stochastic differential equations For b satisfying (<ref>), we now consider the stochastic flow for the SDE d_t Φ_t,s(x) = b(t, Φ_t,s(x))dt + σ(t,Φ_t,s(x)) dW_t, t ∈ [s,T], Φ_s,s(x) = x. Here, for some m ∈, W: [0,T] ×Ω→^m is an m-dimensional Wiener process on a given probability space (Ω, F, P), and, for (t,x) ∈ [0,T] ×^d, σ is a d × m-dimensional matrix denoted coordinate-wise by σ_ij, i = 1,2,…, d, j = 1,2,…, m. Because σ may be degenerate, the addition of noise does not necessarily imply the existence and uniqueness of a strong solution. In reproducing the theory for the ODE (<ref>), we are therefore led to the condition { σ∈ L^2([0,T], C^0,1(^d; ^d × m)) and, for a.e. t∈ [0,T] and all i = 1,2,…, d, j = 1,2,…, m, σ_ij(t,·) is independent of x_k for k i. . We assume in addition to (<ref>) a bounded oscillation condition for b: { for some C_2 ∈ L^1_+([0,T]), and for a.e. (t,x,y) ∈ [0,T] ×^d ×^d, |b(t,x) - b(t,y)| ≤ C_2(t)(|x-y| + 1). . While not strictly necessary, this helps simplify some of the arguments by allowing certain regularizations of b to be globally Lipschitz. The reformulation of (<ref>) as a differential inclusion then takes the form d_t Φ_t,s(x) = α_t dt + σ(t, Φ_t,s(x))dW_t, α_t ∈ [b_*(t,·), b^*(t,·)](Φ_t,s(x)), Φ_s,s = x. Assume b satisfies (<ref>) and (<ref>), and σ satisfies (<ref>). Then the following hold with probability one, for all s ∈ [0,T]. * There exist solutions Φ^+_·,s and Φ^-_·,s of (<ref>) such that, for all t ∈ [s,T], Φ^+_t,s is upper-semicontinuous and increasing and Φ^-_t,s is lower-semicontinuous and increasing. * Any other solution Φ of (<ref>) satisfies Φ^-_·,s≤Φ_·,s≤Φ^+_·,s. * If 0 ≤ r ≤ s ≤ t ≤ T, then Φ^+_s,t∘Φ^+_r,s = Φ^+_r,t and Φ^-_s,t∘Φ^-_r,s = Φ^-_r,t. * For all t ∈ [s,T], (Φ^+_t,s)_* = Φ^-_t,s. * For a.e. x ∈^d, (<ref>) admits a unique integral solution, which is equal a.e. to Φ^+_·,s and Φ^-_·,s. * There exists ω: [0,) → [0,) such that lim_r→ 0^+ω(r) = 0 and, for all f ∈ L^1(^d) and 0 ≤ s ≤ t ≤ T, [ f ∘Φ_t,s_L^1 ] ≤ e^ω(t-s)f_L^1. We first establish an analogue of Lemma <ref>. Assume B: [0,T] ×^d →^d satisfies { { t ↦∇ B(t,·)_}∈ L^1([0,T]) and _x_i B^j ≥ 0 for all i j, . and σ satisfies (<ref>).
Let X,Y,α,β: [0,T] →^d be adapted with respect to the Wiener process W such that, with probability one, X and Y are continuous, α,β∈ L^1([0,T]), α_t ≤ B(t,X_t) and β_t ≥ B(t,Y_t) a.e. in [0,T], and dX_t = α_t dt + σ(t,X_t)dW_t, dY_t = β_t dt + σ(t,X_t)dW_t in [0,T], X_0 ≤ Y_0. Then X_t ≤ Y_t for all t ∈ [0,T]. For t ∈ [0,T], define γ_t = α_t - β_t, which satisfies γ∈ L^1 and γ^i_t ≤ B^i(t, X_t) - B^i(t,Y_t) = ∑_j =1^d μ^ij_t(X^j_t - Y^j_t) for a.e. t ∈ [0,T], where μ^ij_(t) = ∫_0^1 _j B^i(t, θ X_t + (1-θ)Y_t)dθ. Note that μ^ij∈ L^1 for all i,j ∈{1,2,…, d}, μ^ij≥ 0 for i j, and μ^ii_t ≤ C(t) for a.e. t ∈ [0,T] and some C ∈ L^1_+. Define also, for i = 1,2,…, d and k = 1,2,…, m, ν^ik_t = ∫_0^1 _x σ_ik(t, θ X^i_t + (1-θ) Y^i_t)dθ, which satisfies ν^ik∈ L^2 (recall that σ_ik depends only on (t,x_i)). Therefore, if, for i =1,2,…, d, we set Δ^i = X^i - Y^i, we find that, for i = 1,2,…, d, dΔ^i_t = γ^i_t dt + ∑_k=1^m ν^ik_tΔ^i_t dW^k_t, γ^i_t ≤∑_j=1^d μ^ij_t Δ^j_t, Δ^i_0 ≤ 0. Now, for i = 1,2,…, d, define Z^i_t = exp( ∫_0^t ( 1/2∑_k=1^m (ν^ik_s)^2 - C(s) )ds - ∑_k=1^m ∫_0^t ν^ik_s dW_s ), which satisfies the SDE dZ^i_t = ( ∑_k=1^m (ν^ik_t)^2 - C(t) )Z^i_tdt - ∑_k=1^m ν^ik_tZ^i_t dW^k_t, Z^i_0 = 1. Then Itô's formula yields d(Z^i_t Δ^i_t) ≤∑_j iμ^ij_t Z^i_t Δ^j_t dt. Define, for some δ > 0 and absolutely continuous and increasing ω: [0,T] →_+ with ω(0) = 1, τ := inf{ t ∈ [0,T] : Z^i_t Δ^i_t - δ e^ω(t) > 0 for some i = 1,2,…, d }. We have τ > 0 because Z^i_0 Δ^i_0 ≤ 0. Assume for the sake of contradiction that τ <. If i = 1,2,…, m is such that Z^i_τΔ^i_τ = δ e^ω(τ), then δ e^ω(τ) = Z^i_τΔ^i_τ≤∑_j i∫_0^τμ^ij_t Z^i_t Δ^j_t dt ≤δ∑_j i∫_0^τμ^ij_t Z^i_t/Z^j_t e^ω(t)dt. We have Z^i_t/Z^j_t = ∏_k=1^m exp(1/2∫_0^t [ (ν^ik_s)^2 - (ν^jk_s)^2 ] ds + ∫_0^t (ν^jk_s - ν^ik_s)dW^k_s ), and so we obtain the contradiction δ < δ/2 for any δ > 0 if ω is to chosen to satisfy ω(0) = 1, ω'(t) = 2max_i=1,2,…, d∑_j iμ^ij_t Z^i_t/Z^j_t . This implies Δ^i_t ≤δ e^ω(t)(Z^i_t)^-1 for all i = 1,2,…, m, t ∈ [0,T], and δ > 0, and we conclude upon sending δ→ 0. Let b^ and b_ be the sup- and inf-convolutions of b as in the proof of Proposition <ref>, and define the stochastic flows Φ^+, and Φ^-, by d_tΦ^+,_t,s(x) = b^(t, Φ^+,_t,s(x))dt + σ(t, Φ^+,_t,s(x))dW_t in [s,T], Φ^+,_s,s = and d_tΦ^-,_t,s(x) = b_(t, Φ^-,_t,s(x))dt + σ(t, Φ^-,_t,s(x))dW_t in [s,T], Φ^-,_s,s = . Parts (<ref>), (<ref>), and (<ref>) are then established exactly as in the proof of Proposition <ref>, making use of the comparison result of Lemma <ref> above. Part (<ref>), and then, as a consequence, (<ref>), is proved similarly as in the proof of Proposition <ref> and Theorem <ref>, using now instead the one-sided mollifiers ρ^ and ρ_ to regularize b. Note that, in view of the assumption (<ref>), for some C ∈ L^1_+([0,T]), |∇ (b(t,·) * ρ^)(x)| ≤∫ |b(t, x) - b(t,y)| |∇ρ^(x-y)|dy ≤C(t)/, and similarly for b * ρ_. This allows for the use of the comparison Lemma <ref> in the argument, which requires global Lipschitz regularity in view of the lack of finite speed of propagation. Finally, (<ref>) follows exactly as in the proof of Theorem <ref>. § LINEAR TRANSPORT EQUATIONS WITH INCREASING DRIFT For velocity fields b satisfying (<ref>), we discuss the terminal value problem for the nonconservative equation -_t u - b(t,x) ·∇ u = 0 in (0,T) ×^d, u(T,·) = u_T. as well the initial value problem for the conservative equation _t f + ÷ (b(t,x) f) = 0 in (0,T) ×^d, f(0,·) = f_0. 
The lower bound on ÷ b implied by (<ref>), which gave rise to the regular Lagrangian property for the unique flow ϕ in the previous section, here allows for a theory of weak solutions of (<ref>) and (<ref>) in L^p-spaces, due to the expansive property of the flow. §.§ The nonconservative equation It is important to note that solutions u of (<ref>) taking values in L^p_ cannot be understood in the sense of distributions. This is because, under the assumption (<ref>), ÷ b is a measure that need not necessarily be absolutely continuous with respect to Lebesgue measure. Instead, we identify unique solution operators for (<ref>) that are continuous on Lebesgue spaces and stable under regularizations. This is done through the relationship with the flow in the previous section, and also by characterizing the solutions using “one-sided” regularizations for increasing or decreasing solutions. §.§.§ Representation formula When b and u_T are smooth, the unique solution of the terminal value problem for (<ref>) with terminal value u is u(s,x) = S(s,t)u(x) := u( ϕ_t,s(x)), s ∈ [0,t], where ϕ is the flow corresponding to the ODE (<ref>). In view of Theorem <ref>, this formula extends to u∈ L^p_, 1 ≤ p ≤ if the assumption on b is relaxed to (<ref>). We thus identify a family of solution operators for (<ref>) that are continuous on L^p and evolve continuously in time. Assume that b satisfies (<ref>), and define the solution operators in (<ref>) using the flow constructed in Section <ref>. Then the following hold: * For all 0 ≤ s ≤ t ≤ T, S(t,s) is continuously on L^p_(^d) for 1 ≤ p ≤, and there exists C > 0 depending only on T such that, for any R > 0, S(s,t)u_L^p(B_R)≤ e^d(ω_1(t) - ω_1(s))u_L^p(B_R + C(1 +R)(ω_0(t) - ω_0(s)) . * For all 0 ≤ r ≤ s ≤ t ≤ T, we have S(t,s) ∘ S(s,r) = S(t,r). * If 0 < t ≤ T, then S(s,t) u∈ C([0,t], L^p_(^d)) if u∈ L^p_ for p <, and S(s,t) u∈ C([0,t], L^_,(^d)) if u∈ L^_. * If (b^)_ > 0 is any family of smooth approximations of b satisfying (<ref>) uniformly in , (u_T^)_ > 0 are smooth functions such that u_T^ u_T in L^p_ for some p <, and u^ is the corresponding solution of (<ref>), then, as → 0, u^ converges to S(t,T)u_T strongly in C([0,T], L^p_(^d)). The same statement is true if u^ solves -_t u^ - b(t,x) ·∇ u^ = Δ u^ in (0,T) ×^d, u^(T,·) = u^_T. Properties (<ref>) and (<ref>) follow immediately from Theorem <ref>, while properties (<ref>) and (<ref>) follow from Theorem <ref> and the dominated convergence theorem (and see also Theorem <ref>). For the statement involving the viscous equation (<ref>), we remark that, in that case, u^ is given by u^(t,x) = u^_T(ϕ^_T,t(x)), where ϕ^ is the stochastic flow corresponding to (<ref>). The proof is then finished because of Theorem <ref>. The uniqueness of the semigroup is a consequence of the uniqueness of the flow established in the previous section. Note, however, that, solely under the assumption ÷ b ≥ -C_1 for some C_1 ∈ L^1_+([0,T]), any weakly-limiting family of solution operators must lead to solutions in C([0,T],L^p_(^d)), in the strong topology. More precisely, for > 0, we have the easy a priori bound S^(s,t)u_L^p≤exp( ∫_s^t C_1(r)dr) u_L^p, u∈ L^p. It follows from a diagonal argument that there exists _n → 0 and a family of continuous linear operators on L^p such that S(r,s)S(s,t) = S(r,t) for r ≤ s ≤ t, and, for all t ∈ [0,T], S^_n(·,t) u⇀ S(·,t) u weakly in L^([0,T], L^p(^d)) for all u∈ L^p. In particular, u := S(·,t) u∈ C([0,T], L^p_(^d)), and so, for any s ∈ [0,t], lim inf_h → 0^+u(s+h,·)_L^p≥u(s,·). 
On the other hand, u(s+h)_L^p≤exp( ∫_s^s+h C_1(r)dr ) u(s)_L^p, so that lim sup_h → 0^+u(s+h,·)_L^p≤u(s,·). We find then that u(s+h, ·)_L^p→u(s,·)_L^p, which, coupled with the weak convergence, means that u(s+h, ·) → u(s,·) strongly in L^p if p > 1 (and so for all p locally). A similar argument holds with h replaced by -h. The following renormalization property for the solution operator S(s,t), 0 ≤ s ≤ t ≤ T, is then immediate from the formula. If β: → is smooth and |β(s)| ≤ C(1 + |s|^r) for some r ≥ 1, then, for all u∈ L^r_, β( S(s,t) u) = S(s,t) (β∘u). §.§.§ Characterizing increasing/decreasing solutions The solution of the transport equation in the previous subsection was characterized as the unique limit under arbitrary regularizations, as well as through the formula involving the regular Lagrangian flow. The remainder of the section is dedicated to understanding further ways to characterize the solution, and in particular on the level of the equation (<ref>) itself. This will become useful in the study of nonlinear equations in the final section. We first observe that, if u_T is increasing/decreasing with respect to the partial order (<ref>), then so is u(t,·) for all t ∈ [0,T]. While this is immediately clear from the formula u(t,·) = u_T ∘ϕ_T,t and the fact that ϕ_T,t is increasing, it can also be seen directly from the equation. Indeed, if u is a smooth solution of (<ref>) and v_i = _x_i u, i = 1,2,…, d, then -_t v_i - b ·∇ v_i - (_x_i b^i ) v_i = ∑_j i( _x_i b^j) v_j. Therefore, if v_i ≥ 0 (or v_i ≤ 0) when t = T for all i = 1,2,…, d, then the same is true for t < T by the maximum principle Lemma <ref>. The result for general b satisfying (<ref>) follows from approximating b and using the limiting result in Theorem <ref>. By linearity, we have thus established the following: For all 0 ≤ s ≤ t ≤ T, S(s,t): ABV → ABV. In particular, since ABV is densely contained in L^p_(^d) and S(s,t) is continuous on L^p_, Proposition <ref> implies that belonging to ABV is a suitable criterion for the propagation of compactness. Notice for instance that this provides another proof of the fact that the convergence statements in Theorem <ref> are with respect to strong convergence in C([0,T], L^p_) for p <. We now demonstrate how the propagation of the increasing or decreasing property leads to a method for characterizing solutions of (<ref>), independently of the solution formula. The idea is to regularize u in a one-sided manner, as in subsection <ref>. Formally, if u solves (<ref>) and, for (t,x) ∈ [0,T] ×^d (recall that ω_1 is as in (<ref>)), ũ(t,x) = u( t, e^ω_1(T) - ω_1(t)x ) and b̃(t,x) = e^-(ω_1(T) - ω_1(t)) b( t, e^ω_1(T) - ω_1(t) x ) + C_1(t)x, then ũ solves (<ref>) with b replaced by b̃, and b̃ satisfies (<ref>) with C_1 = 0 and a possibly different C_0. We may therefore assume here, without loss of generality, that b(t,·) is increasing for a.e. t ∈ [0,T]. Recall the definition of the one-sided mollifiers ρ_ and ρ^ defined in (<ref>). A function u: [0,T] ×^d → is called a super (sub)solution of (<ref>) if, for all t ∈ [0,T], u(t,·) is decreasing, and, if u_ = u * ρ_ and u^ = u * ρ^, then -_t u_ - b(t,x) ·∇ u_≥ 0 ( resp. -_t u^ - b(t,x) ·∇ u^≤ 0 ) for a.e. (t,x) ∈ [0,T] ×^d. A solution is both a sub- and supersolution. A well-posed notion of sub- and supersolutions can be defined where u is approximated using, instead of the one-sided mollifiers, the method of inf- and sup-convolution: u_(t,x) = inf_y{ u(x-y) + |y|/} and sup_y{ u(x-y) - |y|/}. Assume b satisfies (<ref>) with C_1 = 0. 
Then, for all R > 0, there exists a modulus of continuity ω_R: [0,) → [0,) such that, if u,v:[0,T] ×^d → are respectively a sub- and supersolution of (<ref>) in the sense of Definition <ref>, then, for all t ∈ [0,T] and p ∈ [1,), ∫_B_R( u(t,x) - v(t,x))_+^p dx ≤∫_B_R + ω_R(T-t)( u(T,x) - v(T,x) )_+^pdx. In particular, for any decreasing u_T: ^d →^d, there exists a unique solution u of (<ref>) in the sense of Definition <ref>, which is given by u(t,·) = S(t,T) u_T, and which is continuous a.e. in [0,T] ×^d. Set u^ = u * ρ^ and v_ = v * ρ_; then, combining the inequalities for u^ and v_ given by Definition <ref>, we obtain - _t (u^ - v_) - b(t,x) ·∇ (u^ - v_) ≤ 0. Let β: → be smooth and increasing. Multiplying the above inequality by the positive term β'( u^ - v_) yields -_t β(u^ - v_) - b(t,x) ·∇β(u^ - v_) ≤ 0. We may then take β(r) = r_+^p (arguing with an extra layer of regularizations if p = 1) and find that, in the sense of distributions, -_t (u^ - v_)_+^p - b(t,x) ·∇ (u^ - v_)_+^p ≤ 0. Let ψ̂: [0,) → [0,) be smooth and decreasing, such that ψ̂= 1 on [0,1] and ψ̂= 0 on [2,), and set ψ(t,x) := ψ̂(|x|/R(t)) for some R: [0,T] → [0,) to be determined. Using ψ as a test function, we discover - _t ∫_^d (u^(t,x) - v_(t,x))_+^p ψ(t,x)dx ≤⟨ -_t ψ(t,·) - ÷ (b(t,·) ψ(t,·)) , (u^(t,·) - v_(t,·))_+^p ⟩ ≤∫_^d( u^(t,x) - v_(t,x) )_+^p ( -_t ψ(t,x) - b(t,x) ·∇ψ(t,x) ) dx, where in the last line we used the fact that ÷ b ≥ 0 and ψ≥ 0. Using the fact that ψ̂' ≤ 0, with ψ̂' 0 only if r ∈ [1,2], we find that - _t ψ - b ·∇ψ = - ψ̂'(x/R(t)) ( Ṙ(t)/R(t)^2- b(t,x) · x/R(t)|x|^2) ≥ - R(t)^-1ψ̂'(x/R(t)) ( Ṙ(t) - 2C_0(t)( 1 + 2 R(t) ) ). For t_0 ∈ [0,T], this is made nonnegative on [t_0,T] by choosing, for any fixed R > 0, R(t) = R e^4[ω_0(t) - ω_0(t_0) ] + 1/2( e^4[ω_0(t) - ω_0(t_0) ] - 1 ), and so ∫_^d ( u^(t_0,x) - v_(t_0,x) )_+^p ψ̂( |x|/R(t_0)) dx ≤∫_^d (u^(T,x) - v_(T,x) )_+^p ψ̂( |x|/R(T))dx. We may then choose functions ψ̂ that approximate _[0,1] from above, and then, by the monotone convergence theorem, ∫_B_R (u^(t_0, x) - v_(t_0,x))_+^p dx ≤∫_B_R(T) (u^(T,x) - v_(T,x))_+^p dx. The proof of (<ref>) is finished upon sending → 0 and setting ω_R(r) := sup_0 ≤ s ≤ t ≤ s + r ≤ T{( R + 1/2) ( e^4[ω_0(t) - ω_0(s)] - 1 ) }. The comparison inequality (<ref>) implies uniqueness for a solution with terminal condition u_T, and so it remains to show that S(t,T) u_T is a solution in the sense of Definition <ref>. Let (b^δ)_δ > 0 be a family of smooth approximations of b satisfying (<ref>) with C_1 = 0 and uniform C_0, and let u^δ_T = u_T* γ_δ for a family of standard mollifiers (γ_δ)_δ > 0. Note then that, for δ > 0, u^δ_T is decreasing, and, as δ→ 0, u^δ_T → u_T in L^p_ for all p <. Let u^δ be the corresponding solution of the terminal value problem (<ref>). It follows that u^δ(t,·) is decreasing for all t ∈ [0,T]. For any δ > 0 and > 0, -_t (u^δ * ρ^) - b^δ(t,x) ·∇ (u^δ * ρ^) = ∫_^d( b^δ(t,y) - b^δ(t,x) ) ·∇ u^δ(t,y) ρ^(x-y)dy ≤ 0, where the last inequality follows from the fact that ρ^(x-y) 0 only if x ≤ y, b^δ(t,·) is increasing, and ∇ u^δ≤ 0. Sending δ→ 0, it follows from Theorem <ref> that, in the sense of distributions, if u = S(·,T)u_T and u^ = u * ρ^, -_t u^ - b ·∇ u^≤ 0. It follows that S(·,T) u_T is a subsolution. A similar argument shows that it is a supersolution, and therefore the unique solution in the sense of Definition <ref>. Now, for some M > 0, define ψ(t) = exp(M ω_1(t)) and ũ^δ(t,x) := u^δ(t,x + ψ(t) 1). 
For any fixed R > 0, there exists M > 0 such that, in view of the linear growth of b^δ given by (<ref>) uniformly in δ, and the fact that u^δ is decreasing in the spatial variable, for any x ∈ [-R,R]^d, / tũ^δ(t,x) = (-∇ u^δ) ·( b^δ(t,x + ψ(t) 1) - ψ'(t) 1 ) ≤ C_1(t)(-∇ u^δ) ·( (1 + |x| + ψ(t)) - M ψ(t) ) ≤ 0. Since M is independent of δ, we may send δ→ 0 and conclude that ũ(t,x) := u(t,x + ψ(t)) is increasing on [0,T] × [-R,R]^d, and therefore continuous a.e. in that set by Lemma <ref>. The transformation leading from u to ũ preserves null sets, and we conclude that u is continuous almost everywhere. The idea behind Definition <ref> is to establish a sign for the commutator between convolution and differentiation along irregular vector fields, as compared to the work of DiPerna and the first author <cit.>, where the commutator is shown to be small for Sobolev vector fields. We must take convolution kernels with a specific one-sided structure in order to analyze the commutators; a less crude example of this idea is seen in the work of Ambrosio <cit.> for general BV velocity fields. A different notion of sub and supersolutions, which also selects S(·,T) u_T as the unique solution of (<ref>), can be obtained by instead regularizing b. Recall that we have assumed without loss of generality that b is increasing. Then a decreasing function u can be said to be a sub (super)solution if, for all > 0, in the sense of distributions, -_t u - b_·∇ u ≤ 0 ( resp. -_t u - b^·∇ u ≥ 0 ), where b_≤ b ≤ b^ are one-sided regularizations of b, for example the one-sided mollifiers or the inf- and sup-convolutions. The notion of sub and supersolution in Definition <ref> turns out to be more amenable to the study of the nonlinear systems in Section <ref>. Throughout this section, we have studied the setting where u_T, and therefore u(t,·) for t < T, are decreasing. The same analysis can be achieved for increasing solutions, in which case the inequalities in (<ref>) are reversed. Note, in particular, that the solution flows ϕ_t,s(x) are vector-valued solutions in this sense; i.e. - _s ϕ_t,s(x) - b(s,x) ·∇ϕ_t,s(x) = 0, ϕ_t,t(x) = x. We now observe that the full family of solution operators S(s,t) : L^p_→ L^p_, 0 ≤ s ≤ t ≤ T, can be constructed independently of the ODE flows ϕ_s,t, which can then be recovered with the theory of renormalization. Indeed, Definition <ref>, and its counterpart for increasing solutions, can be used to define S(s,t) u̅ for any u̅∈ ABV. The density of ABV in L^p_(^d), and the L^p-continuity of S(s,t), then allow the solution operators to be continuously extended to L^p_. It is, however, not clear whether solutions of (<ref>) can be characterized for arbitrary u_T ∈ L^p_, other than by the formula (<ref>) or as limits of solutions to regularized equations. §.§.§ Lower order terms We briefly explain how to extend the above results to equations with additional lower order terms, as in -_t u - b(t,x) ·∇ u - c(t,x)u - d(t,x) = 0 in [0,T] ×^d , u(T,·) = u_T, for functions c ∈ L^1([0,T], L^_) and d ∈ L^1([0,T], L^q_) for some q ∈ [1,]. Assume p ∈ [1,) ∩ [1,q] and u_T ∈ L^p_. Then there exists a unique function u ∈ C([0,T], L^p_) with the following properties: * There exists C_2 ∈ L^1_+([0,T]), and, for R > 0, a modulus ω_R: [0,T] → [0,T], depending only on the assumptions in (<ref>) and (<ref>) such that, for R > 0, u(t,·)_L^p(B_R)≤exp( ∫_t^T C_2(s)ds ) u_T_L^p_B_R + ω_R(T-t) + ∫_t^T C_2(s)ds. 
* Let (b^)_ > 0, (c^)_ > 0, and (d^)_ > 0 be families satisfying (<ref>) and (<ref>) uniformly in , such that, as → 0, (b^,c^, d^) → (b,c,d) a.e. Let (u^_T)_ > 0 be a family of smooth functions approximating u_T in L^p_, and let u^ be the corresponding solution of (<ref>). Then, as → 0, u^ converges strongly to u in C([0,T], L^p_). The same statement is true if u^ solves -_t u^ - b(t,x) ·∇ u^ - c(t,x)u^ - d(t,x) = Δ u^ in [0,T] ×^d, u^(T,·) = u^_T. * For (t,x) ∈ [0,T] ×^d, u has the formula u(t,x) = u_T(ϕ_T,t(x)) exp( ∫_t^T c(s, ϕ_s,t(x))ds ) + ∫_t^T d(s, ϕ_s,t(x))exp( ∫_t^s c(r, ϕ_r,t(x))dr )ds. Analogous statements hold when p = q =, in which case u ∈ C([0,T], L^_,) and the convergence in part (<ref>) is weak-⋆ in L^. For b^, c^, d^, u^_T, and u^ as in the statement of part (<ref>), if (ϕ^_t,s(x))_s,t ∈ [0,T] denotes the corresponding smooth flow, we have the formula u^(t,x) = u^_T(ϕ^_T,t(x)) exp( ∫_t^T c^(s, ϕ^_s,t(x))ds ) + ∫_t^T d^(s, ϕ^_s,t(x))exp( ∫_t^s c^(r, ϕ^_r,t(x))dr )ds. The convergence statements, and thus the formula in part (<ref>), are then proved just as in Theorem <ref>. In particular, arguing just as in that proof, we have lim_→ 0( u_T^(·,ϕ^_·,t) - u_T(·, ϕ_·,t) _L^1([t,T], L^p_) + d^(·,ϕ^_·,t) - d(·, ϕ_·,t) _L^1([t,T], L^p_)) = 0 and, for all r <, lim_→ 0c^(·,ϕ^_·,t) - c(·, ϕ_·,t) _L^1([t,T], L^r_) =0. In addition, c^(t, ϕ^_t,s) is uniformly bounded in L^1([0,T], L^_), in view of Lemma <ref>, and so we conclude parts (<ref>) and (<ref>) by the dominated convergence theorem. The L^p-estimates in part (<ref>) are proved just as before, either from the regularized equation itself or from (<ref>), using the lower bound on the divergence of b. Sub and supersolutions can be characterized when u_T is increasing/decreasing under the additional assumption that c = c(t) ∈ L^1_+([0,T]) and d(t,·) is decreasing for all t ∈ [0,T]. The propagation result Proposition <ref> is then easily generalized (the proof is almost identical and so we omit it): Assume (<ref>), (<ref>), and (<ref>). If u_T ∈ L^_ is increasing (decreasing) and u is the solution of (<ref>) specified by Theorem <ref>, then, for all t < T, u(t,·) is increasing (decreasing). Definition <ref> is then generalized as follows: A function u: [0,T] ×^d → is called a super (sub)solution of (<ref>) if, for all t ∈ [0,T], u(t,·) is decreasing, and, if u_ = u * ρ_ and u^ = u * ρ^, then -_t u_ - b(t,x) ·∇ u_ - c(t) u_ - d(t,x) ≥ 0 ( resp. -_t u^ - b(t,x) ·∇ u^ - c(t) u^ - d(t,x) ≤ 0 ) for a.e. (t,x) ∈ [0,T] ×^d. A solution is both a sub- and supersolution. Finally, the following is proved almost identically to Theorem <ref>. Assume b satisfies (<ref>) with C_1 = 0, and c and d satisfy (<ref>) and (<ref>). Then, for all R > 0, there exist moduli of continuity ω,ω_R: [0,) → [0,) such that, if u,v:[0,T] ×^d → are respectively a sub- and supersolution of (<ref>) in the sense of Definition <ref>, then, for all t ∈ [0,T] and p ∈ [1,), ∫_B_R( u(t,x) - v(t,x))_+^p dx ≤ e^ω(T-t)∫_B_R + ω_R(T-t)( u(T,x) - v(T,x) )_+^pdx. In particular, for any decreasing u_T ∈ L^p_(^d), there exists a unique solution u of (<ref>) in the sense of Definition <ref>, which is given by (<ref>), and which is continuous a.e. in [0,T] ×^d. §.§ The conservative equation In contrast to the theory for the nonconservative equation, solutions to (<ref>) belonging to Lebesgue spaces can be made sense of in the sense of distributions. However, under the general assumption (<ref>), these are not in general unique, as the simple example b(x) = x on shows. 
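To make the non-uniqueness concrete, consider the model case b(x) = sgn(x) on ℝ with f_0 ≡ 0; the specific drift, datum, and test function below are assumptions chosen only for illustration, and the computation is not part of our arguments. Besides f ≡ 0, the signed profile f(t,x) = 1_{0 < x < t} - 1_{-t < x < 0}, in which positive and negative mass emanate from the origin, is a distributional solution of the conservative equation with zero initial datum; the sketch below checks the weak formulation against a smooth test function by quadrature.

# Quadrature check that f(t,x) = 1_{0<x<t} - 1_{-t<x<0} satisfies
#   \int_0^T \int f (phi_t + sgn(x) phi_x) dx dt = 0
# for a smooth phi that is numerically negligible near t = 0, T and |x| = L,
# i.e. that f is a nontrivial distributional solution with f_0 = 0.
import numpy as np

T, L, n = 1.0, 2.0, 1201
t = np.linspace(0.0, T, n)
x = np.linspace(-L, L, n)
dt, dx = t[1] - t[0], x[1] - x[0]
tt, xx = np.meshgrid(t, x, indexing="ij")

s = 0.02                                            # width of the Gaussian test function
phi = np.exp(-((tt - 0.5) ** 2 + xx ** 2) / s)
phi_t = -2.0 * (tt - 0.5) / s * phi
phi_x = -2.0 * xx / s * phi

f = ((xx > 0) & (xx < tt)).astype(float) - ((xx > -tt) & (xx < 0)).astype(float)
weak = np.sum(f * (phi_t + np.sign(xx) * phi_x)) * dt * dx
print("weak form, signed solution f: ", weak)       # ≈ 0, up to O(dx) quadrature error
weak_abs = np.sum(np.abs(f) * (phi_t + np.sign(xx) * phi_x)) * dt * dx
print("weak form, absolute value |f|:", weak_abs)   # ≈ -2 ∫ phi(t,0) dt ≈ -0.50
# |f| fails the test, so f is a distributional solution that is not the duality solution.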
Drawing once more an analogy with the setting studied in <cit.> of half-Lipschitz velocity fields, the “good” (stable) solution of (<ref>) is identified using a particular solution formula, and, in <cit.>, this is shown to coincide with the pushforward by the regular Lagrangian flow of the initial density. As discussed in subsection <ref> above, the former strategy is unavailable; however, in view of the theory of the nonconservative equation and the forward regular Lagrangian flow that was built in the previous section, we may define solutions by duality. §.§.§ Duality solutions For 0 ≤ s ≤ t ≤ T< denote by S^*(t,s) the adjoint of the solution operator S(s,t). Let 1 ≤ p ≤ and f_0 ∈ L^p_, and define f(t,x) = S^*(t,0) f_0. Then f ∈ C([0,T], L^p_(^d)) if p < or f ∈ C([0,T], L^_,) if p =, and f is a distributional solution of (<ref>) and S^*(t,0)f_0 = (ϕ_t,0)_♯ f_0, where ϕ is the flow obtained in Section <ref>. If (b^)_ > 0 is a family of smooth approximations of b satisfying (<ref>) uniformly in > 0, (f_0^)_ > 0 is a family of approximations of f_0 in L^p_ for 1 ≤ p <, and f^ is the corresponding solution of (<ref>), then, as → 0, f^→ f weakly in C([0,T], L^p_(^d)). The same is true if f^ is taken to be the unique smooth solution of _t f^ + ÷ (b f^) = Δ f^ in [0,T] ×^d, f^(0,·) = f^_0, and analogous convergence statements hold in the weak-⋆ sense if p =. The identity (<ref>) follows immediately from (<ref>); observe that it is well-defined for f_0 ∈ L^p_ in view of the regular Lagrange property (<ref>). We now prove the convergence statements, and it suffices to prove the results for p <, since L^_⊂ L^p_ for any p <. Let (b^)_ > 0, (f^_0)_ > 0, (f^)_ > 0 be as in the statement. In view of the lower bound on the divergence, it is straightforward to prove a priori L^p bounds. Namely, for all R > 0, there exists a modulus of continuity ω_R: [0,) → [0,) depending only on T and C_0 from (<ref>), such that, for all > 0 and t ∈ [0,T], ∫_B_R |f^(t,x)|^p dx ≤ e^d(p-1)ω_1(t)∫_B_R + ω_R(t) |f^_0(x)|^p dx. It follows that there exists a subsequence _n 0 and f ∈ L^([0,T], L^p_(^d)) such that f^_n→ f weakly in L^([0,T], L^p_(^d)). Sending n → for = _n in the equation _t f^_n + ÷ (b^_n f^_n) = 0, using the fact that b^ converges strongly to b in L^p_ for any p <, we find that f is a distributional solution of (<ref>), and thus, moreover, f ∈ C([0,T], L^p_,(^d)). The weak-⋆ convergence statements when p = are proved analogously. Fix u ∈ C^1_c(^d), and t_0 ∈ [0,T], and let u^ denote the solution of (<ref>) with terminal value u^(t_0,·) = u; in view of the results of the previous subsection, u^ has compact support in [0,T] ×^d uniformly in > 0. Exploiting the duality of the equations and integrating by parts gives ∫_^du(x)f^(t_0, x)dx = ∫_^d u^(0,x) f_0^(x) dx. Sending n → for = _n gives the identity ∫_^du(x) f(t_0,x)dx = ∫_^d u(0,x) f_0(x)dx. It follows that f(t_0,·) = S(t_0,0)f_0, and we therefore have the full convergence for all → 0. An identical argument can be used to prove the convergence statement for the vanishing viscosity limit. Finally, the the fact that f is continuous from [0,T] into L^p_(^d) with the strong topology is a consequence of the continuity of the upper bound in the L^p-estimate (<ref>) (see Remark <ref>). It is clear from the proof above that, when 1 < p <, the initial functions f^_0 need only converge weakly in L^p_ to f_0 as → 0. 
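As an illustration of the formula S^*(t,0)f_0 = (ϕ_t,0)_♯ f_0 (a numerical sketch under illustrative assumptions, not part of the arguments): for the model drift b(x) = sgn(x) on ℝ and f_0 = 1_{[-1,1]}, the regular Lagrangian flow is ϕ_t,0(x) = x + t sgn(x) for a.e. x, and the duality solution is the density 1_{t ≤ |x| ≤ 1+t}, with a vacuum region of width 2t opening around the origin. A particle approximation of the pushforward recovers this profile.

# Particle approximation of the duality solution f(t,.) = (phi_{t,0})_# f_0 in
# the model case b(x) = sgn(x), f_0 = 1_{[-1,1]}: push samples of f_0 forward by
# the (a.e. defined) regular Lagrangian flow x -> x + t*sgn(x) and histogram.
import numpy as np

rng = np.random.default_rng(1)
N, t = 1_000_000, 0.5
x0 = rng.uniform(-1.0, 1.0, size=N)        # samples of the density f_0 = 1_{[-1,1]}
mass = 2.0 / N                             # each particle carries mass |[-1,1]| / N
xt = x0 + t * np.sign(x0)                  # forward regular Lagrangian flow

edges = np.linspace(-2.0, 2.0, 81)         # bin edges aligned with the jumps of f(t,.)
counts, _ = np.histogram(xt, bins=edges)
density = counts * mass / np.diff(edges)   # empirical density of (phi_{t,0})_# f_0
centers = 0.5 * (edges[1:] + edges[:-1])
exact = ((np.abs(centers) >= t) & (np.abs(centers) <= 1.0 + t)).astype(float)
print("max deviation from 1_{t<=|x|<=1+t}:", float(np.max(np.abs(density - exact))))
# deviation is Monte Carlo error only; mass spreads, a vacuum forms on (-t, t),
# and no concentration occurs, consistent with the regular Lagrange property.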
It is not clear whether the convergence of regularizations f^ to the duality solution f of (<ref>) can be upgraded to strong convergence, except when d = 1, in which case (<ref>) coincides with a half-Lipschitz condition on b (see <cit.>). §.§.§ Nonnegative solutions and renormalization It is by now standard in the theory of the continuity equation (<ref>) that there is uniqueness of nonnegative solutions when the measure f_0 is concentrated on sets where the ODE (<ref>) has a unique solution; see <cit.>. In view of Theorem <ref>, we then have the following: If 1 ≤ p ≤, f_0 ∈ L^p_(^d), and f ≥ 0, then f := S^*(·,0) f_0 is the unique nonnegative distributional solution of (<ref>). We present here an alternative proof using the characterization of f as the duality solution, in order to emphasize again that the theory in this section can be developed independently of the analysis of the ODE (<ref>) in the previous section. By Theorem <ref>, f = S^*(·,0)f_0 is a distributional solution, and its nonnegativity can be seen through an approximation argument, since weak convergence preserves nonnegativity. Assume now that f ∈ C([0,T], L^p_(^d)) is a nonnegative distributional solution of (<ref>). Fix t_0 ∈ [0,T] and a decreasing function u: ^d →, and let u = S(·,t_0)u. Then u(t,·) is decreasing for t ∈ [0,t_0], and is a solution of (<ref>) in the sense of Definition <ref>. In particular, u_ = u * ρ_ and u^ = u * ρ^ satisfy respectively -_t u_ - b ·∇ u_≥ 0 and -_t u^ - b ·∇ u^≤ 0 in [0,t_0] ×^d. Let ψ∈ C^1_c(^d) with ψ≥ 0. Using u_(t,x) ψ(x) as a test function for f, we find that ∫_^d f(t_0,x) (u * ρ_)(x)ψ(x)dx - ∫_^d f_0(x) u_(0,x)ψ(x)dx = ∫_0^t_0∫_^d f(s,x) [ _s (u_(s,x) ψ(x)) + b(s,x) ·∇ (u_(s,x) ψ(x) ) ]dxds ≤∫_0^t_0∫_^d f(s,x) u_(s,x) b(s,x) ·∇ψ(x)dxds, where we have used the nonnegativity of ψ and f. Similarly, ∫_^d f(t_0,x) (u * ρ^)(x)ψ(x) dx - ∫_^d f_0(x) u^(0,x)ψ(x)dx ≥∫_0^t_0∫_^d f(s,x) u^(s,x) b(s,x) ·∇ψ(x)dxds. Sending → 0, we conclude that ∫_^d f(t_0,x) u(x) ψ(x) dx - ∫_^d f_0(x) S(0,t_0)u(x)ψ(x) dx = ∫_0^t_0∫_^d f(s,x) u(s,x) b(s,x) ·∇ψ(x)dx. By linearity, the same is true for all increasing u as well, and therefore, by a density argument, for all u∈ L^p'_(^d). In particular, we take u∈ C_c(^d), in which case S(·,t_0)u is supported in [0,T] × B_R for some R > 0, by the finite speed of propagation property. To conclude, we may then take ψ∈ C^1_c(^d) such that ψ≡ 1 in B_R, and therefore also ∇ψ = 0 in B_R. Assume that f and |f| are both distributional solutions of (<ref>). Then f is the unique duality solution of (<ref>); that is, f(t,·) = S(t,0) f(0,·). Corollary <ref> gives a sufficient criterion for a distributional solution to be the correct duality solution. However, we do not know whether this condition is necessary. In other words, it is an open question whether |S^*(t,s)f| = S^*(t,s) | f| for any f ∈ L^p_. This renormalization property is equivalent to a kind of injectivity for the forward flow ϕ_t,s, which we describe with the next result. Let 0 ≤ s ≤ t ≤ T, 1 ≤ p ≤, and f∈ L^p_(^d). Then the following statements are equivalent: * |S^*(t,s)f| = S^*(t,s) | f|. * If A_+ := {f > 0 } and A_- = {f < 0 }, then ∫_^dϕ_t,s^♯_A_+(x) ϕ_t,s^♯_A_-(x)dx = 0. Let ρ∈ L^p_(^d) with ρ≥ 0, and set A := { x : ρ(x) > 0 }. We first claim that | { x : S(t,s)ρ(x) > 0 }{ x : S(t,s) _A(x) > 0 }| = 0. Let B ⊂^d be measurable. Then by Theorem <ref>, ∫_B S^*(t,s) ρ(x) dx = ∫ρ(x) {ϕ_t,s(x) ∈ B }dx and ∫_B S^*(t,s) _A(x)dx = | {x ∈ A : ϕ_t,s(x) ∈ B } |. It follows that (S^*(t,x) ρ)_B = 0 a.e. 
if and only if (S^*(t,x) _A) _B = 0 a.e., whence (<ref>). It now follows that (<ref>) is equivalent to | { S^*(t,s) f_+ 0 }∩{ S^*(t,x) f_- 0 }| = 0. By linearity, S^*(t,s) f = S^*(t,s) f_+ - S^*(t,x) f_-, and so this is equivalent to (S^*(t,x) f )_± = S^*(t,s) f_±, and thus (<ref>). Either of the two renormalization properties would follow from the strong convergence of regularizations in Theorem <ref>. However, we do not know at this time whether the strong convergence actually holds; see Remark <ref>. The characterization in part (<ref>) of Proposition <ref> is a reformulation of the renormalization property in terms of the injectivity of the flow. For instance, we have the following. Assume that b satisfies (<ref>), let 0 ≤ s ≤ t ≤ T and f∈ L^p_, and suppose that | { x ∈^d : ϕ_t,s^-1({x}) ∩{ f > 0 }∅ and ϕ_t,s^-1({x}) ∩{ f < 0 }∅}| = 0. Then the renormalization property in Proposition <ref> is satisfied. It suffices to establish part (<ref>) in Proposition <ref>. Let A_± = {± f > 0}. Then, for any measurable B ⊂^d, ∫_B ϕ_t,s^♯_A_±(x)dx = |A_±∩ϕ^-1_t,s(B) |. Let B be any finite-measure set contained in B_+ := { y ∈^d: ϕ^-1_t,s({y}) ⊂ A_+^c }. Then ∫_B ϕ_t,s^♯_A_+dx = |A_+ ∩ϕ^-1_t,s(B) | = 0. It follows that ϕ_t,s^♯_A_+ = 0 a.e. in B_+. Similarly, ϕ_t,s^♯_A_- = 0 a.e. in B_- := { y ∈^d: ϕ^-1_t,s({y}) ⊂ A_-^c }. We conclude that ϕ_t,s^♯_A_+(y) and ϕ_t,s^♯_A_-(y) are both positive only if ϕ_t,s^-1({y}) intersects both A_+ and A_-. In view of the assumption of the lemma, the set of such y has measure 0. This establishes property (<ref>) of Proposition <ref>. Observe that, when d = 1, ϕ_t,s is actually injective for any 0 ≤ s ≤ t ≤ T. Notice that, for the drifts introduced in Examples <ref> or <ref>, the corresponding flow has the property that ϕ_t,s^-1({y}) is at most a singleton for any 0 ≤ s ≤ t ≤ T and a.e. y ∈^d. In view of the regular Lagrangian property, a kind of injectivity of the flow can be seen for particular ordered sets. Suppose that x,y ∈^d, x ≤ y, and ϕ_t,s(x) = ϕ_t,s(y). Then it cannot be true that x_i < y_i for all i = 1,2,…, d. If this were the case, then ϕ_t,s would be constant on the cube [x,y], which violates the regular Lagrange property. The following then follows from Lemma <ref>. Assume that f ∈ L^p_(^d) and, for a.e. x,y ∈^d such that f(x) > 0 and f(y) < 0, either x < y or y < x. Then renormalization is satisfied for S^*(t,s) f for all 0 ≤ s ≤ t ≤ T. The condition on f in Proposition <ref> is satisfied if there exist cubes (Q_n)_n ∈ such that Q_n < Q_n+1[In other words, for all x ∈ Q_n and y ∈ Q_n+1, we have x < y.], {f > 0}⊂⋃_n ∈ Q_2n, and {f < 0}⊂⋃_n ∈Å Q_2n+1. §.§ Some remarks on “time-reversed” equations As discussed in subsection <ref>, for velocity fields b satisfying (<ref>), there is not a satisfactory notion of reverse flow for the ODE (<ref>). Nevertheless, we can indirectly make sense of the backward Jacobian (D_x ϕ_0,t(x)), which, formally, should be the solution of (<ref>) with f_0 = 1, that is, J(t,·) := S^*(t,0)1. Let J be defined by (<ref>). * For all p ∈ [1,), J ∈ L^∩ C([0,T], L^p_(^d)). * If (b^)_ > 0 are smooth, satisfy (<ref>) uniformly in , and converge a.e. to b as → 0, and if ϕ^_0,t is the solution of _t ϕ^_0,t(x) = - b^(t, ϕ^_0,t(x)) in [0,T], ϕ^_0,0(x) = x, then, as → 0, (D_x ϕ^_0,·) converges weakly in C([0,T], L^p_(^d)) to J. * For all f_0 ∈ L^ and (t,x) ∈ [0,T] ×^d, | S^*(t,0)f_0(x) | ≤f_ J(t,x). Items (<ref>) and (<ref>) follow immediately from Theorem <ref>. 
To prove (<ref>), let b^ and ϕ^_0,t be as in the statement, let f^ be the solution of (<ref>) with drift b^ and initial condition f^_0 = f * ρ_, where ρ_ is a standard mollifier. Then, for (t,x) ∈ [0,T] ×^d, |f^(t,x)| = |f^_0(ϕ^_0,t(x))| (D_x ϕ^_0,t(x)). The statement follows upon sending → 0 and appealing to the weak convergence of f^ and (D_x ϕ^_0,t). Continuing the formal discussion from above, note that, if f_0 and b are smooth, then v(t,x) = f_0(ϕ_0,t(x)) solves the initial value problem for the transport equation _t v + b(t,x) ·∇ v = 0 in [0,T] ×^d, v(0,·) = f_0. The time direction for (<ref>) is forward, in contrast to (<ref>), where it is backward. We therefore cannot appeal to the theory for that equation. Nevertheless, if f_0 ∈ L^ and b satisfies (<ref>), then a candidate for the solution of (<ref>) is v(t,x) := J(t,x)/S^*(t,0)f_0(x), (t,x) ∈ [0,T] ×^d, which, by Proposition <ref>(<ref>), is a well-defined bounded function. Note, however, that studying the stability properties of the formula (<ref>) is complicated by the fact that J and S^*(t,0)f_0 are stable only under weak convergence in C([0,T], L^p_). To complement (<ref>), we also consider the terminal value problem for the continuity equation: _t g + ÷(b(t,x) g) = 0 in (0,T) ×^d and g(T,·) = g_T. The formula in this case should be g(t,x) = g_T(ϕ_T,t(x)) (D_x ϕ_T,t(x)). In fact, both terms in the product have meaning: u(t,x) := g_T(ϕ_T,t(x)) is the solution of (<ref>) with terminal value g_T, and (D_x ϕ_T,t(x)) is well-defined almost-everywhere by Lemma <ref> and the fact that ϕ_T,t is increasing. Furthermore, by regularizing b and taking weak distributional limits, it turns out that (D_x ϕ_T,t(x)) is a measure bounded from below. However, u is not continuous in general, and so it is not possible to make sense of the product in (<ref>). This is exactly what leads to multiple measure-valued solutions of the equation in general; see the discussion of Example <ref> above. §.§ Second-order equations We finish this section by briefly demonstrating how the first-order results can be extended to the second-order equations -_t u - b(t,x) ·∇ u - [ a(t,x) ∇^2 u] = 0 in (0,T) ×^d and u(T,·) = u_T and _t f + ÷ (b(t,x) f) - ∇^2 · (a(t,x) f) = 0 in (0,T) ×^d and f(0,·) = f_0, where b satisfies (<ref>) and (<ref>), and a: [0,T] ×^d → S^d is given by a(t,x) = 1/2σ(t,x) σ(t,x)^τ and σ is a matrix-valued function satisfying (<ref>). Observe that this means that a_ij(t,x) depends only on (t,x_i,x_j) ∈ [0,T] ×^2 for all i,j = 1,2,…, d. For all u_T ∈ L^p(^d), 1 ≤ p <, there exists a unique u ∈ C([0,T], L^p) with the following properties: * There exists a modulus ω: [0,T] →_+ such that max_t ∈ [0,T]u(t,·)_L^p≤ e^ω(T-t)u_T_L^p. * If (b^)_ > 0 is any family of smooth functions satisfying (<ref>), (<ref>) uniformly in , converging a.e. to b as → 0, and (u^_T)_ > 0 is a family of smooth functions converging in L^p to u_T, then, as → 0, the unique solution u^ of (<ref>) converges in C([0,T] , L^p) to u. The same is true for vanishing viscosity limits. * For any (t,x) ∈ [0,T] ×^d), u(t,x) = [ Φ_T,t(x)], where Φ denotes the stochastic flow from Theorem <ref>. The argument follows almost exactly as in the first order case (Theorem <ref>), using the stability and uniqueness results in Theorem <ref> for the SDE (<ref>) (recall that we are assuming the bounded-oscillation condition (<ref>) in addition to (<ref>)). 
Upon regularizing the velocity field b, the formal a priori L^p estimate in part (<ref>) can be made rigorous, which, in particular, gives boundedness of the solution operator on L^p for all p ∈ [1,), uniformly in > 0, so that the initial datum u_0 can always be assumed to belong to C_c(^d) without loss of generality. The existence and uniqueness of the strong limit and its identification with the formula in part (<ref>) are then a consequence of Theorem <ref>. If u_T is increasing (decreasing), then the same is true for u(t,·) for all t ∈ [0,T], which, again, can be checked with the representation formula (<ref>), or by the differentiating the equation and using (<ref>). For now, we do not discuss the question of characterizing solutions in a PDE sense. Similar results can be obtained for the equation with lower order terms -_t u - [a(t,x) ∇^2 u] - b(t,x) ·∇ u - c(t,x)u - d(t,x) = 0 in (0,T) ×^d, u(T,·) = u_T. For every f_0 ∈ L^p, 1 ≤ p <, there exists a distributional solution of (<ref>) with the following properties: * f is obtained uniquely from weak limits in C([0,T], L^p) upon replacing b with a regularization, or from vanishing viscosity limits, and the resulting solution operator is bounded on L^p, with bound depending only on the assumptions for b, for all p ∈ [1,). * If p ∈ [1,), t ∈ [0,T] and u∈ L^p', and if u is the solution identified in Theorem <ref> of (<ref>) in [0,t] ×^d with terminal value u(t,·) = u, then ∫ f(t,x)u(x)dx = ∫ f_0(x) u(0,x)dx. * For all t ∈ [0,T], f(t,·) = (Φ_t,0)_# f_0, where Φ_t,0 is the stochastic flow from Theorem <ref>. Thus, if f_0 is a probability density and X_0 is a random variable independent of the Wiener process with density function f_0, it follows that f(t,·) is the probability density function of Φ_t,0(X_0) (which is absolutely continuous in view of the regular Lagrange property). * If f_0 ≥ 0 and g ≥ 0 is a nonnegative distributional solution of (<ref>), then f = g. The proofs of parts (<ref>)-(<ref>) proceed similarly to the proof of Theorem <ref>, by first regularizing b, proving uniform L^p-estimates, and passing to the limit, exploiting the uniqueness results for (<ref>). The uniqueness of nonnegative solutions in part (<ref>) now follows from the Ambrosio-Figalli-Trevisan superposition principle; see for instance <cit.>. § NONLINEAR TRANSPORT SYSTEMS We now turn to the study of the nonlinear transport systems discussed in the introduction, that is _t u + f(t,x,u) ·∇ _x u + g(t,x,u) = 0 in (0,T) ×^d, u(T,·) = u_T, where, for some integer m ≥ 1, u: [0,T] ×^d →^m, f:[0,T] ×^d ×^m →^d, and g: [0,T] ×^d ×^m →^m. We also consider the associated forward-backward system of characteristics posed for fixed (t,x) ∈ [0,T] ×^d with s ∈ [t,T] by -_s U_s,t(x) = g(s,X_s,t(x),U_s,t(x)) U_T,t(x) = u_T( X_T,t(x)), _s X_s,t(x) = f(s,X_s,t(x), U_s,t(x)) X_t,t(x) = x. §.§ Weak solutions We will introduce assumptions on f, g, and u_T, that, freezing u, make the equation (<ref>) exactly of the form of those nonconservative linear equations studied in the previous section. This then leads to a natural notion of weak solution via a fixed point operator. Assume { (f,g) :[0,T] ×^d ×^m →^d ×^m, f(t,x,·), g(t,x,·) are continuous for (t,x) ∈ [0,T] ×^d, ∫_0^T sup_(x,u) ∈^d ×^m( |f(t,x,u)|/1+|x| + |g(t,x,u)|/1 + |u|)dt < , and, for some C_0 ∈ L^1_+([0,T]), a.e. t ∈ [0,T], and all i,j ∈{1,2,… ,d} and k,ℓ∈{1,2,…, m }, _x_i f^j(t,·,·) ≥ - C_0(t) δ_ij, _u_k g^ℓ(t,·,·) ≥ -C_0(t) δ_kℓ, _x_i g^ℓ≤ 0, and _u_k f^j ≤ 0. . 
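To get a concrete feel for the forward-backward system above under assumptions of this type, the following sketch, which is not part of our arguments, treats the smooth scalar model d = m = 1, f(t,x,u) = -u, g(t,x,u) = -x, u_T(x) = -x; these data are illustrative, they satisfy the sign conditions above with C_0 = 0 (the growth condition holds only locally, which suffices here), and on the short horizon T - t = 1/2 the classical solution is u(t,x) = tan(t - T - π/4) x. For fixed (t,x), the characteristics are computed by a simple Picard iteration: integrate X forward with the current guess for U, then U backward from the terminal condition U_T,t = u_T(X_T,t), and repeat.

# Picard iteration for the forward-backward characteristics in the scalar model
# f(t,x,u) = -u, g(t,x,u) = -x, u_T(x) = -x (illustrative choices):
#   d/ds X = -U,  X(t) = x;     d/ds U = X,  U(T) = -X(T).
# On the horizon T - t = 0.5 the iteration is a contraction and U(t) approaches
# u(t,x) = tan(t - T - pi/4) * x, up to the O(ds) Euler error.
import numpy as np

def solve_characteristics(t, x, T=0.5, n=4000, iters=40):
    s = np.linspace(t, T, n + 1)
    ds = s[1] - s[0]
    U = np.full(n + 1, -x)                   # initial guess U ≡ u_T(x)
    for _ in range(iters):
        X = np.empty(n + 1)
        X[0] = x
        for k in range(n):                   # forward Euler for X' = f(s, X, U) = -U
            X[k + 1] = X[k] - ds * U[k]
        U_new = np.empty(n + 1)
        U_new[-1] = -X[-1]                   # terminal condition U(T) = u_T(X(T))
        for k in range(n, 0, -1):            # backward Euler step for U' = -g = X
            U_new[k - 1] = U_new[k] - ds * X[k]
        U = U_new
    return U[0]

x = 1.0
approx = solve_characteristics(0.0, x)
exact = np.tan(0.0 - 0.5 - np.pi / 4) * x
print(f"U at s = t: {approx:.4f}    exact u(0,x): {exact:.4f}")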
Under these assumptions, any solution operator for (<ref>) should preserve the decreasing property of solutions, which we show formally assuming the data are smooth. Assume f, g, and u_T are smooth with uniformly bounded derivatives, f and g satisfy (<ref>), and u_T:^d →^m is decreasing with respect to (<ref>). If u is a smooth solution of the terminal value problem (<ref>), then, for all t ∈ [0,T], u(t,·) is decreasing. Let i ∈{1,2,…,m} and k ∈{1,2,…, d} be fixed, and define v_ik := _k u^i. Taking the derivative in x_k of the i-component of (<ref>) gives the system _t v_ik + f(t,x,u)·∇ v_ik + ( _x_k f^k(t,x,u) + _u_i g^i(t,x,u) ) v_ik + _x_k g^i(t,x,u) = - ∑_j k_x_k f^j(t,x,u) v_ij - ∑_ℓ i_u_ℓ g^i(t,x,u) v_ℓ k - ∑_j=1^d ∑_ℓ=1^m _u_ℓ f^j(t,x,u)v_ℓ k v_i j. In view of (<ref>), the system satisfied by { v_ik : i = 1,2,…, m, k = 1,2,…, d }, after reversing time, is of the form in (<ref>), and so the result follows from Lemma <ref>. We now make the connection with the linear transport equation theory of the previous sections. In particular, note that if u(t,·) is decreasing for all t ∈ [0,T], then b(t,x) := f(t, x, u(t,x)) satisfies the assumptions of (<ref>), while c(t) = C_0(t) and d(t,x) = g(t,x,u(t,x)) + C_0(t) u(t,x) satisfy (<ref>). Assume f and g satisfy (<ref>) and u_T is decreasing. Then a locally bounded function u: [0,T] ×^d →^m that is decreasing in the ^d-variable and satisfies { (t,x) ↦u(t,x)/1 + |x|}∈ L^ is called a solution of (<ref>) if u is a solution of the linear equation (<ref>) with b(t,x) = f(t,x,u(t,x)), c(t) = C_0(t), and d(t,x) = g(t,x,u(t,x)) + C_0(t) u(t,x). Solving (<ref>) in the sense of Definition <ref> thus amounts to solving a fixed point problem. We similarly give a weak sense to the system (<ref>) by using the properties of the flow from Section <ref>. Assume that u_T ∈ L^_ and u is a solution of (<ref>) in the sense of Definition <ref>. Let ϕ be the forward regular Lagrangian flow as in Section <ref> corresponding to _t ϕ_t,s(x) = f(t, ϕ_t,s(x), u(t, ϕ_t,s(x))) t ∈ [s,T], ϕ_s,s(x) = x, and define, for 0 ≤ s ≤ t ≤ T and x ∈^d, X_t,s(x) = ϕ_t,s(x) and U_t,s(x) := u(t, ϕ_t,s(x)). Then, for all 1 ≤ p <, X_·,s, U_·,s∈ C([s,T], L^p_) ∩ L^p_(^d, W^1,1([0,T])), and, for a.e. x ∈^d, (<ref>) is satisfied in the integral sense. The regularity properties of X are seen from Theorem <ref>, as well as the fact that, for a.e. x ∈^d, X_t,s(x) = x + ∫_s^t f(r, X_r,s(x), u(r, X_r,s(x))dr = x + ∫_s^t f(r, X_r,s(x), U_r,s(x))dr. Theorems <ref> and <ref> together imply that U_·,s∈ C([s,T], L^p_). Also, in view of Theorem <ref>(<ref>), u satisfies, for any 0 ≤ s ≤ t ≤ T, U_s,s(x) = u(s,x) = u(t, X_t,s(x))+ ∫_s^t g(r, X_r,s(x), u(r, ϕ_r,s(x)) ) dr = U_t,s(x) + ∫_s^t g(r, X_r,s(x), U_r,s(x))dr. It follows that, in the distributional sense, _t U_t,s(x) = g(t, X_t,s(x), U_t,x(x)). Arguing as in the proof of Theorem <ref>, the right-hand side belongs to L^p_(^d, L^1([0,T])), and we conclude as in that theorem. §.§ Minimal and maximal solutions In this section, we show that the assumptions (<ref>) give an increasing structure to the equation (<ref>), which allows for the identification of a unique minimal and maximal solution. Assume f and g satisfy (<ref>), u_T is decreasing, and u_t (1 + |·|)^-1∈ L^. Then there exist two decreasing solutions u^+, u^- of (<ref>) in the sense of Definition <ref> with the following properties: * For all t ∈ [0,T], u^+(t,·) is upper semicontinuous, u^-(t,·) is lower semicontinuous, and both u^+ and u^- are continuous a.e. in [0,T] ×^d. 
* If u is any other solution, then u^- ≤ u ≤ u^+. Throughout this subsection, we may assume, without loss of generality, that the term C_0 ∈ L^1_+ in (<ref>) is 0. Indeed, the function ũ(t,x) = exp( ∫_t^T C_0(s)ds) u( t, exp( ∫_t^TC_0(s)ds )x ), is decreasing in x, satisfies (<ref>), and, formally, solves the equation (<ref>) with f and g replaced by f̃(t,x,u) = exp( -∫_t^T C_0(s)ds) f( t, exp( ∫_t^T C_0(s)ds) x, exp( -∫_t^T C_0(s)ds) u ) + C_0(t) x and g̃(t,x,u) = exp( ∫_t^T C_0(s)ds) g ( t, exp( ∫_t^T C_0(s)ds) x, exp( -∫_t^T C_0(s)ds) u ) + C_0(t) u. As described above, we assume without loss of generality that C_0 ≡ 0. For some C > 0 to be determined, set L := { u: [0,T] ×^d →^m : u(T,·) = u_T, u(t,·) is decreasing for all t ∈ [0,T], and |u(t,x)| ≤C(1 + |x|) for all (t,x) ∈ [0,T] ×^d }. We define a map S on L as follows: for u ∈ L, let v := S(u) be the solution, as in Theorem <ref>, of the linear transport equation _t v + f(t,x, u(t,x)) ·∇ v + g(t,x,u(t,x)) = 0 in (0,T) ×^d, v(T,·) = u_T. Then, in view of the solution formula (<ref>) and the bounds (<ref>) on f and g, there exists a sufficient large C, depending on u_T, such that S maps L into L. We now note that L forms a complete lattice under the partial order u ≤ũ ⇔ u^i(t,x) ≤ũ^i(t,x) for all (t,x) ∈ [0,T] ×^d, i = 1,2,…, m; that is, every subset of L has a greatest lower bound and least upper bound, which is a consequence of the uniformly bounded linear growth of solutions in L. Suppose now that u, ũ∈ L satisfy u ≤ũ under the order (<ref>), and set b(t,x) := f(t,x, u(t,x)) and d(t,x) := g(t,x,u(t,x)). Then (<ref>) with C_0 ≡ 0 implies that b(t,x) ≥ f(t,x,ũ(t,x)) and d(t,x) ≤ g(t,x,ũ(t,x)), and so, in particular, v := S(u) and ṽ := S(ũ) are respectively a sub and supersolution of the linear equation (<ref>) with c(t) ≡ 0. It follows from Theorem <ref> that v ≤ṽ, and therefore S is increasing on the complete lattice L with respect to the partial order (<ref>). The existence of a unique maximal and minimal solution are now a consequence of the Tarski lattice-theoretical fixed point theorem <cit.>. The continuity a.e. in [0,T] ×^d of u^+ and u^- now follows from Theorem <ref>. Observe now that any version of the maximal solution u^+ is a solution in the sense of Definition <ref>. Because u^+(t,·) is decreasing, it is continuous a.e., and its maximal version is upper semicontinuous, which is therefore also the unique maximal solution in the everywhere-pointwise sense. A similar argument shows that the minimal everywhere-pointwise solution u^- is lower-semicontinuous in the spatial variable, and we conclude. The fixed point theorem of <cit.> used above further characterizes u^+ and u^- as u^+ = sup{ u ∈ L: S(u) ≥ u } and u^- = inf{ u ∈ L: S(u) ≤ u }, where the sup and inf are understood with respect to the partial order (<ref>). We alternatively characterize the maximal and minimal solutions in terms of appropriately defined sub and supersolutions of the equation (<ref>). Recall that ρ^ and ρ_ are the one-sided mollifying functions from subsection <ref>. We then define a notion of sub and supersolution. Assume f and g satisfy (<ref>) with C_0 ≡ 0 and u_T is decreasing. A function u:[0,T] ×^d →^m that is decreasing in ^d is called a subsolution (resp. supersolution) of (<ref>) if u(T,·) ≤ u_T (resp. u(T,·) ≥ u_T) and u^ := u * ρ^ (resp. u_ := u * ρ_ satisfies -_t u^ - f(t,x, u(t,x)) ·∇ u^ - g(t,x,u(t,x)) ≤ 0 ( resp. -_t u_ - f(t,x,u(t,x)) ·∇ u_ - g(t,x,u(t,x)) ≥ 0 ) a.e. in [0,T] ×^d. 
In other words, sub and supersolutions in the sense of (<ref>) are sub and supersolutions of the corresponding linear equations identified in Definition <ref>. It follows that a function which is both a sub and supersolution is in fact a solution. Under the conditions of Definition <ref>, assume that u and v are two subsolutions (resp. supersolutions) of (<ref>). Then u ∨ v (resp. u ∧ v) is also a subsolution (resp. supersolution). We prove only the subsolution statement, as the other one is analogously proved. It is clear that max{ u(T,·) , v(T,·) }≤ u_T and u ∨ v is decreasing in ^d. Set u^ = u * ρ^ and v^ = v * ρ^. Let Ψ: ^m ×^m →^m be smooth, increasing, and satisfy Ψ(u,v) ≥ u ∨ v for all u,v ∈^m. Note that, using (<ref>) with C_0 ≡ 0, f(t,x,u(t,x)) ∧ f(t,x,v(t,x)) ≥ f(t,x, u(t,x) ∨ v(t,x)) and g(t,x,u(t,x)) ∨ g(t,x, v(t,x)) ≤ g(t,x,u(t,x) ∨ v(t,x)), and so a simple application of the chain rule gives _t Ψ(u^, v^) + f(t,x, u ∨ v) ·∇Ψ(u^, v^) + g(t,x,u ∨ v) ≥ 0. In particular, for any smooth positive test function ϕ∈ C_c^((0,T) ×^d), ∫_0^T ∫_^d( Ψ(u^,v^)(t,x) _t ϕ(t,x) - g(t,x,u ∨ v)ϕ(t,x) ) dxdt+ ⟨∇ (f(·,·, u ∨ v)ϕ) , Ψ(u^, v^) ⟩≤ 0, where the last term is understand as the pairing between locally finite measures and continuous functions, because f(t,x,(u ∨ v)(t,x)) is BV in x. We may then approximate max(·,·) with such functions Ψ and determine that, in the distributional sense, _t (u^∨ v^) + f(t,x, u ∨ v) ·∇ (u^∨ v^) + g(t,x,u ∨ v) ≥ 0. For δ > 0, we convolve both sides of the above inequality with the one-sided mollifier ρ^δ and obtain, in view of (<ref>), _t (u^∨ v^) * ρ^δ + f(t,x,u ∨ v) ·∇[ (u^∨ v^) * ρ^δ] + g(t,x,u ∨ v) ≥ 0. For fixed δ, as → 0, (u^∨ v^) * ρ^δ→ (u ∨ v) * ρ^δ locally uniformly, and we may therefore send → 0 in the above inequality, again using f ∈ BV, to obtain the desired subsolution inequality for (u ∨ v) * ρ^δ. Assume f and g satisfy (<ref>) with C_0 ≡ 0, u_T is decreasing, and u is a subsolution (supersolution) of (<ref>) in the sense of Definition <ref>. Then there exists a subsolution (supersolution) ũ such that u ≤ũ (u ≥ũ) and ũ is continuous a.e. in [0,T] ×^d. For such a subsolution u, let ũ be the solution of the linear transport equation _t ũ + f(t,x,u(t,x)) ·∇ũ + g(t,x,u(t,x)) = 0 in (0,T) ×^d, ũ(T,·) = u_T. Then, by Theorem <ref>, u ≤ũ, and ũ is continuous a.e. in [0,T] ×^d. By (<ref>), f(t,x,u(t,x)) ≥ f(t,x,ũ(t,x)) and g(t,x,u(t,x)) ≤ g(t,x,ũ(t,x)), and it follows that ũ is a subsolution of (<ref>). The argument for supersolutions is identical. Let f and g satisfy (<ref>) with C_0 ≡ 0, and assume u_T has linear growth and is decreasing. Let u^+ and u^- be the maximal and minimal solution from Theorem <ref>. Then, in the sense of Definition <ref>, u^+ is the pointwise maximum of all subsolutions, and u^- is the pointwise minimum of all supersolutions. In view of Lemma <ref>, the maximum/minimum in Proposition <ref> may be restricted to sub/supersolutions that are continuous a.e. in [0,T] ×^d. We prove only the statement for u^+, since the proof is analogous for u^-. Let ũ^+(t,x) := sup{ u(t,x) : u is a subsolution}. Because u^+ is itself a subsolution, we clearly have u^+ ≤ũ^+. Suppose now that u is a subsolution of (<ref>), and let v be the solution of _t v + f(t,x,u) ·∇ v + g(t,x,u) = 0 in (0,T) ×^d and v(T,·) = u_T. It then holds that u and v are respectively a subsolution and solution of the linear equation (<ref>) with b(t,x) = f(t,x,u(t,x)), c(t,x) = 0, and d(t,x,) = g(t,x,u(t,x)). Then Theorem <ref> implies u ≤ v. 
Note also that v belongs to the lattice L from the proof of Theorem <ref>, because v(T,·) = u_T. If S is the fixed-point map from the proof of that theorem, we then set w = S(v), which, by a similar argument, satisfies w ∈ L and S(v) = w ≥ v. By the characterization of u^+ by the Tarski fixed point theorem, we must have u ≤ u^+, and therefore ũ^+ ≤ u^+. §.§ Continuous solutions We now investigate when the maximal and minimal solution u^+ and u^- identified in the previous subsection coincide. In this subsection, we prove a comparison principle for continuous sub and supersolutions, so that, in particular, u^+ = u^- if both are continuous. As we will see by example in the forthcoming subsections, this can fail in general if the assumption of continuity is dropped. We will first introduce a different but equivalent notion of solution for continuous solutions, using the theory of viscosity solutions. Let us assume throughout this subsection, in addition to (<ref>), that f and g are uniformly Lipschitz continuous and bounded, and C_0 ≥ 0 is constant. A function u: [0,T] ×^d →^m is a viscosity subsolution (supersolution) of (<ref>) if u(t,·) is decreasing for all t ∈ [0,T], and, whenever ϕ∈ C^1([0,T] ×^d, ), i = 1,2,…, m, and u^i,*(t,x) - ϕ(t,x) (resp. u^i_*(t,x) - ϕ(t,x)) attains a local maximum (resp. minimum) at (t_0,x_0), -ϕ_t(t_0,x_0) - f(t_0,x_0, u^*(t_0,x_0)) ·∇ϕ(t_0,x_0) - g(t_0,x_0,u^*(t_0,x_0)) ≤ 0 (resp. -ϕ_t(t_0,x_0) - f(t_0,x_0, u_*(t_0,x_0)) ·∇ϕ(t_0,x_0) - g(t_0,x_0,u_*(t_0,x_0)) ≥ 0). A viscosity solution is both a sub and supersolution. Assume (<ref>) and (<ref>). If u is a sub and supersolutions in the sense of Definition <ref> and is almost-everywhere continuous in [0,T] ×^d, then u is a viscosity sub (super) solution in the sense of Definition <ref>. In view of Lemma <ref>, no generality is lost in considering sub or supersolutions that are continuous a.e. in [0,T] ×^d. We prove only the subsolution property. Note that, because u is decreasing, we have u(t,·)^* = u(t,·) a.e. Then u^ = u * ρ^ is a (classical) subsolution of -_t u^ - f(t,x, u^*(t,x)) ·∇ u^ - g(t,x, u^*(t,x)) ≤ 0, and, as → 0, u^↘ u^*. Assume that u^*(t,x) - ϕ(t,x) attains a maximum at (t_0,x_0), which we may assume to be strict by adding a quadratic penalization to ϕ. For > 0, let (t_,x_) denote the maximum point of u^(t,x) - ϕ(t,x) on [0,T] ×B_1(x_0), and let (s,y) be a limit point of (t_,x_)_ > 0 along some subsequence (_n)_n ∈. By Lemma <ref>(<ref>), u^(t_,x_) ≤ u(t_, x_ - 2 1) + 1, and so lim sup_n → u^_n(t__n, x__n) ≤ u^*(s,y). Let now (s_n, y_n)_n ∈∈⊂ [0,T] ×B_1(x_0) be such that (s_n,y_n) (t_0,x_0) and u^_n(s_n,y_n) u^*(t_0,x_0). Taking n → in the inequality u^_n(s_n,y_n) - ϕ(s_n,y_n) ≤ u^_n(t__n, x__n) - ϕ(t__n, x__n) yields u^*(t_0,y_0) - ϕ(t_0,y_0) ≤ u^*(s,y) - ϕ(s,y). This implies that (s,y) = (t_0,x_0), the limit holds along the full family → 0, and lim_→ 0 u^(t_,x_) = u^*(t_0,x_0). By standard maximum principle considerations, and the fact that -f and g are increasing in u and ∇ u^≤ 0, -_t u^(t_,x_0) - f(t_,x_, u^(t_,x_)) ·∇ u^(t_,x_) - g(t_,x_, u^(t_,x_)) ≤ 0. Sending → 0 gives the desired subsolution inequality. Assume that f and g satisfy (<ref>) and (<ref>), and let u and v be respectively a bounded sub and supersolution of (<ref>) in the sense of Definition <ref> such that either u is continuous and v is lower-semicontinuous, or u is upper-semicontinuous and v is continuous. If u(T,·) ≤ v(T,·), then, for all t ∈ [0,T], u(t,·) ≤ v(t,·). 
The proofs of both statements are almost identical, so we prove only the statement when u is continuous and v is lower-semicontinuous. We first prove the result under the additional assumption that, for some δ > 0, u is a sub-solution of -_t u - [ f(t,x,u) + δ 1]·∇ u - g(t,x,u) ≤ - δ 1 and u(T,·) - v(T,·) ≤ -δ 1. Standard arguments from the theory of viscosity solutions then imply that w(t,x,y) := u(t,x) - v(t,y) is a subsolution of -_t w - [ f(t,x,u) + δ 1 ] ·∇_x w - f(t,x,v) ·∇_y w - g(t,x,u) + g(t,x,v) ≤ - δ 1. Let M > 0 be such that u≤ M and v≤ M, fix λ≥ 1 and β > 0, set [0,T] ∋ t ↦ϕ(t) := max_x,y ∈^dmax_i=1,2,…,m( u^i(t,x) - v^i(t,y) - λ/2 |x-y|^2 - β/2(|x|^2 + |y|^2) ), and, for some constant C > 0 to be chosen, define t̂ := sup{ t ∈ (-,T] : e^-Cλ^1/2 (T-t)ϕ(t) > 0 }. As λ→ and β→ 0, ϕ(T) = max_x,y,i( u^i(T,x) - v^i(T,y) - λ/2 |x-y|^2 - β/2 (|x|^2 + |y|^2) ) ≤ -δ + o(1), and so, if λ is sufficiently large and β is sufficiently small, depending on δ, then ϕ(T) < 0, and therefore t̂ < T. We next claim that t̂ < 0 for all sufficiently large λ and small β, which implies ϕ(t) ≤ 0 for all t ∈ [0,T], and, hence, gives the result. If this were not true and 0 ≤t̂ < T, then choose i, x̂,ŷ such that 0 = ϕ(t̂) = u^i(t̂,x̂) - v^i(t̂,ŷ) - λ/2 |x̂ - ŷ|^2 - β/2 (|x̂|^2 + |ŷ|^2). We then have u^j(t̂,x̂) - v^j(t̂,ŷ) ≤ u^i(t̂,x̂) - v^i(t̂,ŷ) for all j =1,2,…, m, and, from the fact that u and v are decreasing, λ(x̂_j - ŷ_j) + βx̂_j ≤ 0 and λ(x̂_j - ŷ_j) - βŷ_j ≤ 0 for all j = 1,2,…, d. Moreover, in view of the boundedness of u and v, arguing as in for instance <cit.>, λ|x̂ - ŷ| ≤ 2^3/2 M^1/2λ^1/2, max(|x̂|, |ŷ|) ≤ 2^3/2 M^1/2β^-1/2, and, for some δ_λ and _β satisfying lim_λ→δ_λ = lim_β→ 0_β = 0, λ|x̂ - ŷ|^2 ≤δ_λ and β(|x̂|^2 + |ŷ|^2 ≤_β. On [t̂,T] ×^d ×^d, the function u^i(t,x) - v^i(t,y) - λ/2 |x-y|^2 - β/2 (|x|^2 + |y|^2) - e^-C λ^1/2(t - t̂)ϕ(t̂) achieves a maximum at (t̂, x̂,ŷ), and so, applying Definition <ref> to the doubled equation (<ref>), -δ≥ C λ^1/2ϕ(t̂) - ∑_j = 1^d (f^j(t̂, x̂, u(t̂, x̂)) + δ)( λ (x̂_j - ŷ_j) + βx̂_j) - g^i(t̂, x̂, u(t̂, x̂)) + ∑_j=1^d f^j(t̂, ŷ, v(t̂, ŷ))( λ (x̂_j - ŷ_j) - βŷ_j ) + g^i(t̂, ŷ, v(t̂, ŷ)). Because of (<ref>), (<ref>), (<ref>), and (<ref>), we have ∑_j=1^d [ f^j(t̂, x̂,u(t̂,x̂)) + δ] (λ(x̂_j - ŷ_j) + βx̂_j ) ≤∑_j=1^d [ f^j(t̂, x̂,v(t̂,ŷ) + (u^i(t̂,x̂) - v^i(t̂,ŷ)) 1 ) - δ]( λ( x̂_j - ŷ_j) + βx̂_j ) ≤ -1/d^1/2δλ|x̂ - ŷ| + ∑_j=1^d f^j(t̂, ŷ,v(t̂, ŷ)) ( λ (x̂_j - ŷ_j) - βŷ_j ) + ∇_u fλ|x̂ - ŷ|(u^i(t̂,x̂) - v^i(t̂, ŷ)) + ∇_x fλ|x̂ - ŷ|^2 + ( δ/d^1/2 + f) β |x̂| + fβ |ŷ|. Similarly, using the fact that g^i is nondecreasing in the u_j-variable for all j i, we have g^i(t̂, x̂, u(t̂, x̂)) ≤ g(t̂, ŷ, v(t̂, ŷ)) + ∇_u g_ ( u^i(t̂, x̂) - v^i(t̂, ŷ)) + ∇_x g_ |x̂ - ŷ|. Therefore, (<ref>) becomes, using (<ref>), (<ref>), and (<ref>), -δ ≥( Cλ^1/2 - ∇_u fλ|x̂ - ŷ| - ∇_u g)(u^i(t̂,x̂) - v^i(t̂,ŷ)) + λ|x̂ - ŷ| ( 1/d^1/2δ - C δ_λ^1/2/2) - ∇_x fδ_λ - 2^3/2 M^1/2∇_x gλ^-1/2 - 2^3/2 M^1/2( δ/d^1/2 + f)β^1/2 - 2^3/2 M^1/2fβ^1/2 - 2Cλ^1/2/2_β. Once again using (<ref>), Cλ^1/2 - ∇_u fλ|x̂ - ŷ| - ∇_u g≥ C λ^1/2 - 2^3/2∇_u fλ^1/2M^1/2 - ∇_u g≥ 0, provided C is large enough, depending only f, g and M. Therefore, since u^i(t̂, x̂) ≥ v^i(t̂, ŷ), the first two lines of (<ref>) are nonnegative if λ is sufficiently large, depending on δ. Taking first λ sufficiently large and then β sufficiently small, we conclude that the right-hand side of (<ref>) is strictly larger than -δ, which is the desired contradiction. We now prove the general statement. 
For δ > 0 and (t,x) ∈ [0,T] ×^d, set ũ(t,x) = u(t,x + δψ_1(t) 1) - δψ_2(t) 1, where ψ_1 and ψ_2 are two nonnegative scalar functions satisfying ψ_1(T) = 0 and ψ_2(T) = 1. Then ũ(T,·) = u(T,·) - δ 1, and so, using the the fact that u is nonincreasing, we formally compute[The computations can be made rigorous using test functions and Definition <ref>. Note that the argument for u above is always (t, x + δψ_1(t) 1).] -_t ũ - [ f(t,x,ũ) - δ]∇ũ - g(t,x, ũ) = _t u - δ∇_x u · 1 ψ̇_1 + δψ̇_2 1 - [ f(t, x + δψ_1 1, u - δψ_2 1) + δ] ∇ u - g(x + δψ_1 1, u - δψ_2 1) + δ 1 ≤δ (- 1 ·∇ u) ( ψ̇_1 + ∇_u f_ψ_2 + 1) + δ( ψ̇_2 + ∇_u g_ψ_2 + 1 ) 1. We then choose ψ_1 and ψ_2 so as to satisfy ψ̇_1 = - ∇_u f_ψ_2 - 1 and ψ̇_2 = - ∇ _u g_ψ_2 - 1, that is, ψ_1(t) = ∇_u f_ ( ∇_u g_ + 1) /∇_u g_^2 ( e^∇_u g_(T-t) - 1) + (1 - ∇_u f_/∇_u g_) (T-t) and ψ_2(t) = ( 1 + 1/∇_u g_)e^∇_u g_(T-t) - 1/∇_u g_. Then (<ref>) becomes exactly (<ref>), and so, for (t,x) ∈ [0,T] ×^d, u(t,x + δψ_1(t)) - δψ_2(t) ≤ v(t,x). We conclude upon sending δ→ 0 and appealing to the continuity of u. Let us note the following corollary of Theorem <ref>, Lemma <ref>, and Theorem <ref>. Under the same conditions on f and g as in Theorem <ref>, let u_T be bounded and decreasing and let u^+ and u^- be the maximal and minimal solutions identified in Theorem <ref>. Then u^+(t,x) = inf{ u(t,x) : u is a continuous viscosity supersolution with u(T,·) ≥ u_T } and u^-(t,x) = sup{ u(t,x) : u is a continuous viscosity subsolution with u(T,·) ≤ u_T }. The comparison principle with continuous sub and supersolutions also implies a conditional uniqueness and stability statement. Assume f and g satisfy (<ref>) and (<ref>) and u_T is bounded, continuous, and decreasing. If there exists a bounded and continuous viscosity solution u of (<ref>), then it is the unique viscosity solution. Moreover, if u^ is the unique classical solution of _t u^ + f(t,x,u^) ·∇ u^ + g(t,x,u^) + Δ u^ = 0 in (0,T) ×^d, u^(T,·) = u_T, then, as → 0, u^ converges locally uniformly to u. The uniqueness is a consequence of the comparison principle in Theorem <ref>, since u is both a sub and supersolution. The local uniform convergence as → 0 of u^ to the unique continuous viscosity solution u is then proved with standard stability arguments from the theory of viscosity solutions, arguing with half-relaxed limits and applying the comparison principle, Theorem <ref>. §.§ A one-dimensional example For the rest of the paper, we assume d = m= 1 and consider the example of (<ref>) with f(t,x,u) = -u and g(t,x,u) = 0: -_t u + u _x u = 0 in (0,T) ×, u(T,·) = u_T. Observe that (<ref>) can be formally written in conservative form, where it becomes the Burgers equation with flux u^2/2. When d > 1, it is not necessarily the case that (<ref>) can be written as a conservation law, and the product of f(t,x,u) with ∇ u cannot be understood by integrating by parts. If the decreasing function u_T is Lipschitz continuous, then the system of characteristics (<ref>) can be solved in some maximal time interval [T-τ, T] depending on the Lipschitz constant for u_T. This gives rise to a Lipschitz continuous solution of (<ref>), which can easily be checked to be a viscosity solution in the sense of Definition <ref> and is therefore unique. However, for t < τ, the system of characteristics fails to be solvable on [t,T], due to the formation of shocks. This is in contrast to the case where u_T is Lipschitz and increasing, in which case (<ref>) has a Lipschitz continuous solution on (-, T]. 
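To make the dependence on the Lipschitz constant concrete, consider for instance the terminal datum u_T(x) = -x, which is decreasing with Lipschitz constant 1; the following computation is only an illustration and is not needed in what follows. The characteristics satisfy X_s,t(x) = x - U(s-t) with U = u_T(X_T,t(x)) = -(x - U(T-t)), hence U(1 - (T-t)) = -x and u(t,x) = U_t,t(x) = -x/(1 - (T-t)) for T - t < 1. One checks directly that ∂_t u = x/(1 + t - T)^2 and ∂_x u = -1/(1 + t - T), so that -∂_t u + u ∂_x u = -x/(1+t-T)^2 + x/(1+t-T)^2 = 0, while the spatial gradient blows up as t ↓ T - 1; the maximal interval of solvability is therefore [T - τ, T] with τ = 1/Lip(u_T) = 1.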
We therefore see that the situation where a continuous solution exists is not typical on an arbitrary time horizon, even if the function u_T is smooth. We study in this subsection a simple example of decreasing and discontinuous terminal data, namely u_T(x) = 1_{x ≤ 0}. Viewed as a scalar conservation law with flux u^2/2, (<ref>) is a Riemann problem whose solvability is resolved with the theory of entropy solutions. Indeed, the unique entropy solution is given by u(t,x) = 1_{x ≤ (T-t)/2}, describing a shock wave moving with constant speed 1/2 (in reverse time), the constant 1/2 being uniquely determined from the Rankine-Hugoniot condition. §.§.§ Nonuniqueness of discontinuous solutions When viewed as a nonlinear transport equation, there is a strong failure of uniqueness for this problem. Let c: [0,T] → ℝ satisfy c(T) = 0 and -c' ∈ [0,1] a.e. Then u_c(t,x) := 1_{x ≤ c(t)} is a solution, in the sense of Definition <ref>, of (<ref>) with terminal data (<ref>). Define u_c^ε = u_c * ρ^ε. Then u_c^ε(t,x) = ∫_-∞^c(t)ρ^ε(x-y)dy, and so ∂_t u_c^ε(t,x) = ρ^ε(x-c(t)) c'(t) and ∂_x u_c^ε(t,x) = - ρ^ε(x - c(t)). Thus, -∂_t u^ε_c(t,x) + u_c(t,x) ∂_x u^ε_c(t,x) = ρ^ε(x - c(t))[ -c'(t) - 1_{x ≤ c(t)}]. Recall that ρ^ε is supported in [-2ε, 0], and so the above expression is nonzero only if x ≤ c(t). In that case, because -c' ≤ 1, -c'(t) - 1_{x ≤ c(t)} = -c'(t) - 1 ≤ 0, and we conclude that u^ε_c is a subsolution. The proof of the supersolution property is similar, and uses the fact that -c' ≥ 0. Note that u_c is continuous a.e. in [0,T] × ℝ^d, and, therefore, by Lemma <ref>, is also a viscosity solution in the sense of Definition <ref>. The system of characteristics (<ref>) becomes, for (t,x) ∈ [0,T] × ℝ^d, - ∂_s U_s,t(x) = 0, U_T,t(x) = 1_{X_T,t(x) ≤ 0}, ∂_s X_s,t(x) = -U_s,t(x), X_t,t(x) = x. Recall that the system, and, in particular, the equation for X, is viewed as a differential inclusion, where, for all s ∈ [t,T], U_s,t(x) = U_T,t(x) = {1} if X_T,t(x) < 0, {0} if X_T,t(x) > 0, and [0,1] if X_T,t(x) = 0. By Proposition <ref>, each solution u_c corresponds to a solution (X^c,U^c) of (<ref>), which we can compute explicitly: namely, for x ∈ ℝ and 0 ≤ s ≤ t ≤ T, U^c_s,t(x) = u_c(t,x) = 1_{x ≤ c(t)} and X^c_s,t(x) = x - s + t if x < c(t), X^c_s,t(x) = x - c(t)(s-t)/(T-t) if x = c(t), and X^c_s,t(x) = x if x > c(t). We note that, for any solution (U,X), we must have X_s,t(x) = x - s + t for x < 0 and X_s,t(x) = x for x ≥ T-t. However, for x ∈ [0,T-t], there is ambiguity in the speed at which the characteristic X_s,t(x) travels: it can move with speed -1, 0, or anything in between, where in the latter case the characteristic is constrained to end at X_T,t(x) = 0. The precise value x ∈ [0,1] for which X_T,t(x) = 0 thus encodes the choice of the shock-wave speed c(t) in the definition of u_c. §.§.§ Stochastic selection An important feature in the theory of entropy solutions of scalar conservation laws is the stability under regularizations, and in particular under vanishing viscosity limits. In the above context, the entropy solution (<ref>) of (<ref>) arises as the strong limit in C([0,T], L^1_loc(ℝ^d)), as ε → 0, of the unique smooth solution u^ε of -∂_t u^ε + u^ε ∂_x u^ε = ε ∂_x^2 u^ε, u^ε(T,·) = u^ε_T, where u^ε_T: ℝ^d → ℝ^d is smooth and lim_ε→ 0 u^ε_T = 1_(-∞,0) in L^1_loc. By contrast, we show here that any solution u_c can arise as a limit from suitably regularized equations. Fix any c ∈ W^2,1([0,T], ℝ) satisfying c(T) = 0 and -c' ∈ (0,1).
Then there exists θ_c ∈ L^1([0,T]) such that, if u^_T is as in (<ref>) and u^ is the unique classical solution of -_t u^ + u^_x u^ = ( _x^2 u^ + θ_c(t) |_x u^|^2 ), u^(T,·) = u^_T, then, for all 1 ≤ p <, as → 0, u^→ u_c strongly in C([0,T], L^p_). For θ: [0,T] → to be determined, define, for t ∈ [0,T], f(t,v) = log(θ(t) v + 1)/θ(t) if θ(t) 0 and θ(t) v > -1, v if θ(t) = 0 and note that f(t,·)^-1(u) = e^θ(t) u - 1/θ(t). For > 0, let v^ be the classical solution of -_t v^ + f(t,v^)v^_x = _t f(t, v^)/_v f(t, v^) + _x^2 v^ in [0,T] × and v^(T,·) = f(T,·)^-1( u^_T)). By the maximum principle, it is easily checked that the values of v^ fall within the domain of f and its derivatives, and the solution of (<ref>) is given exactly by u^(t,x) = f(t, v^(t,x)). Standard stability results <cit.> yield that, as → 0, v^ converges strongly in C([0,T], L^p_) for all p ∈ [1,) to the unique entropy solution v of -_t v + f(t,v)_x v = _t f(t,v)/_v f(t,v) in [0,T] × and v(T,·) = f^-1(T, ·)(1)_x<0 + f^-1(T,·)(0) _x > 0. We then set F(t,v) = (θ(t)v + 1)log(θ(t) v + 1) - θ(t) v/θ(t)^2, which satisfies _v F(t,v) = f(t,v), so that the equation (<ref>) is equivalently written as -_t v + _x F(t,v) = _t f(t, v)/_v f(t, v) in [0,T] × and v(T,·) = f^-1(T, ·)(1)_x<0 + f^-1(T,·)(0) _x > 0. The unique entropy solution v of the conservative equation (<ref>) is then given by v(t,x) = v^-(t), x < c(t), v^+(t), x > c(t), where, for 0 ≤ t ≤ T, v_t^+ and v_t^- are defined by -_t v^±_t = _t f(t, v_t^±)/_v f(t, v_t^±) t ∈ [0,T], v^-_T = f^-1(T,·)(0), v^+_T = f^-1(T, ·)(1), and c(t) is determined by the Rankine-Hugoniot condition c(T) = 0 and -c'(t) = F(t, v^-(t)) - F(t, v^+(t))/v^+(t) - v^-(t). Observe that t ↦ f(t, v^±(t)) is constant, and therefore -c'(t) = F(t, f(t,·)^-1(1)) - F(t, f(t, ·)^-1(0)) /f(t, ·)^-1(1) - f(t, ·^-1(0) C(θ(t)), where, for θ∈, C(θ) = θ e^θ - e^θ - 1/θ(e^θ - 1). We observe that C' > 0, lim_θ→ - C(θ) = 0, and lim_θ→ + C(θ) = 1. Thus, letting c: [0,T] → with -c' ∈ (0,1) be as in the statement of the theorem, we finally choose θ_c(t) := C^-1(-c'(t)), so that u^ converges strongly in C([0,T], L^p_), p ∈ [1,), to f^-1(t,·)(v(t,x)) = u_c(t,x). The restrictions that c”∈ L^1 and c' lie strictly within (-1,0) are put in place to ensure that θ_c ∈ W^1,1 and - < θ_c <, so that the equation for v^ is well-posed. Achieving speeds c that are only Lipschitz, and where c' is allowed to be either 0 or 1, is possible by letting θ_c = θ_c^ depend suitably on . The selection principle in Theorem <ref> can be recast in terms of the nonunique generalized characteristics problem (<ref>). Namely, we fix a filtered probability space (Ω, F = ( F_t)_t ∈ [0,T], P) with a complete, right-continuous filtration F and a Wiener process W: [0,T] ×Ω→ progressively measurable with respect to F, and, for > 0 and (t,x) ∈ [0,T] ×, we consider the forward-backward SDE (FBSDE) on the interval s ∈ [t,T]: - d_s U^_s,t(x) = Z^_s,t(x)dW_s - 1/2θ_c(s) Z^_s,t(x)^2 ds, U^_T,t(x) = u^_T(X^_T,t(x)), d_s X^_s,t(x) = - U^_s,t(x)ds + √(2) dW_s, X^_t,t(x) = x. Let c and θ_c be as in Theorem <ref>. For every > 0 and (t,x) ∈ [0,T] ×, there exists a unique strong solution (X^_·,t(x), U^_·,t(x), Z^_·,t(x)) to the FBSDE (<ref>). Moreover, with probability one, as → 0, X^_·,t→ X^c_·,t in C([t,T], L^p_()) for all p ∈ [1,), where X^c is as in (<ref>). The existence of a unique solution is a consequence of the nondegeneracy of the noise; see <cit.>. Moreover, if u^ solves (<ref>), then U^_s,t(x) = u^(s, X^_s,t(x)) and Z^_s,t(x) = √(2) (_x u^)(s, X^_s,t(x)). 
It follows that X^ε_·,t is the solution of the SDE (<ref>) with b(t,x) = -u^ε(t,x), and so, by Theorems <ref> and <ref>, as ε → 0 with probability one, X^ε_·,t converges in C([t,T], L^p_loc(ℝ)) to the regular Lagrangian flow corresponding to the drift -u_c. This is exactly X^c, and we conclude.
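For the reader's convenience, we record the elementary computation behind the shock speed 1/2 in the Riemann example above. In reversed time τ = T - t the equation becomes the Burgers equation ∂_τ u + ∂_x(u^2/2) = 0 with initial datum 1_{x ≤ 0}. The Rankine-Hugoniot condition for a shock joining u_- = 1 on the left to u_+ = 0 on the right gives the speed σ = (u_-^2/2 - u_+^2/2)/(u_- - u_+) = 1/2, and since the flux is convex the Lax condition u_- > σ > u_+ holds; the shock x = τ/2 = (T-t)/2 is therefore admissible, recovering the entropy solution u(t,x) = 1_{x ≤ (T-t)/2} displayed above.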
http://arxiv.org/abs/2307.06142v1
20230712125312
Orthodox or Dissident? The Evolution of Bohm's Ontological Reflections in the 1950s
[ "Andrea Oldofredi" ]
physics.hist-ph
[ "physics.hist-ph", "quant-ph" ]
Orthodox or Dissident? The Evolution of Bohm's Ontological Reflections in the 1950s Andrea OldofrediContact Information: Centre of Philosophy, University of Lisbon, Alameda da Universidade 1600-214, Lisbon, Portugal. E-mail: [email protected] ============================================================================================================================================================================= David Bohm has often been considered unable to understand the meaning of the quantum revolution and to embrace its radical metaphysical implications. Similarly, his pilot-wave theory was negatively portrayed as an attempt to restore a classical and deterministic worldview. Against this background, the aim of this paper is twofold: in the first place, it will be argued that the accusations of dogmatism advanced by several eminent physicists contra Bohm are scientifically unfounded, showing a biased understanding of his works. Referring to this, two case studies will be discussed: the Bohm-Pauli correspondence, and the difficult relationship between Bohm and Leon Rosenfeld, a fervent supporter of Bohr's philosophy of quantum mechanics. As the reader will see, both examples clearly indicate that the opposition against the pilot-wave approach was for the most part not based on scientific grounds. In the second place, I will reconstruct and analyze the evolution of Bohm's philosophical reflections about ontology, scientific realism and pluralism studying private correspondences as well as his main works in the fifties culminated in the book Causality and Chance in Modern Physics. Underlining the originality of Bohm's thoughts, it will be concluded that his perspective can be characterized a form of local realism. Keywords: David Bohm; Wolfgang Pauli; Quantum mechanics; Infinitism; Local Realism Acknowledgements: I acknowledge the support of the Birkbeck College, University of London for their kind permission to publish letters of David Bohm whose originals are held in their archives, and for having provided an electronic copy of the archival material. Many thanks go to the audiences in Warsaw and Lisbon for valuable feedback on previous drafts of this essay as well as to Olga Sarno for careful comments on the final version of the manuscript. Financial support for this research has been provided by the Fundação para a Ciência e a Tecnologia (FCT) (Grant no. 2020.02858.CEECIND). § INTRODUCTION Quantum Mechanics (QM) is one of our most accurate answers to questions concerning the intrinsic structure of matter. Such a theory describes the behavior of elementary particles, atoms and molecules in a way that drastically changed our classical conception of the world, speaking about ontological indeterminacy and inherently stochastic quantum jumps. For instance, according to its standard formulation isolated quantum systems do not instantiate properties with definite values, i.e. quantum items possess indefinite attributes prior to experimental observation, marking a significant ontological difference with respect to classical objects. Measurements in QM, then, do not reveal pre-existing values of physical magnitudes—which in fact are essentially contextual—and their results are inherently probabilistic. In this theoretical context it is therefore a meaningless effort to look for a causal story that would explain a particular experimental outcome from its initial conditions (cf. <cit.>, <cit.>, <cit.> <cit.>). 
This particular image of the world found large support and endorsement among the physicists who contributed to the quantum revolution—as for example Bohr, Born, Dirac, Heisenberg, Jordan and Pauli to mention a few—to the point that such a Weltanschauung soon became the orthodoxy with respect to the interpretation of the quantum formalism. Although historians and philosophers of physics argued that it is disputable whether a common metaphysical perspective existed among the fathers of quantum mechanics[There was disagreement for example about whether the wave function undergoes an actual collapse in measurement situations. Whereas according to Bohr there was no such a collapse of the ψ function, being entanglement and complementarity the real novelties introduced by QM (cf. <cit.>, p. 672), for Born, Dirac, Heisenberg, von Neumann and others the stochastic jumps and the non-commutativity of quantum operators were the primary innovations of quantum theory.], there is however a precise sense in which one may properly speak about a cohesive and unitary orthodox or “Copenhagen” view of the theory. In fact, Bohr and the physicists who shaped modern QM shared a set of ideas which can be characterized as the inner core of such orthodoxy. Referring to this, Freire underlined that in spite of the existence of important differences, both the intellectual backgrounds and the scientific views of people like Bohr, Pauli, Heisenberg, Born, and Jordan, who had been working together on the collective construction of quantum mechanics, had several points in common. All of them endorsed both indeterminism and the assumption of the corpuscular and discrete nature of atomic phenomena. They also firmly believed in the completeness of quantum theory. [...]. [They] were attached to the revolutionary character of quantum mechanics, and were unsympathetic to any attempt to restore such classical ideals like causality and visualizability in microphysics (<cit.>, p. 81). On the same vein, <cit.> note that: [t]he existence of an “orthodox view” of quantum mechanics was generally taken for granted since the 1930s. However, the meaning of such a label was far from being univocally determined. Several factors contributed to keeping its definition vague, and by the same token to reinforcing the impression that an orthodox view did indeed exist (<cit.>, p. 99). Remarkably, such an orthodox view was so vastly supported by the founding fathers of QM that they not only formed an intellectual imperialism—as argued by <cit.>—but also believed that their philosophical perspective constituted the only feasible interpretation of the quantum formalism, and not merely a possible reading of it (<cit.>, p. 191). Against this background, there were a few eminent physicists dissatisfied by the philosophical content of quantum theory. For instance, on the one hand Einstein, Podolsky and Rosen famously argued that the QM could not have been considered a complete description of the physics at quantum scales, opening the discussion about its possible completion with hidden parameters (cf. <cit.>, see also <cit.>). Notably, Einstein endorsed a statistical (or ensemble) interpretation of the quantum formalism, showing in several places his discomfort with the usual view. On the other hand, Schrödinger proposed a realist interpretation of |ψ|^2 as charge density, and later proved that the measurement problem is a logical consequence of the principles of QM (cf. <cit.>). 
Despite these criticisms against the orthodoxy, Schrödinger's proposal was shown to be empirically inadequate, while Einstein's opposition to such philosophy of QM never translated into a full-fledged interpretation of the theory. The very first alternative formulation of QM, in fact, saw the light only in the early fifties with the work of David Bohm, who rediscovered and extended de Broglie's pilot-wave theory (cf. <cit.>; for a historical account see <cit.>). It is well-known, however, that such an interpretative framework was poorly received in virtue of its ontological picture, which was erroneously intended as an attempt to restore a classical— therefore outdated—worldview (cf. <cit.>, <cit.>, <cit.>. On the contrary, as we will see in the remainder of this work, Bohm's goal was to show that QM could have been provided with a clear ontology. Therefore, the indeterminate and fuzzy aspects of the orthodox view were not necessary features of the theory and could have been removed from its metaphysics, reintroducing an anschaulich, i.e. visualizable, intelligible picture of the physical processes taking place at quantum length scales.[Cf. the debate between Heinseberg and Schrödinger on the notion of Anschaulichkeit in quantum theory nicely resumed in <cit.>.] The aim of this paper is twofold: in the first place, we will explain that the accusations of dogmatism made by many physicists contra Bohm are for the most part scientifically unfounded, showing a philosophically biased understanding of the implications and significance of his works (Section <ref>). Referring to this, after a brief introduction of the pilot-wave theory, in the next section we discuss two important case studies: the Bohm-Pauli correspondence and the difficult relationship between Bohm and Leon Rosenfeld, who was a fervent supporter of Bohr's philosophy of quantum mechanics. As the reader will see, both examples clearly indicate that the opposition against the pilot-wave theory was generally not based on scientific grounds, but rather were of conceptual nature. In the second place, we will reconstruct and analyze the evolution of Bohm's philosophical reflections about ontology, scientific realism and pluralism studying private correspondences as well as his main works in the fifties culminated in the book Causality and Chance in Modern Physics (Section <ref>). Underlining the originality of Bohm's thoughts, it will be argued that his perspective can be characterized as a form of local realism (Section <ref>). Conclusions are drawn in Section <ref>. § DAVID BOHM: ORTHODOX OR DISSIDENT? David Bohm joined the foundational debate proposing the very first alternative interpretation of the quantum formalism in his papers A suggested interpretation of the quantum theory in terms of hidden variables Part I & II published in January 1952 in the prestigious Physical Review.[In 1951 Bohm published the textbook Quantum Theory, providing an introduction to the standard interpretation of QM. His interests in the foundational debates and his dissatisfaction with the traditional viewpoint was sparked (among other sources) by several discussions with Einstein, who is also acknowledged in Bohm's 1952 papers. For details on the political and personal vicissitude of Bohm between 1950 and 1952 see <cit.>.] There he showed that a causal description of quantum phenomena in terms of particles moving along continuous trajectories was not only possible, but also mathematically and physically consistent. 
In his essays Bohm criticized what he called the “usual interpretation” of QM—referring mainly to Bohr's and Heisenberg's ideas—objecting that the mere empirical consistency of quantum theory is not a sufficient reason to exclude other (possibly better) alternative formulations. Similarly, he stressed that QM forces us to abandon the idea of a precise characterization of physical systems at quantum scale as well as an accurate description of their dynamical evolution “without proving that such a renunciation is necessary” <cit.>, p. 168. Moreover, he was deeply dissatisfied by the inability of the standard theory to explain the actualization of macroscopic measurement outcomes. To overcome these shortcomings Bohm's interpretation exhibits a clear metaphysics describing quantum systems in terms of particles guided by wave functions—considered real physical fields—whose dynamical evolution is in turn governed by the Schrödinger equation iħ∂/∂ tψ=Hψ. Such a proposal is ontologically unambiguous because Bohmian corpuscles always have a precise localization in three-dimensional space and a well-defined velocity independently of any observation. Contrary to the principles of standard QM, then, the postulate according to which ψ alone provides the complete specification of a certain system is rejected. To this regard, Bohm also stated that in his framework the uncertainty principle becomes just “an effective practical limitation on the possible precision of measurements” (<cit.>, p. 171). Hence, it should not be interpreted as an irreducible impossibility to conceive position and momentum as simultaneously defined quantities, in open opposition to what has been claimed in <cit.>. In order to outline Bohm's theory one can express the wave function in polar form ψ=Re^(iS/ħ)—where R and S represent two coupled real functions corresponding to the amplitude and the phase of the wave. Then, posing P(x)=R^2(x), where P(x) represents the probability density, one obtains the following equations for R and S: ∂ P/∂ t + ∇·(P∇ S/m)=0, ∂ S/∂ t + (∇ S)^2/2m+V(x)-ħ^2/4m[∇^2 P/P-1/2(∇ P)^2/P^2]=0. The former is the quantum continuity equation for the probability density, whereas the latter is the quantum Hamilton-Jacobi equation describing the motion of a particle (or a configuration of particles) with kinetic energy (∇ S)^2/2m and subjected to the influence of both a classical and a new quantum potential: U(x)=-ħ^2/4m[∇^2 P/P-1/2(∇ P)^2/P^2]=-ħ^2/2m∇^2R/R. Finally, Bohm defined the particles' velocity as follows: v=∇ S/m. Therefore, (<ref>) can be rewritten as ∂ P/∂ t + ∇· (Pv)=0, where Pv is interpreted as the mean current of the particles in a given configuration. This equation is particularly important since the Born distribution holds as a consequence of it—hence the empirical adequacy of the pilot-wave theory is formally established. Referring to this, it should be stressed that in Bohm's theory probabilities do not refer to a inherent indeterminacy of quantum objects, but rather possess an epistemic character: in experimental situations we can neither know, nor manipulate the initial positions of the particles, consequently measurement outcomes will be practically unpredictable. Thus, in this context randomness arises as in classical statistical mechanics. 
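As a purely illustrative aside (not part of Bohm's original treatment), the guidance equation v = ∇ S/m can be integrated numerically once ψ is known. The following minimal Python sketch assumes a free one-dimensional Gaussian packet, initially centered at the origin with zero mean momentum, for which the Bohmian velocity field and the exact trajectories x(t) = x(0)σ(t)/σ(0) are available in closed form; the packet parameters and the final time are arbitrary choices made only for the illustration.

import math

# Illustrative sketch: Bohmian trajectories for a free 1-D Gaussian wave packet.
# For this packet the velocity field v = (dS/dx)/m is known in closed form,
# v(x,t) = x * a^2 * t / (1 + a^2 * t^2), with a = hbar / (2 m sigma0^2),
# and the exact trajectory is x(t) = x(0) * sqrt(1 + a^2 * t^2).

hbar, m, sigma0 = 1.0, 1.0, 1.0      # arbitrary illustrative units
a = hbar / (2.0 * m * sigma0**2)

def velocity(x, t):
    # Bohmian velocity field obtained from the phase S of the spreading Gaussian.
    return x * a**2 * t / (1.0 + a**2 * t**2)

def trajectory(x0, t_final=5.0, n_steps=5000):
    # Integrate the guidance equation dx/dt = v(x,t) with classical fourth-order Runge-Kutta.
    dt = t_final / n_steps
    x = x0
    for i in range(n_steps):
        t = i * dt
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(x + dt * k3, t + dt)
        x += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

t_final = 5.0
for x0 in (0.5, 1.0, 2.0):
    exact = x0 * math.sqrt(1.0 + (a * t_final)**2)
    print(f"x0 = {x0}: numerical {trajectory(x0, t_final):.5f}, exact {exact:.5f}")

In this simple case the trajectories never cross and merely dilate with the spreading of |ψ|^2, which illustrates the equivariance property mentioned above: positions distributed according to the Born rule at one time remain so distributed at later times.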
In addition, it is worth mentioning that in the second part of his paper Bohm provides a precise theory of quantum measurement, explaining how experimental results emerge from the motion of individual quantum particles without the introduction of stochastic quantum jumps, filling the explanatory gap left by quantum theory (cf. <cit.>, Section 2). This remarkable achievement showed the actual possibility to provide a detailed description of the physical processes causally responsible for measurement outcomes, avoiding the conceptual and technical difficulties present in standard QM. Notwithstanding the formal consistency of this proposal and its empirical equivalence with respect to the predictions of QM, the pilot-wave theory was poorly received, being seen as a sterile attempt to restore an anachronistic worldview. As a consequence, Bohm was considered a dogmatic physicist stuck with reactionary and “orthodox”—this time in the sense of classical—ideas, unable to fully embrace the conceptual revolution generated by quantum theory. In the remainder of this section we will discuss two relevant case studies where such a negative opinion clearly emerges. §.§ The Bohm-Pauli Correspondence Taking into account the Bohm-Pauli correspondence that occurred between July and December 1951, one can readily understand that the Austrian physicist had little regard for Bohm's work, considering it a plagiarism of de Broglie's pilot-wave theory.[Notably, Bohm was unaware of de Broglie's work on the pilot-wave theory; Pauli mentioned it to him, cf. <cit.>, p. 346.] As is well known, Pauli raised strong objections against the latter at the Solvay congress in 1927 (cf. <cit.>), and from the letters at our disposal one may fairly say that he approached Bohm's proposal with a negative bias. By the summer of 1951, in fact, Pauli had not yet read his manuscripts carefully, dismissing the new formulation of the pilot-wave approach as a “simple minded”, cheap solution to the problems of QM (cf. Letters 1263 and 1264 in <cit.>). Evidence for these claims can also be found in other letters that Pauli sent to Fierz, Panofsky and Rosenfeld. For instance he wrote “Die Sache von Bohm ist beinahe ein plagiat!”[Bohm's thing is almost plagiarism!, author's translation from the original German.] to the former on 10 January 1952; similarly, a month later Panofsky was told that Bohm's theory was a plain copy of de Broglie's old works of 1926-1927. Even more explicitly, Pauli defined the new pilot-wave theory as a “revival of de Broglie's old errors of 1927” in a letter to Rosenfeld dated 16 March 1952 (cf. respectively, <cit.>, Letter 1340, Letter 1364, and Letter 1386). This lack of interest may be explained by three related factors: firstly, in the early 1950s physicists were working on the pressing issues affecting quantum field theory—thus, a more advanced theory with respect to non-relativistic QM. Secondly, already at the Solvay conference Pauli, in agreement with Bohr and against the opinions of de Broglie, Lorentz and Schrödinger, argued that a spacetime representation of quantum phenomena is not obtainable in virtue of the polydimensional character of the ψ-function (cf. <cit.>, pp. 214-216). Finally, the significant empirical and theoretical progress of the orthodox view made this interpretation the sole possible understanding of quantum physics, as already stressed in the previous section.
Given (i) that Pauli had been deeply involved in research on the quantum theory of fields and quantum electrodynamics since the 1930s (cf. <cit.>), and (ii) that he was very close to the Copenhagen view, it can safely be claimed that he considered the pilot-wave approach a dead program, so that reading Bohm's papers would have been a waste of time. Indeed, Pauli initially attempted to reject Bohm's proposal with the same criticisms he had raised against de Broglie in 1927. This can be straightforwardly inferred since in July 1951 Bohm wrote that in the second draft of his papers all the objections against de Broglie's theory are answered in detail, calling his attention to the relevant sections of the essays, as for instance Section 7 of <cit.> and the second Appendix of <cit.>. Referring to this, Bohm insisted on asking Pauli to read the papers thoroughly before discarding the causal interpretation: With regard to your questions raised in the letter, they are answered in my “long” paper. You really have put one in an impossible position. If I write a paper so “short” that you will read it, then I cannot answer all of your objections. If I answer all of your objections, then the paper will be too “long” for you to read. I really think that it is your duty to read these papers carefully, especially if you wish to carry out your promise of sending me your “permanent and persistent scientific opposition” (in <cit.>, p. 346, Letter 1264). Similarly, in mid-October 1951 Bohm wrote another long letter replying to the same objections reiterated by Pauli in previous correspondence and explaining several details of the pilot-wave theory, with particular attention to the scattering of particles, the theory of measurement, and the empirical equivalence between his theory and QM. There Bohm pointed out again that “[i]n the second version of the paper, these objections are all answered in detail” and that “[i]t is difficult for me to answer your objections in detail without simply repeating what is in section 7 of paper I and in the first five or six sections of paper II” (in <cit.>, p. 389, p. 390 respectively, Letter 1290). From the available letters it is possible to deduce not only that Pauli continued to ignore (or to read superficially) Bohm's manuscripts until the fall of 1951, but also that he tried to refute the pilot-wave theory appealing to von Neumann's no-go theorem, allegedly proving the impossibility of completing QM with hidden parameters (cf. <cit.>, Chapter 4). In this regard, Bohm explained that such a result is not in contradiction with his theory, underlining essentially that von Neumann implicitly assumed that “the hidden variables are only in the observed system and not in the measuring apparatus. On the other hand, in my interpretation, the hidden variables are in both the measuring apparatus and the observed system. Moreover, since different apparatus is needed to measure momentum and position, the actual results in each respective type of measurement are determined by different distributions of hidden parameters” (<cit.>, p. 392, Letter 1290, emphasis in the original). Remarkably, he also emphasized to Pauli that von Neumann himself admitted the logical consistency of the pilot-wave approach.[This fact is reported by Bohm also in a letter to Melba Phillips in early 1952 (printed in <cit.>, p. 147). For details cf. <cit.>, Section 9, p. 187, <cit.>, <cit.>, p. 47 and <cit.>.]
A few weeks later, precisely on 20 November 1951, Bohm wrote again to Pauli who in the meanwhile read with more attention his papers and gave non-negative feedback on them, as one can understand from the outset of this correspondence. Such a letter is particularly interesting for our discussion, since there Bohm made non-trivial statements about the possibility to modify his own interpretation, showing the willingness to develop and generalize it beyond the energy/length scale where non-relativistic QM is applied. In fact, he claimed that the pilot-wave theory may entail new physics at very short length scales, where it admits extensions which would make the evolution equation for the ψ-function non-linear, contrary to the case of standard QM (see also Section 9 of <cit.>). To this regard, Bohm proposed an interesting inductive inference about the form of future theories, underlying that wherever we have found linear differential equations, we have always found that they are only approximations to non-linear equations (e.g. sound, light, equation for heat flow, etc.). There is no way to prove that the same is not true for ψ waves; in fact, it seems implausible to me to suppose that even though in all other fields, nature must be described by non-linear equations, there will be one field, viz, quantum theory, where no such considerations will ever be needed. But if the equations are actually non-linear in a higher approximation, then the usual interpretation cannot be the ultimate one. I therefore believe that the practical necessity of restricting the description to a part of the world will not necessarily lead to a limitation on the accuracy of description of that part, but that it does so only under present conditions of experiment, in which we do not know how to take advantage of the richer laws prevailing at a more fundamental level, which would permit us to reduce these disturbances far below limits set by our present incomplete laws” (<cit.>, p. 430, Letter 1309). Along the same reasoning and considering very short scales, Bohm offered speculative suggestions speaking about the conceivability of measurement of unlimited precision as well as the existence of superluminal signals implying a modification of the special theory of relativity, since—he says—”we have proof only that relativity in its present form holds for distances much greater than 10^-13 cm” (ibid., p. 431). In addition, he even stated that the causal interpretation would be able to provide a new definition of simultaneity which would differ with respect to the one introduced in general relativity. Taking into account the elements contained in this correspondence, Bohm's unconventional ideas concerning the restricted validity of physical theories such as relativity and quantum mechanics appear forcefully. Furthermore, it is already clear that he considered the causal interpretation just one among the possible steps towards a better comprehension of quantum physics and not as a final word about its ontology. Hence, one could have easily seen not only the originality of Bohm's views and his scientific pluralist spirit—that would have emerged more systematically in later works—but also that he was not a scientist with a conservative attitude, being ready to envision new paths and directions for his research in both physics and philosophy. 
Interestingly, Pauli reviewed Bohm's paper for Physical Review, as underlined in <cit.>, and on 3 December 1951 he eventually acknowledged the internal consistency of Bohm's work, writing that: I do not see any longer the possibility of any logical contradiction as long as your results agree completely with those of the usual wave mechanics and as long as no means is given to measure the values of your hidden parameters both in the measuring apparatus and in the observed system (<cit.>, p. 436, Letter 1313). However, he also raised two distinct problems concerning the treatment of photons and the lack of description of the phenomenon of pair creation: I wish also to point out that the whole streamline picture is essentially non relativistic (as it fails both for photons and for pair generation). Therefore I cannot consider an argument as sound, which claims to reform the theory in the relativistic region, whilst it attacks just the non relativistic part of the theory which is correct (ibid.). Bohm faced these criticisms in a letter sent at the end of December 1951, closing their correspondence for that year.[From an exchange between Pauli and Pais we know that between mid-April and the beginning of May 1952 Bohm wrote to the former a “very crazy and impudent letter” and they were then on rather bad terms, cf. <cit.>, p. 627, Letter 1412. Unfortunately, this letter has been lost.] In order to reply to these last objections, he underlined that in his interpretation photons are not treated as particles. Indeed, as shown in Appendix A of <cit.>, field configurations are introduced, corresponding to the transverse part of the vector potentials A(x). In his work, as underlined also in <cit.>, Bohm provides a guidance equation for the field coordinates, and the ψ function is interpreted as a functional of all the Fourier components of the vector potentials. As Bohm pointed out, his treatment of the electromagnetic field leads to the same predictions as the standard formulation of QM, thereby answering the first criticism. In order to reply to the second objection, the Dirac sea picture is explicitly mentioned for the first time: “[t]here, the creation of a pair is simply a transition from a negative to a positive energy state”. Such a pilot-wave formulation of the hole theory was developed in the subsequent years, as we can see in <cit.>, reaching a mature formulation in his last published work <cit.>.[This revival of Dirac's ideas has recently been put forward in many essays, as for instance <cit.>, <cit.> and <cit.>.] Thus, Bohm concluded, “there are no processes which can be treated by the usual interpretation and which cannot also be treated in the causal interpretation” (<cit.>, p. 444, Letter 1315). In order to present historical facts accurately and to be fair to Pauli's criticisms, it should be noted that the Dirac sea hypothesis was poorly regarded at that time (cf. <cit.> for details). Thus, Bohm's answer to the question about pair creation could well have left Pauli unsatisfied. In addition, it should be underlined that Pauli's last objection is still employed as one of the main arguments against the Bohmian interpretation of the quantum formalism.
Nevertheless, we should also say on the one hand that the standard formulation of quantum field theory is not immune from severe philosophical issues, and on the other hand that interesting steps have been made in order to provide a coherent pilot-wave approach to the phenomena of particle creation and annihilation, as for instance in <cit.>, <cit.>, <cit.>, <cit.>. Thus, although pair creation was not treated by the theory contained in the 1952 papers, this was not a reason to discard the new research program in principle. Therefore, even in the absence of arguments proving the pilot-wave theory definitively flawed, or justifying the portrayal of Bohm as a reactionary physicist, Pauli nonetheless continued to describe him as a dogmatic thinker and to criticize his approach to QM.[See also the irreverent, almost disrespectful description of the pilot-wave theory given by Pauli in a letter to Fierz recalling anecdotes of 1927 (<cit.>, Letter 1337).] For example, he defined Bohm as a “Sektenpafaff” (a priest of a sect) in his letters to Pais and Stern—cf. <cit.>, Letter 1412 and Letter 1454 respectively—and his work as “ein klassisch-deterministischer Mythos des atomphysikalischen Geschehens” (a classical deterministic myth of atomic physics) in his letters to Fierz (<cit.>, Letter 1337 and Letter 1368). In the same vein, after the publication of Bohm's papers Pauli felt the need to discredit them publicly for two related reasons. In the first place, he feared that without a public reaction against the causal interpretation his opposition would be reduced to a mere philosophical disagreement—his worries were justified since Bohm communicated e.g. to Einstein that Pauli had eventually acknowledged the logical consistency of the pilot-wave approach.[In December 1951 Bohm wrote to Einstein: “[i]t may interest you to know that Pauli has admitted the logical consistency of my interpretation of the quantum theory in a letter, but he still rejects the philosophy. He states that he does not believe in a theory that permits us even to conceive of a distinction between the observer's brain and the rest of the world”. Folder C11, David Bohm Papers, Birkbeck College, University of London.] In this regard he wrote to Fierz: There is also the danger that—if I simply remain silent—Bohm will spread the word that I have nothing to object to his “theory” “except philosophical prejudices” (<cit.>, p. 501, Letter 1337, author's translation from the original German). Similar concerns also appear in a letter to Rosenfeld: It was necessary for me to write something about it, because I am not only always asked “what I think about it”, but also because the younger fellow travellers of Bohm (mostly `deterministic' fanaticists, more or less marxistically coloured) are spreading incorrect rumors about my opinions. (They also try to persuade de Broglie, that there is some truth in his old attempts of 1927) (<cit.>, p. 582, Letter 1386). In the second place, Pauli was afraid that Bohm's theory would find support among young scientists, especially in France, where Bohr's complementarity doctrine was negatively received[This can be understood from Pauli's correspondence with Destouches in <cit.>.], as we can understand from the already mentioned letter to Fierz (cf. Letter 1337 in <cit.>). In fact, for the volume honoring the 60th birthday of de Broglie, Pauli wrote an essay that explicitly criticized the pilot-wave theory, arguing that it did not preserve the symmetry between the position and momentum representations.
Moreover, he did not miss the occasion to share his negative opinions about the theory in several letters, warning other physicists not to pay attention to it, as reported in <cit.>. In sum, Pauli was and remained convinced that Bohm had a dogmatic faith in determinism, portraying him as a physicist with an obsolete Weltanschauung anchored to outdated ideas.[It is interesting to highlight an ironic plot-twist: in one of their last exchanges, the “reactionary” Bohm accused Pauli of being excessively conservative, of being stuck with old positivistic ideals that prevented him from appreciating the novelty of his proposal: Since you admit the logical consistency of my point of view, and since you cannot give any arguments showing that it is wrong, it seems to me that your desire to hold on to the usual interpretation can have only one justification; namely, the positivist principle of not postulating constructs that do not correspond to things that can not be observed. This is exactly the principle which caused Mach to reject the reality of atoms, for example, since no one in his day knew how to observe them. [...] After all, we must not expect the world at the atomic level to be a precise copy of our large scale experience (as proponents of the usual interpretation are so fond of saying). Rather than accept a perfectly logical and definite concept of polydimensional reality that leads to the right results in all known cases, and opens up new mathematical possibilities, you prefer the much more outlandish idea that there is no way to conceive of reality at all at the atomic level. Instead you are willing to restrict your conceptions to results that can be observed at the long scale level, even though more detached conceptions are available, which show at least, never the production of these results might be understood causally and continuously (<cit.>, p. 442, Letter 1314). ] Given the textual evidence emerging from their correspondence and from Bohm's published papers, however, this opinion seems to be rather unjustified. On the one hand, determinism was not an issue for Bohm; he thought in fact that new quantum theories would be needed in order to describe the physics at very short length scales, and he expected them to be non-linear for the ψ-field (cf. Letter 1309 in <cit.>). On the other hand, in his letters Bohm provided innovative views about several other issues, as for instance the poly-dimensional character of ψ, the non-locality of his approach and the limited validity of our best physical theories, showing his pluralist attitude about interpretational debates. §.§ Rosenfeld's War Against Bohm's “Obscurantism” Leaving behind Pauli's case—which shows a negatively biased reception of the pilot-wave theory—it is worth noting that Bohm's work was discarded and discredited by several physicists on political grounds or, even worse, without solid scientific motivations. A remarkable example is given by Oppenheimer's reaction to the causal interpretation.[Oppenheimer was Bohm's PhD advisor.] As the physicist Max Dresden reported in an interview (see <cit.>, p. 133), during a talk he gave in Princeton—the former affiliation of Bohm before his emigration from the U.S. to Brazil—the audience's reaction was unimaginable: no one read Bohm's paper since it was simply considered a waste of time. Oppenheimer declared that his theory was a form of “juvenile deviationism”, an expression used also by Abraham Pais.
Remarkably, at the end of Dresden's talk Oppenheimer pronounced that "if we cannot disprove Bohm, then we must agree to ignore him", as also reported in <cit.>, p. 203. Against this hostile background, the second historical case study that we will be considering concerns the harsh ideological critique of Bohm's theory made by Léon Rosenfeld, who was "Niels Bohr's closest assistant for epistemological matters", as underlined by Freire. He was himself a Marxist, and being very close to the philosophy of complementarity "saw the battle against the causal interpretation as part of the defense of what he considered to be the right relationship between Marxism and science" (<cit.>, p. 36). As we have already seen a few lines above, Rosenfeld and Pauli exchanged letters in which they spoke about Bohm's reformulation of the pilot-wave theory. From the correspondence of 16 March 1952 Rosenfeld's negative opinion about the latter is crystal-clear: I hope that the people who told you that I was "interested" in Bohm's heresy did not suggest that I was in any way impressed by it! I am only interested in stamping out this new obscurantism, because it is positively harmful; I know some of Bohm's "fellow-travellers" and am distressed to see such intelligent and sincere young people waste their energy in this way (<cit.>, p. 587, Letter 1389). Apart from the colorful language used against Bohm's proposal, this letter is significant for two other reasons. On the one hand, Rosenfeld explicitly shared Pauli's need to publicly attack the pilot-wave theory in order to dissuade young scientists from supporting this interpretation. In this regard he wrote: "I feel we have also a duty to help these people out of the bog if we can. Your article is very forceful indeed and I enjoyed it very much; I hope it will make due impression" (ibid.). On the other hand, and contrary to Pauli's strategy, Rosenfeld thought that the causal interpretation had to be fought on philosophical rather than physical grounds, since its metaphysics contains "the root of all evil". Because of Bohm's explicit criticism of Bohr's view in his 1952 paper, Rosenfeld aimed at showing the inconsistency of the "metaphysical character of the deterministic pseudo-interpretation of quantum theory". Once again the issue of determinism—the source of Bohm's "obscurantism"—is mentioned, regardless of Bohm's own arguments, according to which the pilot-wave theory (i) is derived from the mathematical structure of quantum theory, and (ii) may be modified and made non-linear to describe physics at very short length scales, of the order of 10^-13cm, as clearly stated in Section 9 of <cit.>. It is worth noting that in May 1952 Rosenfeld personally wrote to Bohm in order to explain his reasons for denying the very existence of a debate concerning the interpretation of quantum theory. As reported by Freire, in a letter dated 30 May 1952 we read: "I certainly shall not enter into any controversy with you or anybody else on the subject of complementarity, for the simple reason that there is not the slightest controversial point about it" (<cit.>, p. 36). This is another clear illustration of what we already said above, namely that the Copenhagen perspective—in this case the Belgian physicist was referring to Bohr's complementarity doctrine—was so widespread and deeply rooted that it was taken to be the only conceivable interpretation of quantum theory.
Rosenfeld viewed the deterministic character of the pilot-wave theory as a return to superseded classical ideas, contrary to the principle of complementarity, which entails the abandonment of determinism. This criticism was also shared by other physicists, for instance Heisenberg, who stressed the unsuccessfulness of the then proposed alternatives to QM; in particular, he believed that these alternatives tried to "push new ideas into an old system of concepts belonging to an earlier philosophy" (<cit.>, p. 23). As we have seen, this view is also supported by Pauli, who in December 1954 wrote to Born: Contrary to all reactionary efforts (Bohm, Schrödinger etc. and in a certain sense also Einstein), I am certain that the statistical character of the ψ-function and thereby of the laws of nature (...) will determine the style of the laws for at least a few centuries. It could be that later on, something completely new will be found, for example in connection with the processes of life; but to dream of a way back, back to the classical style of Newton-Maxwell (and they are merely dreams, to which these gentlemen dedicate themselves) seems to me without hope, devious, of bad taste. And, we may add, it is not even a good dream (<cit.>, p. 887). Notably, Rosenfeld not only attacked Bohm's theory in papers and workshops—particularly important is the Colston Symposium held in Bristol in 1957, where he publicly criticized the causal approach, claiming that the complementarity view was the only conceivable option for QM (for more details see <cit.>, Section 1.2)—but also invited colleagues to oppose this proposal and to prevent its diffusion and circulation: [h]e pushed Frédéric Joliot-Curie—a Nobel prize winner and member of the French Communist Party—to oppose French Marxist critics of complementarity; advised Pauline Yates—Secretary of the "Society for cultural relations between the peoples of the British Commonwealth and the USSR"—to withdraw her translation of a paper by Yakov Ilich Frenkel critical of complementarity from Nature; asked Nature not to publish a paper by Bohm entitled "A causal and continuous interpretation of the quantum theory", and advised publishers not to translate one of de Broglie's books dedicated to the causal interpretation into English (<cit.>, p. 38). Joliot-Curie did not take part in Rosenfeld's campaign, but several distinguished physicists adhered to it, for instance Abraham Pais, Vladimir Fock and Adolf Grünbaum, among others. Notably, Fock, one of the most prominent figures of Soviet physics, defined the pilot-wave theory as a widespread illness, thinking that such an approach represented a dead research program given its return to classical ideas (cf. <cit.>). Finally, to mention one last result of Rosenfeld's efforts against the diffusion of the causal view, he was able to prevent the publication of another manuscript Bohm submitted to Nature by pressing the editors of the journal not to accept it, as reported in <cit.> and <cit.>. As we have seen with many historical examples, several criticisms against the pilot-wave theory were not based on scientific grounds, and numerous physicists were opposed to this approach primarily for ideological or philosophical reasons, being convinced that the orthodox view could not be questioned. Notably, it was a widespread opinion that Bohm was trying to promote an old and outdated worldview.
However, his scientific pluralism and non-classical ideas emerge vigorously from his public and private writings, which also show a strong anti-dogmatic attitude towards the interpretational debate concerning the meaning of the quantum formalism. Referring to this—and contrary to the beliefs held by many critics—he considered the causal approach only an initial attempt to improve the physical and philosophical content of QM. In fact, Bohm himself suggested possible ways to modify the pilot-wave theory in order to test it against ordinary QM at sub-quantum scales, showing his willingness to modify, extend and revise his own theoretical framework. Moreover, he was ready to accept the non-classical features of the theory, such as the polydimensional character of the ψ-field and non-locality. In addition, it is remarkable that many contested the deterministic character of the causal interpretation, reading it as a sign of Bohm's conservative views, while determinism is merely a mathematical feature of the formalism employed. In fact, the velocity expressed in (<ref>) is a direct consequence of the Schrödinger equation, which is a deterministic dynamical law (cf. <cit.> on this specific issue). Furthermore, as already stressed, Bohm himself proposed several ways to modify such an equation, making it non-linear and inhomogeneous; indeed, he also advanced a stochastic approach to the causal view in a later work with the French physicist J.P. Vigier (cf. <cit.>). Thus, we can safely say that determinism was neither an a priori assumption of the causal view, nor some characteristic that Bohm wanted to restore. Unfortunately, notwithstanding the actual content of his papers and letters, he was seen, somewhat ironically, both as a reactionary physicist whose aim was to reintroduce classical ideas and as a dissident for his challenge to the orthodox view. Whereas in this section we have argued that it is incorrect to portray him as a conservative scientist, in what follows we will introduce in detail Bohm's reflections on the structure of reality and the representational power of physical theories. As we will see, Bohm's philosophy of physics not only reveals deep observations about the epistemic role of science, but also proves useful for contemporary debates about scientific realism and pluralism.

§ THE INFINITE STRUCTURE OF REALITY

Contrary to the positivistic doctrine directly or indirectly endorsed by several physicists supporting the orthodox view, David Bohm was a realist; namely, he supported the thesis according to which material entities and physical processes exist in the world independently of our minds, knowledge and observations.[In this regard, it is useful to recall that in the early 1950s he was influenced by Marx's and Engels' materialist and naturalist ontology. Consequently, Bohm was convinced that the world is composed solely of material things—which can take infinite forms—and that these not only exist independently of our minds, but also exhaust reality itself, i.e. there are no supernatural entities playing a role in the construction of our universe. For a discussion of the Marxist influence on Bohm's positions about realism in the 1950s cf. <cit.>, Chapter 6.] As a consequence of this philosophical position, science should be understood as our best effort to grasp and comprehend the inherent structure of reality, and this should in turn be clearly reflected in the principles of physical theorizing.
Referring to this, Bohm had little regard for positivism, defining it in several places as a poor working hypothesis. The history of physics shows in fact that the postulation of yet-unobserved entities has often been fruitful for future discoveries and observations—the conjecture about the atomistic nature of matter made in the nineteenth century being one of the most notable examples. Thus, to claim that only what can be (in principle) observed deserves to be considered real is an unjustified limitation on the scientist's ability to provide a precise representation of the physical world. Sticking to such a belief, Bohm argued, is what led generations of physicists to accept the obscure metaphysics of quantum theory as the only meaningful way to interpret its formalism. In a nutshell, this is the reason why Bohm was deeply dissatisfied not only with Bohr's and Heisenberg's ideas about QM, as we have seen at the outset of the previous section, but also with the claim, explicitly made by Rosenfeld, that the orthodox interpretation was the only feasible one. In open opposition to the then-dominant philosophy of QM, Bohm was convinced that ontological clarity had to be an essential feature of any theoretical framework, and insisted that quantum theory, too, must be provided with a clear metaphysical content, as one can easily see from the 1952 papers. As we already mentioned, he argued that the empirical robustness of QM and its contingent mathematical structure are not sufficient motivations to exclude a priori other ontologically clearer formulations of the theory, giving an explicit counterexample to the metaphysical imperialism of the orthodox view. In addition to ontological accuracy, in order to understand Bohm's philosophical views in the 1950s it is worth considering another key element, concerned with the representational power of physical theories—their capacity to describe (portions of) reality. It is indeed crucial to emphasize that in his opinion theoretical frameworks always have a limited domain of application: the laws and entities of a given theory can provide a good description of the physical world only within a specific interval of energy/length scales. In turn, reality is characterized in its totality by different levels of description, and no theory can be successfully employed at every scale. In this specific regard, Bohm interestingly argued that the pilot-wave theory, as well as standard QM, has a limited validity and that "at distances of the order of 10^-13cm or smaller and for times of the order of this distance divided by the velocity of light or smaller, present theories become so inadequate that it is generally believed that they are probably not applicable" (<cit.>, footnote 6). At these regimes he expected that new ontologies and new theories would be discovered, so that the pilot-wave theory and QM would emerge as limiting cases at distances much greater than 10^-13cm. Regarding the finite applicability of quantum theory, Bohm interestingly argued that the mathematical structure of standard QM is itself an obstacle to finding deeper physical laws, and consequently a more precise description of reality.
Namely, he claimed both privately and publicly that quantum theory is based on the unjustified assumption of linearity, which is a central, essential feature of its formalism and the cause of many paradoxes (<cit.>, <cit.>)[Interestingly, Bohm and Pauli discussed the necessity of the linearity of the Schrödinger equation; the reader may refer to Letters 1313, 1314 and 1315 included in <cit.>. The technical details of such a debate are not strictly relevant to our purposes and will not be mentioned in what follows. A public comment about linearity can be found in <cit.>, Section 9, p. 179.]: among all the possible changes that have been considered, people have avoided questioning the weakest assumption of all; viz, that all of physics must be contained in the theory of linear Hilbert spaces (<cit.>, Letter 1309, p. 430). Speaking about the linearity of the Schrödinger equation, as we have already seen in the previous section, Bohm underlined that in the history of physics, whenever linear differential equations have been found to govern a certain law or class of phenomena, they have later turned out to be approximations to non-linear equations.[Another important aspect that should be highlighted is that Bohm justifies his arguments to Pauli with inductive reasons; as we will see below, this inductive method appears frequently in Bohm's philosophy.] Thus, he claimed that there is no way to prove that this will not happen with the quantum mechanical ψ field as well. Contrary to the standard quantum mechanical case, Bohm underlined to Pauli that the interpretation of de Broglie (as extended by me) is potentially capable of leading to a richer variety of laws of nature than those which are consistent with the usual interpretation. Thus, the usual interpretation must assume a linear and homogeneous equation governing the coefficients of the "state vectors" in Hilbert space. From my point of view, no such an assumption is necessary. [...] I believe that the equation governing ψ may, for example be non-linear, and that the usual wave equation is only a linear approximation. [...] (ibid., p. 429). In particular, in this letter and in Section 9 of <cit.>, he hypothesized (i) that at distances of the order of 10^-13cm or smaller the wave field would be governed by a non-linear equation of motion, and (ii) that there is a direct coupling between the ψ field and the hidden parameters, such that the disturbance induced by an observer in measurement scenarios would be smaller than that imposed by the Heisenberg uncertainty relations. Thus, Bohm wrote to Pauli "it is quite possible to contemplate theories which would be inconsistent with the usual interpretation. [...] If this is true, then after we understand the new laws governing short distances, we should be able to make measurements much more precise than would be consistent with the uncertainty principle. Even though the observer still disturbs the hidden variables in the measuring apparatus, the effects of this disturbance on the nuclear system of interest can in principle be reduced a great deal below the limits set by the uncertainty principle, provided that, for example, the equations become appreciably non-linear at short distances" (ibid., p. 430).
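For readers unfamiliar with the formal background of these remarks, it may help to recall—in the standard textbook presentation rather than in Bohm's original notation—how the causal interpretation rewrites the linear Schrödinger equation, and where the modifications Bohm envisages would enter. Writing the wave function in polar form, ψ = R e^{iS/ħ}, the equation
\[
i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V\psi
\]
splits into a modified Hamilton–Jacobi equation and a continuity equation,
\[
\frac{\partial S}{\partial t} + \frac{(\nabla S)^{2}}{2m} + V + Q = 0,
\qquad
\frac{\partial R^{2}}{\partial t} + \nabla\!\cdot\!\Big(R^{2}\,\frac{\nabla S}{m}\Big) = 0,
\qquad
Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}R}{R},
\]
with the particle velocity given by v = ∇S/m. The "quantum potential" Q is exactly equivalent to the linear wave equation; Bohm's suggestion in the passages just quoted is, roughly, that at distances of the order of 10^-13cm this form of Q—and hence the linearity of the underlying equation—might hold only approximately.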
These deeper non-linear laws, applying at very short distances, would allow for a precise description of matter beyond the threshold set by current quantum theory, envisioning measurements of in principle unlimited precision able to test the predictions of the pilot-wave theory and thus making it falsifiable. This means that, on the one hand, the current forms of QM and the pilot-wave theory do not constitute the final word about the structure of matter, being valid only in a precise range of energies. On the other hand, the possibility of extending the causal interpretation makes it possible to test the hidden-variable hypothesis, which, if confirmed, would provide a decisive argument against the standard formulation of quantum theory (cf. also Section 5 of <cit.>).[Moreover, discussing the possibility of extending the causal interpretation to other energy/length scales, and in connection with the idea of measurements of unlimited precision, Bohm claims that the special theory of relativity also has a restricted domain of application, as we have already underlined in the previous section. Cf. <cit.>, p. 431, Letter 1309.] In Bohm's view this constituted a direct objection against the empiricist basis of QM. At the other end of the spectrum, he rejected another perspective, which he later called "mechanistic philosophy" (cf. <cit.>), according to which reality can be fully explained starting from a fixed set of entities and a restricted set of laws—something close to what philosophers call foundationalism. He warns us not to expect such knowledge "because there are almost certainly more elements in existence than we possibly can be aware of at any particular stage of scientific development. Any specified element, however, can in principle ultimately be discovered, but never all of them" (<cit.>, p. 189). This is a hint of the metaphysical infinitism endorsed by Bohm in the 1950s, to which we now turn.[It is interesting to note that Bohm's view about the infinite structure of reality is influenced by Engels' Dialectics of Nature, as underlined in <cit.>, Chapter 6.]

§.§ The seeds of Metaphysical Infinitism

To the knowledge of the present author, the very first exposition of Bohm's infinitism is to be found in <cit.>, a paper containing replies to the objections against the causal interpretation made by the Japanese theoretical physicist Takehiko Takabayasi. In this essay Bohm explores a variety of issues, among which are the possibility of extending the pilot-wave picture to spin and quantum fields, the reality of the hidden parameters, the features of the ψ-field and the relation between his theory and classical physics. For the purposes of our discussion it is relevant to mention that in this essay Bohm illustrates the aims and scope of the causal approach even more explicitly than in his 1952 papers. In Section 4, in fact, he states that the main goal of the pilot-wave approach is to show that a logically consistent, causal formulation of quantum mechanics is actually obtainable, and that thereby the orthodox view should not be considered the only conceivable interpretation of the quantum formalism. Moreover, Bohm repeats with vigor that his proposal is by no means an effort to provide a final or fundamental ontology for quantum theory—as he stressed several times in his correspondence with Pauli—since there are unlimited possibilities for extending and modifying it. He therefore emphasizes again the limited validity and applicability of his proposal.
Related to this issue, in Section 6 of the paper under consideration Bohm provides an interesting discussion concerning the abandonment of causality in the realm of quantum physics. Contrary to the orthodox view—according to which the empirical successes of QM are to be found in the "renunciation of causality"—Bohm claims that the predictions of the theory are derived from the Schrödinger and Dirac equations together with Born's statistical interpretation of |ψ|^2, which can be provided with a causal interpretation. Thus, he argues, the a-causal philosophy of QM endorsed by physicists supporting positivism is not the key to understanding the empirical successes of the theory. In turn, this fact entails that causality can be maintained in quantum domains and that this notion can play a significant explanatory role. However, Bohm sees a potential objection—implicitly present also in Takabayasi's criticisms—namely that preserving some form of causality in quantum theories would represent a return to a classical, Newtonian type of mechanics. Analyzing this specific issue, Bohm acknowledges that an "unlimited extension of causal laws of the type appearing in classical mechanics would lead to most implausible results" (<cit.>, p. 285). Nonetheless, one should not radically conclude that the concept of causality has to be rejected altogether given the empirical inadequacy of Newton's theory in more fundamental domains. Indeed, such implausible results would emerge not because classical mechanics is inadequate or wrong tout court, but rather because of an unjustified assumption of the universal validity of this theory and its laws. These difficulties would disappear as soon as one admits the restricted validity, and thereby applicability, of Newtonian laws to the classical domain.[Notably, similar points were also raised in the conclusions of the 1952 papers, where Bohm underlined that "our epistemology is determined to a large extent by the existing theory. It is therefore not wise to specify the possible forms of future theories in terms of purely epistemological limitations deduced from existing theories" (<cit.>, p. 188). Bohm wrote this sentence in relation to the limited applicability of physical theories, explaining that what is observed depends on the theory at hand—following <cit.>—and that it is therefore a dangerous move to extend laws and concepts, and their normative power, to domains outside the scope of validity of a given theoretical framework.] Interestingly, Bohm underlines that in order to establish whether a Newtonian type of law would be valid at deeper levels, one would have to rigorously analyze such more fundamental layers of reality and verify "to what extent a simple theory resembling Newton's laws of motion may be valid there". The consistency and empirical adequacy of the pilot-wave approach then suggest that causal types of laws are possible even in some quantum domain. Therefore, it is not a priori impossible to extend the notion of causality to—and hence to find causal laws in—the quantum realm: "[f]or there is nothing intrinsically wrong with classical types of laws, as long as we do not try to extrapolate them unjustifiably by imagining (with Laplace) that they furnish a final theory, or at least a final general framework, within which the details only remain to be filled in" (ibid.). To express this latter point even more clearly, Bohm resorted once again to the history of physics.
He argued that every time a final theory or a final truth was thought to have been achieved, new and more complex levels of reality, phenomena and entities were discovered, overturning those conclusions based on an improper universalization of laws that were in fact appropriate only in specific domains. The transition between classical and quantum physics provides a useful example.[As Bohm correctly highlights, "conclusions drawn only within the limited domain of the previous laws were however never overturned" (<cit.>, p. 286).] Taking historical evidence seriously into account, then, one should not believe that with quantum mechanics or quantum field theory we have achieved a final theory of matter. On the contrary, past scientific theories indicate that it is unlikely that we will ever achieve experimental evidence for a framework valid at all levels. Hence, says Bohm, it is necessary to formulate our theories in such a way that we explicitly recognize the possibility of an inexhaustible number of new levels, in which entirely new types of laws may be needed. If we do this, then even if we discover that simple Newtonian types of laws do hold in the domain of 10^-13cm, we know that the final course of the world is not necessarily determined "mechanically" by such laws. For there is a continual interaction between all levels; and the more complex laws that may be appropriate to the unlimited number of new levels (which we have hardly even begun to scratch) could easily invalidate the conclusions coming from the unfounded extrapolation of Newtonian laws to all levels. Thus, the unsatisfactory aspects of Newtonian types of laws are not present in a theory that limits itself to a finite domain, in which it might hope to verify such laws; but are present only when we try to fit all possible future human knowledge into the limited conceptual framework of these laws (<cit.>, p. 286). Notably, in a footnote to the above quotation Bohm explicitly says that "below the level of 10^-13cm probably lies still another level, etc. ad infinitum", which completes the very first published illustration of his view of reality—as disclosing an infinity of different layers—and of science itself, whose goal is not to find absolute and universal truths, but rather to find the correct types of laws and entities at every given level. Thus, since Bohm denied that reality has a fundamental bottom ground—thereby rejecting any sort of foundationalism—we can claim that he endorsed a form of metaphysical infinitism.[More details will be given below. For an interesting discussion of foundationalism and infinitism cf. <cit.>.] It is interesting for our discussion to point out that such ideas were already present before the publication of the 1952 papers, as shown by Bohm's letter to the mathematician Miriam Yevick dated 28 November 1951: "Another important concept that must be gotten across is that of the infinite number of levels, that must be used in describing the behaviour of matter. Such a point of view automatically prevents us from closing our concepts, at any particular level" (<cit.>, p. 207). A few months later, on 7 January 1952, Bohm sent her another letter explaining how the diachronic existence of things depends on the motion of infinitely many layers of reality: How then do we explain the prevalence of change and the transiency of material things? This is done by the notion of endless transformation.
The “things” at each level, are made up of smaller “elements” at a more fundamental level, and it is the motion of these more fundamental elements (not usually directly visible to us, except with the aid of elaborate scientific research) which causes the appearance and disappearance of the “things” existing at a higher level. These more fundamental “elements” however, cannot be permanent, but must be made up of still more fundamental “elements” and so on ad infinitum. Thus, we can see that every “thing” that exists may at some time come into existence and later go out of existence, but there is always a deeper level, in terms of which this change can be viewed rationally as a transformation of a more elementary form of matter, which is not itself basically altered in this particular transformation. Nevertheless, no single “thing” is uncreatable or indestructible. Only matter as a whole in its infinity of properties and potentialities is eternal (<cit.>, p. 227, emphasis added). This quote contains a crucial remark, namely it is clearly stated that the existence of an certain object existing at a given scale depends upon the motion of entities at more fundamental levels. Generalizing this idea, we can say that the entities defined at a precise scale will be ontologically dependent upon the motion of other infinite items living at more fundamental levels. However, despite the ontological dependence of a given level on deeper layers of reality, the reduction to an absolutely primary class of objects and laws is not achievable since in this account there is no such fundamental substratum. The infinitistic view of reality is again illustrated to Yevick in a letter dated 15 February 1952. Here Bohm interestingly refers to his previous work in plasma physics as an influence to his conception of reality. Indeed, he said, the behavior of a given individual object at a certain scale can be described as constrained by a collective motion of substructures present at more deeper levels, so that the particles that we see with present day technology are in fact constituted by aggregates of other items. Inferring inductively the existence of an infinite number of layers, then, each individual object at a particular level will be “discovered to be collectively conditioned”. From this claim, one draws the general conclusion that (i) the most fundamental individual items of our current science will be discovered to be collectively conditioned by lower level objects and motions, and (ii) that our “universe cannot be analyzed into a series of components, each of which are the constituents of the next higher level, and each of which determine the higher levels in a purely analytic way. For the higher levels will also always help determine the character of things that may exist at the lower levels. Thus, every level is in a sense, just as real as every other, since the “whole picture” cannot be deduced by starting at the “lowest level” and working upward” (<cit.>, p. 246). Similar concepts are expressed in a letter sent in early 1952 to Hanna Loewy. Here we read even more clearly about the ontological dependence relations existing between levels as well as the reasons for which a reduction to a fundamental ontology would be untenable: all matter contains an infinity of qualitatively different levels, all interconnected. Moreover, there is another interesting point. The so called “particles” of any given level are made up of structures in the “particles” of the lower level, etc. ad infinitum. [...] 
Because of the infinity of levels, you cannot say that there are any ultimate "individuals", which are "fundamental" in the sense that their character is unalterable, and their existence eternal. At any level, any particular form of matter can always come into existence & go out of existence as a result of a transformation in the components existing at a lower level, but only matter as a whole, in its infinity of properties and possibilities, is eternal. [...] I should also add that I believe that no law is absolute or final, but that each law provides a successively better approximation to an absolute truth, that we can never possess in a finite time, because it is infinite in all its aspects, both qualitative and quantitative (<cit.>, pp. 123-124). This letter is important for the purposes of the present paper, since it provides further evidence that Bohm endorsed a form of scientific realism: according to him, as science progresses, better approximations to an absolute or noumenal truth about reality are discovered, although a full comprehension of it will never be achieved. Moreover, from this quote one can understand that, in spite of physical theories being adequate only within a certain domain of application, they should not be considered false, but rather relatively or partially true—as we have previously seen in discussing the limited applicability of Newtonian mechanics. Notably, given that for Bohm every theory is only applicable in a certain specific range of energy/length scales, each framework can be only partially true. Consequently, one should not claim that classical mechanics is wrong because it has been proved inadequate to describe empirical phenomena at microscopic regimes, but rather recognize that its inability to represent certain experimental facts tells us the limits within which such a theory is (or is not) a correct description of the world. Referring to this, another letter sent to Yevick on 31 March 1952 is relevant to clarify a further aspect of infinitism. From this correspondence one gathers that she had tried to understand the latter in terms of Cantor's theory of transfinite numbers. Bohm's reply underlines that while Cantor's infinities consist of a collection of discrete, separate individuals all similar to each other, the levels of reality he is speaking about are all qualitatively distinct, so that each one must be treated independently of the others. More importantly, says Bohm, his view of reality would avoid the mechanistic and deterministic philosophy of nineteenth-century physics as well as the a-causal metaphysics of quantum theory. On the one hand, the infinite number of levels entails by construction that nature cannot be explained by, and reduced to, a finite number of fundamental entities and laws—against the mechanistic materialist spirit of the pre-quantum era. On the other hand, a causal description of physical phenomena can also be retained in the quantum (and sub-quantum) domain, as explicitly shown in his 1952 papers. In a nutshell, as Bohm wrote, "although each level is causal, the totality of levels cannot ever be taken into account. Thus, as a matter of principle, we say that complete determinism could not even be conceived of, yet, each level can be determined" (<cit.>, p. 254). In this specific regard, in one of the final passages of the letter he explained the difference between causality and determinism, two notions that, although tightly related, are not equivalent.
The former, to be understood as efficient causality at this stage of Bohm's career, entails that, knowing the cause of a certain fact, we know that its effects will follow. Conversely, if we manipulate and modify the causes, we thereby change the effects "in a predictable way". The latter notion, on the other hand, implies only predictability, but not the possibility of changing initial conditions. If reality could be described in a finite number of levels, then causality would be equivalent to determinism—i.e. the future would be logically contained in the present, writes Bohm. On the contrary, by stipulating the existence of an infinite number of layers, we cannot in principle "conceive the world as completely determined". These ideas continued to be developed in the following years, in which it became even more evident that infinitism was essential to avoid a completely deterministic and mechanistic perspective, according to which the world would be reducible to a set of basic primary entities. The following letter provides yet another demonstration that Bohm's project was not to restore a deterministic worldview, contrary to what Pauli, Rosenfeld and others erroneously thought. Interestingly, in his correspondence with Melba Phillips dated 15 March 1954 he explained the logical relations between causality and a mechanistic worldview: it is necessary to sharpen the distinction between causality and mechanism (or deterministic mechanism). Mechanism is characterized by two fundamental aspects: (1) Everything is made of certain basic elements which themselves never change in essence (i.e., qualitatively). (2) All that these elements can do is to undergo some quantitative change according to some fixed laws of change. For example, if they are bodies, they can move in space. If they are fields, they can change their numerical values, etc. But the basic elements themselves never undergo qualitative change. If we postulate an infinity of levels, then we make a step beyond mechanism. For the elements existing at each level are made of still smaller elements in motion (i.e., changing quantitatively), and the mode of being of the higher level elements arises out of the motions of the lower level elements. Thus, there are no elements that can never change. Indeed, even if we have a finite number of levels, some qualitative change is possible within a mechanistic theory. For example, with atoms in chaotic motion, we obtain new large scale properties, such as pressure, temperature, etc., new entities, such as gas, liquid, solid, and qualitative changes between them. Now, at first sight, it may seem that we could eliminate the large-scale level by analyzing it in terms of its basic molecular motions. And if there were a finite number of levels, this would be true. But if there are an infinite number, then each level stands on a footing that is, in the long run, as basic as that of any other. For every level has below it a deeper one. Indeed, matter can be regarded as made up of the totality of all levels" (<cit.>, p. 170). From this quote we can infer that any layer of reality should be treated independently of any other, although dependence relations do exist, as we have already underlined in this section. Notably, the laws of a given theory are insensitive to the motions at more fundamental levels. This emphasizes the pluralistic ontological views supported by Bohm (cf. the next section).
In this specific regard, Bohm claims that although one may often infer many features of a certain set of objects by studying the behaviour of its components, there are cases in which "there may be properties that cannot so be deduced. Not only may these properties be peculiar to a given level, but they may involve "crossing" of levels. For example, the general large scale conditions, such as electric field, gravitational field, may actually change the conditions of existence of smaller particles, such as electrons, neutrons, etc., so that in strong fields, the very "elementary" particles into which we now analyze matter would change. Thus, there can be a reciprocal influence from a higher to a lower level, which by itself would make impossible a complete analysis of all properties of the higher level in terms of the lower" (ibid.). Hence, Bohm concludes, each level contributes in its own way to the totality of reality. To close this section, it is interesting to note that Bohm also discussed his views about the infinite richness of nature with Einstein. Let us quickly contextualize their exchange, since it provides a nice summary of Bohm's views about quantum theory and the structure of reality at the end of 1954. In a letter dated 28 October 1954 Einstein wrote to Bohm that no effort so far made to complete quantum theory had been satisfactory, not even his own attempt at generalizing the law of gravitation so as to include the atomistic and discrete nature of quantum systems.[Einstein discussed at length with Bohm the merits and problems of the causal interpretation. However, he did not find the pilot-wave theory an adequate solution to the problems of QM. For details cf. <cit.>.] He closes by saying that if a fundamental ontology of fields cannot be given as a foundation for an objective description of reality, then the notion of the continuum should be abandoned altogether, even in the context of space and time. On 14 November 1954 Bohm replied to Einstein with a long letter. In the first place, he takes a different position from the father of relativity, claiming that the possibilities of an objective description of nature in terms of continuous notions have not all been analyzed and exhausted. In particular, he claims that studying the macroscopic structure of reality—in this case starting from general relativistic considerations—brings little advantage in the search for the correct laws of the microscopic regimes. In fact, while the microscopic structure of the world is only very weakly reflected at macroscopic regimes, by analyzing the microscopic structure one obtains greater insight into large-scale phenomena and laws, which are often statistical approximations of the microscopic ones. Thus, he was doubtful that looking for the correct quantum laws starting from macroscopic field laws would be a fruitful methodology. The first part of the letter is interesting for us since it provides useful insights for better characterizing Bohm's thoughts about the structure of reality. Indeed, although our universe is composed of an infinite number of levels, there are relations among them. More precisely, less fundamental levels are ontologically dependent on more fundamental ones, although they are not, strictly speaking, reducible to them, because of the infinite chain of layers. In addition, Bohm explicitly affirms that macroscopic laws, i.e. roughly speaking those of classical physics dealing with larger scales, are statistical approximations of microscopic structures and laws.
In the second place, and related to the first point, Bohm affirms once again that in his view there would be a sub-quantum level beyond QM, characterized by continuous and causally determined motion. The basic entities of such a sub-quantum level would be a set of fields obeying non-linear equations; these fields need not be defined like other classical fields; rather, the latter—as well as the usual quantum mechanical wave function—would emerge as averages over the motion of this deeper entity. What happens at the quantum level, he conjectures, would be determined by the evolution of a yet-unknown but qualitatively novel set of entities. In particular, the Schrödinger equation would describe averages of the dynamical effects of sub-quantum entities. More specifically, the relation between the sub-quantum and quantum scales can be compared to the relation between Brownian motion and "the atomic level. In other words, events at the atomic level are contingent on the general irregular motions of some as yet unknown but qualitatively new kind of entity, existing below the atomic level". In this context the ψ function is to be conceived as an average of the dynamical motion of the lower-level fields, and, assuming that "the basic fields undergo a rapid, quasi-ergodic type of fluctuation, then with reasonable assumptions about these fluctuations, one obtains the Schrödinger equation as an average equation, satisfied by a suitable mean of the ψ function". Interestingly, Bohm claims that although quantum theory would emerge as a statistical average from this deeper level, its laws would be insensitive to the precise forms of the more fundamental dynamical equations valid at the sub-quantum level, thereby highlighting the autonomy of the quantum level with respect to the more fundamental sub-quantum one—and, generalizing this argument, the autonomy of each level. This fact in turn would show that it is very unlikely that we will be able to deduce or infer the motion and behavior of the more fundamental fields from the non-relativistic quantum regime, since what is relevant at quantum scales are averages of sub-quantum dynamical evolutions. On the contrary, Bohm underlines that once one has knowledge of the lower level, one can more easily derive conclusions about the quantum level. Concluding the letter, Bohm eventually expresses his infinitistic view about reality[From Einstein's reply, dated November 24, 1954, we understand that he did not share Bohm's views about "an unending hierarchy" of structures and laws, preferring a methodology based on logically simple laws with general validity.]: On the whole, I do not find the idea of avoiding the continuum of space and time very plausible. I think rather that the continuum is infinitely rich in qualities. In other words, below any given type of level will always be a new level of motion and structure, so that each type of entity contains within it new types of entities that are still smaller. In general one may expect that irregular quasi-ergodic motion is characteristic of all the levels. Thus, every level will be subject to chance for fluctuations arising from the lower level motions. Nevertheless, there will be no limit to the application of causality, and to the possibility of making an objective description of these various levels of motions and of being. (Courtesy of the Birkbeck College Archives).
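The Brownian analogy invoked in this letter can be made a little more concrete by recalling the textbook case it alludes to; the following is offered purely as a modern illustration of the kind of level-to-level relation Bohm has in mind, not as a reconstruction of his proposed sub-quantum dynamics. A Brownian particle of mass m obeys the Langevin equation
\[
m\,\frac{dv}{dt} = -\gamma\, v + \eta(t),
\qquad
\langle \eta(t)\,\eta(t')\rangle = 2\gamma\, k_{B} T\,\delta(t-t'),
\]
where η(t) encodes the rapid, irregular molecular impacts. On time scales long compared with the relaxation time m/γ, the probability density ρ(x,t) of the particle's position obeys the diffusion equation
\[
\frac{\partial \rho}{\partial t} = D\,\nabla^{2}\rho,
\qquad
D = \frac{k_{B} T}{\gamma},
\]
whose form is completely insensitive to the particular realization of the fluctuations η(t): only their statistical properties matter. This is the sense in which, on Bohm's picture, the Schrödinger equation could hold as an "average equation" while remaining silent about the detailed sub-quantum motions.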
As we can learn from the material studied in this section, Bohm continued to develop his ideas on infinitism during the years 1951-1954. We can then consider the reply to Takabayasi's paper, as well as the letters to Yevick, Loewy, Phillips and Einstein, as part of the preparatory work for the book Causality and Chance in Modern Physics, which contains the most detailed illustration of Bohm's infinitistic views about the inherent structure of reality.

§.§ Causality and Chance in Modern Physics

In 1957 Bohm published Causality and Chance in Modern Physics, which can be considered one of the pillars of his scientific and philosophical production.[From now on we will refer to this book simply as Causality and Chance.] It perfectly synthesizes his metaphysical reflections about the structure of reality, scientific theories, laws and causality, and it provides the basis for understanding the directions that Bohm would explore in later years, for instance the philosophy of processes, developed mainly in the 1960s with the volume <cit.>, and holism, initiated in the early 1970s and culminating in the books <cit.> and <cit.>. Causality and Chance begins with a discussion of causality which serves as a foundation for every other thesis defended by Bohm. Interestingly, he states a general principle—prior even to the notion of causality—according to which "everything comes from other things and gives rise to other things" (<cit.>, p. 1). This statement is inferred from the empirical observation that in nature nothing remains constant; rather, everything is in perpetual modification and transformation, coming from something that existed before. Such a principle then prepares the ground for the notion of causality. In the study of nature, in fact, scientists have found (and continue to find) patterns of connections and relationships between events, objects and phenomena which are necessary in the following sense: (i) every time we observe the fact A, the effect B will follow, and (ii) if we modify A, the effect B will be modified accordingly, in a predictable way. This is in essence the notion of causality at play in the book. Discussing a wide range of examples of causal relationships, from medicine to theoretical physics, Bohm claims that, since we can make predictions from them, they are neither the result of random connections between events, nor something we can change at will, so that they represent objective features of our world. Moreover, in this philosophical framework such necessary or constant relationships define the way things are and behave, and are labeled causal laws. He writes for instance: "[t]he fact that such predictions are possible shows that the causal laws are not like externally imposed legal restrictions that, so to speak, merely limit the course of events to certain prescribed paths, but that, rather, they are inherent and essential aspects of these things. Thus, the qualitative causal relationship that water becomes ice when cooled and steam when heated is a basic part of the essential properties of the liquid, without which it could not be water. Similarly, the chemical law that hydrogen and oxygen combine to form water is a basic property of the gases hydrogen and oxygen, without which they could not be hydrogen and oxygen".
Analogously, he claims that since causal laws are essential in order to define the qualities and features of physical objects, it would not be possible for us to conceive or discover them if they did not satisfy some sort of nomological regularity and objectivity. Put differently, according to Bohm the simple fact that a certain object has a given attribute implies that it "will react in a certain way when it is subjected to specified conditions (e.g. the red object exposed to white light will reflect mostly red light). In other words, the causal laws that a thing satisfies constitute a fundamental and inseparable aspect of its mode of being" (<cit.>, p. 10). Interestingly, Bohm crucially underlines that in defining a certain law one must restrict the description of the causal connection to the relevant factors necessarily involved in the problem under consideration, leaving out the infinitely many other elements having negligible effects on it, since one may always find an infinite number of contributing causes. Thus, the significant or relevant causes of a given effect are those that have an appreciable influence on it in the context of interest. In particular, Bohm stresses that one always deals with incomplete precision when facing a scientific problem, because every single event, process or phenomenon de facto depends on an infinite series of factors. However, the majority of them do not pertain to the context under consideration and can be neglected. These negligible contributions will then cancel out and will not produce appreciable effects, so that it is actually possible to study a problem to a good approximation, without taking into consideration the actual infinity of elements which would be needed in order to obtain a completely perfect description of a phenomenon or the prediction of a certain result. In turn, this entails that such perfect representations of physical phenomena are not achievable, and that our laws are always approximate descriptions of physical reality, i.e. abstractions from the real processes taking place in the world.[For more detail on the various kinds of causal relations the reader may refer to <cit.>, Chapter 1, Sections 7 and 8.] From Bohm's analysis of causal laws, then, we can deduce the following conclusions, which are essential for understanding his philosophical views:

* Causal laws individuate constant relationships in a given context that represent objective features of reality;

* They are fundamental in order to define properties of physical objects at the relevant scales. This entails that a law cannot be employed to characterize the attributes of items appearing in more fundamental theories, where new types of nomological relations occur;

* Similarly, they provide approximately correct descriptions of a certain set of events, processes and phenomena within a specific range of energy/length scales. As Bohm underlines, consequently "any theory extrapolated to an arbitrary context and to arbitrary conditions will (...) lead to erroneous predictions. The finding of such errors is one of the most important means of making progress in science. A new theory, to which the discovery of such errors will eventually give rise, does not, however, invalidate the older theories. Rather, by permitting the treatment of a broader domain of phenomena, it corrects the older theories in the domain in which they are inadequate and, in so doing, it helps define the conditions under which they are valid (<cit.>, p. 21, emphasis in the original);
* Consequently, causal laws do not represent absolute truths, because they cannot be applied universally, i.e. without approximation, to every specific context at every scale. Thus, the goal of science is to find laws and theories which are progressively more fundamental and accurate.

Interestingly, he presents his own metaphysical approach in contrast to the mechanistic philosophy usually attached to classical physics. Indeed, the second chapter of the book is devoted to the exposition of the main tenets of such a metaphysical account. These can be summarized simply by saying that every portion of reality can be reduced to a finite set of absolutely fundamental laws and objects, so that every other physical entity, event and phenomenon can be explained in terms of these basic constituents. Hence, no new qualitative features of matter can arise from nature's elementary ingredients, which determine and exhaust all the possible modes of being. In particular, the philosophy of mechanism was extrapolated from Newton's theory of mechanics (mainly by Laplace) by assuming its universal applicability to every domain and layer of reality. Contrary to this view, Bohm argues that Newton's laws of motion form the formal basis of the science of mechanics, which by itself does not provide a complete determination of the future behaviour of the whole universe. The premise that everything must be described by, and fall within the domain of application of, Newtonian mechanics thus represents a mere projection of Newton's theory onto all possible contexts and domains of application. However, this generalization is not grounded in scientific facts; rather, it is the consequence of a philosophical edifice that conceived of our universe as a mechanism built upon a restricted set of entities and laws. Therefore, Bohm claims that "mechanism cannot be a characteristic of any theory, but rather, as we have already stated above, a philosophical attitude towards that theory. Thus, it would have no meaning to say, for example, that Newtonian mechanics is mechanistic; but it has meaning to say that a particular scientist (e.g. Laplace) has adopted a mechanistic attitude towards this theory" (<cit.>, p. 25). In the fifth and last chapter of Causality and Chance we find a detailed tripartite objection against mechanistic philosophy. In the first place, the history of physics disconfirms the basic tenets of this view, since the revolutions that have occurred from Newton to this day radically changed the structure of physical theories, introducing entities and laws in open contrast with those of classical physics. Moreover, Bohm notably argued that, in virtue of the crisis that physics was facing after the Second World War—in particular the several issues affecting quantum field theory—future theoretical frameworks would be as revolutionary as QM had been compared to classical mechanics. In the second place, assumptions concerning the final character of any particular ontology are neither necessary nor empirically provable, because future theories may demonstrate their limited validity. Referring to this, Bohm exemplifies his argument by considering the transition between classical and quantum mechanics, pointing out that "Newton's laws of motion, regarded as absolute and final for over two hundred years, were eventually found to have a limited domain of validity, these limits having finally been expressed with the aid of the quantum theory and the theory of relativity" (<cit.>, p. 90).
Finally, the mechanistic philosophy contravenes the principles of the scientific method, since the latter requires that every object and law be continuously subjected to verification. This process of testing may well end up revealing contradictions with new discoveries or new domains of science. Looking at how physics has evolved, claims Bohm, such contradictions have not only systematically appeared, but have also led to a deeper comprehension of the world. Contrary to the mechanistic philosophy, he proposed a version of metaphysical infinitism according to which there is no bottom ground of fundamental entities and laws upon which everything else depends. As we have seen in the previous section, an important feature of Bohm's infinitism consists in rejecting the universality of any known ontology and set of laws, i.e. in rejecting the idea that a certain class of objects and causal relations can be successfully and perfectly applied to every level of reality. In essence, Bohm stated that, looking at (i) the historical evolution of the physical sciences, (ii) the available experimental data, and (iii) the then-current crisis of theoretical physics that was shaking the foundations of the quantum theory of fields, one is pushed to endorse a conception of nature constituted by an infinity of different entities and causal relations (<cit.>, p. 91). According to this view of science, physical theories do not always lead us closer to a fundamental ground, but instead show the infinite complexity of our universe. Furthermore, he believes that empirical data cannot a priori provide any justification for metaphysical restrictions singling out a particular set of items as absolutely ontologically independent. On the contrary, according to Bohm's infinitism, scientific practice always discloses new entities, laws and phenomena, which contribute to our continuous, never-ending process of understanding the limitless structure of reality. However, although Bohm denied the existence of a fundamental level, he firmly believed that every theory must be ontologically unambiguous in its domain of application. Therefore, theoretical frameworks must provide a clear ontology to be applied at the relevant energy/length scale, meaning that (i) the terms appearing in the vocabulary of a physical theory should refer to, or denote, objects existing in the world, and (ii) the entities forming the basic ontology of a given theory have to be considered relatively fundamental. Referring to this, he stated that [a]ny given set of qualities and properties of matter and categories of laws that are expressed in terms of these qualities and properties is in general applicable only within limited contexts, over limited ranges of conditions and to limited degrees of approximation, these limits being subject to better and better determination with the aid of further scientific research (<cit.>, p. 91). Thus, a well-defined physical theory should provide a clear ontological picture for the domain in which it is a reliable description of physical phenomena. Nonetheless, its ontology and laws may be substantially modified with the progress of scientific research. This certainly exemplifies Bohm's scientific pluralism and his heterodox metaphysical views with respect not only to the dominant paradigm concerning the interpretation of QM (cf. <cit.>), but also to a reductionist conception of science.
Related to this, Bohm acknowledges that infinitism is a metaphysical thesis that cannot receive direct empirical confirmation, exactly like mechanism. However, the former is more adherent to scientific practice since it seriously takes into account the crucial role played by boundary conditions and approximations in setting the limits to the validity of each physical theory. Moreover, assuming an infinity of layers of reality, one may apply a mechanistic viewpoint at each level while nonetheless avoiding a strong reductionist perspective as well as absolute determinism, as we have already seen in several letters in the previous section. Speaking about reductionism, it is worth noting that although there are relations of ontological dependence among levels, it is possible to study and describe each layer independently of the others. Already in the second chapter of the book Bohm explained the autonomy of levels through examples taken from classical statistical mechanics. Indeed, it was pointed out that the kinetic theory of gases was one of the first examples in which large-scale, macroscopic regularities, albeit ontologically dependent upon the microscopic molecular structure of the gas, were independent with respect to the precise details of the molecular motions occurring in the microscopic regime—in fact, a given macro-state can be multiply realized by an infinity of micro-states. In this precise regard, Bohm claimed that macroscopic average quantities (such as the mean number of molecules in a given region of space or the mean pressure on a given surface) are extremely insensitive to the precise motions and arrangements in space of the individual molecules. This insensitivity originates, at least in part, in the fact that an enormous number of different motions and arrangements in space can lead to practically the same values for these quantities. [...] Because these mean values depend almost entirely only on the general over-all properties of the molecules, such as the mean density, the mean kinetic energy, etc., which can be defined directly at the large-scale level, it becomes possible to obtain regular and predictable relationships involving the large-scale level alone. It is clear that one is justified in speaking of a macroscopic level possessing a set of relatively autonomous qualities and satisfying a set of relatively autonomous relations which effectively constitute a set of macroscopic causal laws (<cit.>, p. 34). Examples of such autonomy of levels can be found very easily in physics: for instance, one may consider that non-relativistic quantum theory is insensitive to the internal structure of nuclei, which are instead studied at the level of the quantum theory of fields, or that the latter theory is insensitive to the events and phenomena taking place at Planck's scale, etc. Thus, one may fairly say that at the various levels of description one finds a relative autonomy of behavior, so that one can study a given set of entities, laws and relationships “which are characteristic of the level in question” (ibid.). 
Referring to the relative independence of each layer of reality, Bohm interestingly claims that although every entity, process or phenomenon is dependent upon an actual infinity of other qualities and relations which are all interconnected—here we can see the seeds of Bohm's holism, which will become primary in later stages of his work—one of the essential problems of science is to practically disentangle such causal relationships in order to be capable of dealing with subsystems of the universe and particular sets of causal laws. In fact, he continues, it is a crucial task for scientists to individuate those entities and laws at a given level which are able to influence other things without themselves being significantly influenced. For instance, taking into account the example of the kinetic theory of gases considered a few lines above, one may say that despite the tight connections between macroscopic and microscopic states, the former possess a relative autonomy in their modes of being; therefore, one can study their features and behavior independently of their internal microscopic structure. One of the most philosophically significant aspects of Bohm's views emerges from two contrasting features of his view that we just mentioned, namely the relative autonomy of levels of description on the one hand, and the actual dependence of physical objects and processes upon an infinity of other entities and causal relations on the other hand. From this apparent tension he remarkably concluded that the notion of “thing” or “object” is an idealization and an abstraction from the infinite background of structures that provide a given entity its conditions of existence. Therefore, he affirms that the notion of the infinity of nature leads us to regard each thing that is found in nature as some kind of abstraction and approximation. It is clear that we must utilize such abstractions and approximations if only because we cannot hope to deal directly with the qualitative and quantitative infinity of the universe. The task of science is, then, to find the right kind of things that should be abstracted from the world for the correct treatment of problems in various contexts and sets of conditions. The proof that any particular kinds of things are the right ones for a given context is then obtained by showing that they provide us with a good approximation to the essential features of reality in the context of interest (<cit.>, p. 100). Given the practical and conceptual impossibility of dealing with the infinite complexity of reality, Bohm believes that physical theories should be considered abstractions from the actual structure of our universe, capturing only the relevant and essential qualities and processes with which we can achieve a faithful—but always approximated—description and knowledge of nature. With the progress of science, then, we will achieve increasingly better representations of matter, although a one-to-one correspondence between our physical theories and reality will never be obtained. This conclusion brings us to analyze the last section of Causality and Chance, dealing with the notions of truth and objective reality, which will be important for the remainder of this essay. There Bohm drew the logical conclusions from all his previous arguments. 
Given that reality is composed of an infinite multiplicity of layers, whose entities, laws and relations are all reciprocally interconnected, and given that such levels are relatively autonomous, each theoretical framework will be able to uncover only relative truths, valid at certain levels but never universally applicable and/or generalizable. Nonetheless, as already underlined in this section, physical laws do represent objective aspects of reality, for they describe necessary connections between entities, events, phenomena, etc., which are independent of our minds, wills or “the way in which we think about things”. § THE LOCAL REALIST VIEW As we have seen several times in the previous sections, Bohm often provided evidence for his arguments from the history of physics, as would become customary in philosophy of science in the following years. The mechanistic view is in fact criticized for its unwarranted universalization of Newtonian mechanics, extending its application to every scale. Contrary to this philosophical perspective, Bohm emphasized that so far every physical theory has been shown to be a reliable description only of some limited portion of our universe, i.e. valid only within a specific interval of energy/length scales. Hence, theoretical entities and laws describing a certain layer of it are only relatively fundamental—i.e. given that each level can be studied independently, its objects and dynamical laws will be considered fundamental only at that particular level. Therefore, one inductively infers that future theories will only be valid within certain specific domains of application. Moreover, because the history of physics shows that scientific investigations have always disclosed new features and levels of reality, he claimed that its structure is limitless and not reducible to an absolute bottom level. Hence, we will be unable to embrace the infinite complexity of nature with a finite set of entities and laws, so that no final theory of everything can possibly be found. Let us call Bohm's argument the infinitistic meta-induction. According to this meta-induction, scientific theories progressively come closer to a true description of reality with increasingly better approximations, expanding our knowledge of the world by discovering new layers of nature. It is worth recalling once more that Bohm was a metaphysical realist, namely he thought that the external world exists independently of our minds, knowledge and possible observations of it. Indeed, reality is explicitly defined in Causality and Chance as the totality of existing matter, laws and relations in their continuous process of becoming (cf. <cit.>, pp. 114-115). However, given the ideas expressed in the latter book, it is not trivial to claim that Bohm was a full-fledged scientific realist, since he did not believe that the sentences or statements expressed by scientific theories are literally true, as we have already pointed out in the previous section. In particular, one must take into account that our best theoretical frameworks are abstractions, i.e. idealized representations of the real objects and causal relationships actually constituting our universe. Let us then try to understand what kind of scientific realism best approximates David Bohm's perspective, relying on the main philosophical conclusions individuated in the writings analyzed so far. 
In the opinion of the present author, one may interpret his approach as a form of local realism, according to which it is possible to maintain a realist ontological commitment towards the theoretical entities and laws appearing in our current physical theories, without being committed to a fundamental, scale-invariant ontology. According to such an account, given a particular theory T, one would be uniquely committed to the existence of those entities and laws in T with a direct physical meaning; however, such commitment would be constrained by T's specific domain of application.[Another label for Bohm's view could have been “internal realism”; we did not employ this name here since it may generate ambiguities with Putnam's internal realist view.] For instance, if the non-relativistic quantum regime were correctly described by the pilot-wave theory, one would be ontologically committed to the presence in the world of quantum particles with certain definite properties, e.g. position and velocity, as well as of a new kind of field, namely the ψ function. Considering instead classical electromagnetism or general relativity as valid representations of other levels of reality, one would accept the actuality of different kinds of physical fields. Nonetheless, existential claims implying the reality of entities contained in the vocabularies of these theories can be considered approximately true only within their respective domains of validity. Consequently, the ontological commitment implied by the mentioned frameworks is limited to the specific length scales in which they can be considered approximately reliable descriptions of nature. Moreover, Bohm argued that the ontology of a certain theory may be subject to substantial modifications with the progress of scientific research. For example, he underlined several times that the metaphysical content of the causal interpretation may vary significantly at the sub-quantum level. Similarly, Maxwell's and Einstein's theories cannot be extended to more fundamental domains, e.g. to the quantum regimes, without substantial modifications of their metaphysical content and laws. Hence, according to this form of local realism, the fundamentality of a given ontology will always be relative to the particular theory at hand and bounded by its limited range of application. This feature accurately portrays Bohm's belief, since he stated that any given set of entities and laws is in general applicable only within “limited contexts, over limited ranges of conditions and to limited degrees of approximation”, as we have already seen. It is worth noting that, endorsing this local realist view in the context of metaphysical infinitism, one has to accept the possibility of ontological discontinuity between physical theories: the entities and laws defining a certain framework may in fact greatly differ with respect to those employed at more or less fundamental scales. This fact, however, caused no trouble for Bohm, who explicitly admitted the presence of contradictory types of motion at different levels: the relative autonomy in the modes of being of different things implies a certain independence of these things, and this in turn implies that contradictions between these things can arise. For if things were co-ordinated in such a way that they could not come into contradiction with each other, they could not be really independent. 
We conclude, then, that opposing and contradictory motions are the rule throughout the universe, and this is an essential aspect of the very mode of things (<cit.>, p. 102). Such claims certainly exemplify Bohm's pluralist views concerning the ontology of physical theories; after all, according to his perspective one cannot resort to a monist metaphysics in order to describe the infinite richness and complexity of nature (cf. also <cit.>). Hence, following local realism as formulated by Bohm, one may conclude that (i) whatever ontology works at a certain regime, it can be modified at more/less fundamental levels, and (ii) ontological inconsistency between theories defined at diverse energy/length scales can be tolerated, given the provisional and fallible character of scientific knowledge. Referring to this, it should be underlined that the possibility of having different ontological commitments at diverse energy/length scales is not a negative consequence of Bohm's proposal, but rather an advantage in understanding the complexity of reality as well as of contemporary science. Indeed, many philosophers in recent years have proposed pluralist views in order to grasp the different notions at play in various physical theories. Interesting examples are given by <cit.> and <cit.>, who argue for the ontological independence of chemistry with respect to quantum physics. Moreover, analyzing carefully the historical evolution of QM and the measurement techniques in particle physics, <cit.>, p. 38 observes that the unity of physics is a semantic rather than an ontological unity. Physics still has a unified language, namely the language of physical quantities, even though the unity of axiomatic theories and their objects has been lost. Taking into account the notions of e.g. “particle” or “field”, one cannot but note that they have completely different meanings when included in classical or quantum theories. Similar considerations can be made about the concept of spacetime. Referring to the tension existing between realism about spacetime in general relativity and a functionalist perspective in quantum gravity, Lam & Wüthrich interestingly claim that a possible way to resolve it is to appeal to a local interpretation of theories, whose commitments may also be divergent: this piecemeal approach leads ourselves to incline towards a geometric structural realism about spacetime in GR, and spacetime functionalism in much of QG. Obviously, there is no requirement that these `locally optimal' interpretations are consistent across contexts. Taking naturalism seriously mandates local interpretations of theories, i.e., their reading needs to start from the theories themselves, rather than from a presupposed and fixed interpretative scheme or set of demands. Given that scientific revolutions may bring with themselves a shift in methods, aims, and values that constitute a scientific paradigm, naturalism prohibits an inflexible a priori commitment to a particular interpretative template. Thus, we believe that scientific realism tempered by naturalism must accept the possibility that our interpretative stances in GR and in QG diverge (<cit.>, pp. 349-350). 
Hence, the ontological unity hoped for by a reductionist or mechanistic philosophy seems to be at odds with the current development of physics, making Bohm's philosophical reflections still relevant and interesting for the present-day discussion about scientific realism.[<cit.> make an interesting case arguing that naturalism leads to local interpretations of physical theories.]^,[Referring to this lack of unity, it should be noted that one may frame local realism in a moderate pluralist account, which contemplates the possibility that the ontological plurality currently present in contemporary physics will be resolved in the future. Indeed, conforming to moderate pluralism, the final goal of every scientific domain is to establish a unique and complete account of the natural phenomena lying within its scope, making it also compatible with other accounts of other scientific domains. Thus, according to this moderate view, different domains of physics may be integrated and synthesized without ontological inconsistencies in future developments of the discipline (cf. <cit.>). This project would however imply the rejection of infinitism; hence, it would not be completely adherent to Bohm's view.] Moreover, it is interesting to note that in the literature concerning scientific pluralism, several philosophers of science claim that incompatible or inconsistent theories can be simultaneously accepted since doing so is methodologically fruitful for scientific inquiry, as Bohm repeatedly stressed—cf. for instance <cit.>, <cit.> or <cit.>. Indeed, da Costa and French argue that the notion of “belief” used by scientists in accepting a given theoretical framework is not the philosophical concept of a true proposition, but a more vague notion reflecting epistemic fallibility. According to them, to accept a theory T is not to believe that T is true, but rather to reason as if it were true. Similarly, Rescher claims that a reason to accept inconsistency among theories is provided by the fact that scientific reasoning is plausible but always fallible; thus, “while consistency should play the role of a regulative ideal, it may at times be sacrificed for the sake of other cognitive values” (<cit.>, p. 12). Finally, let us emphasize that such a local realist view, although cast within a pluralist and infinitistic background, can be properly considered a genuine form of scientific realism. Indeed, it demands metaphysical clarity and explanatory power from physical theories: every framework—as for instance the pilot-wave theory, Newtonian mechanics, Maxwell's electromagnetic theory, etc.—provides a finite set of entities and laws which is responsible for the explanations and predictions of the theory at hand within its well-defined domain of application. Alternatively stated, the theoretical machinery of a certain theory does have a functional role for the explanation of a specific set of phenomena, and the entities appearing in its equations acquire a precise set of attributes only when inserted within a set of laws constraining their behaviour, as already underlined in the previous section (for more details cf. <cit.>, Chapters 1 and 2). In addition, the present form of local realism manifests the tridimensional character that is generally ascribed to scientific realism (cf. <cit.>, Section 1.2). Firstly, it implements metaphysical realism, i.e. the idea that there exists an external world independently of human observers—in particular, we have underlined several times already that Bohm was a metaphysical realist. 
Secondly, from a semantic perspective, Bohm claimed that physical theories provide an approximately true description of natural phenomena, in the sense that scientific laws describe objective and necessary features of the world. The only limitation imposed by his version of local realism is that such descriptions are reliable only within a definite domain of application, i.e. they are not universally valid. Finally, the explanations of physical phenomena given by scientific theories do provide knowledge of the external world. Therefore, we can safely conclude that the metaphysical, semantical and epistemological aspects of scientific realism are conserved within Bohm's philosophical reflections about the aims and scope of physical theories. § CLOSING REMARKS In this essay we have argued that, in order to explain the negative reception of David Bohm's pilot-wave theory, one should take due account of a variety of factors, many of which are not strictly speaking scientific. Indeed, several objections against the causal interpretation stand on purely philosophical grounds, as in the cases of Rosenfeld and, to a lesser extent, Pauli. Moreover, the sociological and historical analyses mentioned in the introduction are key to understanding the hostile attitude of many physicists who were reticent even to admit or to conceive the possibility of alternative formulations of quantum theory. A similar reception, unfortunately, also affected Hugh Everett's relative-state formulation of QM, as clearly illustrated in <cit.> and <cit.>. In particular, studying the correspondence between Bohm and Pauli and the difficult interactions that the former had with Rosenfeld, we have seen how these great scientists misunderstood the main goals and motivations that led Bohm to propose a new reading of the quantum formalism. In fact, we showed that to restore determinism or to anchor physics to outdated ideas were not aims in his agenda. On the contrary, Bohm's letters and published works explicitly show an open-minded thinker who wanted to avoid absolute determinism and the mechanistic philosophy attached to Newtonian physics in the first place. Thus, Bohm's contemporaries evidently had all the necessary material and information not to classify or consider him a reactionary scientist. Secondly, we reconstructed in some detail the evolution of Bohm's ideas about (i) the structure of reality, and (ii) the ontologico-epistemological role of physical theories from the early fifties until the publication of his monograph Causality and Chance in Modern Physics. Reading his private and public production between 1951 and 1957, one cannot but note the originality and depth of his philosophy of science, which can surely be interesting for contemporary discussions about scientific realism and pluralism, as we have argued in the previous section. It is immediately clear, in fact, how accusations of dogmatism (still present today) completely miss the point of Bohm's scientific and philosophical research. In conclusion, let us underline that further work can be done in providing a coherent account of Bohm's complete philosophical trajectory, i.e. putting the ideas presented here in relation with later stages of his career, and thereby with his successive scientific productions. Moreover, it would be interesting to study the connections and relations existing between his philosophy of science and other important perspectives, such as those advanced by Kuhn and Popper, whose work was very well known by Bohm.[Cf. 
<cit.> for an interesting discussion of Bohm's influence on Feyerabend.] Finally, it will certainly be relevant to understand how the modern supporters of pilot-wave theory relate to his heritage. All of this will be material for future research.
http://arxiv.org/abs/2307.04247v1
20230709185338
VR Job Interview Using a Gender-Swapped Avatar
[ "Jieun Kim", "Hauke Sandhaus", "Susan R. Fussell" ]
cs.HC
[ "cs.HC", "H.5.m" ]
Virtual Reality (VR) has emerged as a potential solution for mitigating bias in a job interview by hiding the applicants' demographic features. The current study examines how the use of a gender-swapped avatar in a virtual job interview affects the applicants' perceptions and their performance as evaluated by recruiters. With a mixed-method approach, we first conducted a lab experiment (N=8) exploring how using a gender-swapped avatar in a virtual job interview impacts perceived anxiety, confidence, competence, and ability to perform. Then, a semi-structured interview investigated the participants' VR interview experiences using an avatar. Our findings suggest that using gender-swapped avatars may reduce the anxiety that job applicants experience during the interview. Also, the affinity diagram produced seven key themes highlighting the advantages and limitations of VR as an interview platform. These findings contribute to the emerging field of VR-based recruitment and have practical implications for promoting diversity and inclusion in the hiring process.
[Teaser figure: A two-story virtual office was created using Unity and VRchat, featuring realistic assets. The first floor has a mirror and whiteboard for interview preparation, while the second floor has an interview room for applicants and recruiters.]
§ BACKGROUND Virtual Reality (VR) is an emerging platform that allows users to engage with a simulated environment from a first-person perspective. As it offers rich and immersive experiences that closely resemble face-to-face interactions, organizations across the board (e.g., US Navy, General Mills, and Jaguar) are increasingly turning to VR for interviewing job applicants <cit.>. Earlier studies viewed VR as a viable alternative to traditional interview settings. VR can help individuals feel more focused and less distracted during interviews by controlling external audio and visual cues irrelevant to the interview. On the other hand, the lack of nonverbal feedback in VR makes it more difficult to converse with the recruiter compared to face-to-face interviews <cit.>. While VR can simulate a real-life environment, using gender-stereotyped avatars can perpetuate existing biases in the hiring process. Over the decades, gender has heavily influenced recruitment practices, leading to gender bias where one gender is perceived as more competent than the others <cit.>. Studies have shown that even when male and female candidates possess similar qualifications and experiences, recruiters consistently rate male candidates as more competent and hireable <cit.>. This bias also affects female applicants' chances of being selected for faculty <cit.> and industry positions <cit.>. 
We propose the design of gender-swapped avatars, which represent the gender incongruent with the users' self-identified gender, as the possible solution to address cognitive bias revealed by both applicants and recruiters in job interviews. Using gender-swapped avatars in the virtual interview can conceal the job applicants' physical identity, which can help mitigate (un)conscious bias in the hiring process. A recent study tested if using a gender-swapped avatar helps reduce implicit bias and increase empathy toward women in the STEM field <cit.>. The study found that male participants using a gender-swapped avatar (i.e., female avatars) chose the female candidate more often compared to when they were using a gender-congruent avatar (i.e., male avatars). This highlights the users’ ownership over the virtual body which affects their perceptions and actual behaviors in the virtual space. As VR is becoming a potential venue for job interviews, it is essential to understand how the applicants' avatar representation affects users’ perceptions and behaviors. As the first step to designing a gender-inclusive interview environment in VR, the current study investigates the applicants’ psychological responses when manipulating the avatars' gender which may affect their interview performance. With eight university senior students who prepare job applications and interviews, this study conducted a preliminary experiment investigating the impact of the avatar’s gender on VR experiences, followed by a semi-structured interview that discovers the advantages and disadvantages of VR as an interview platform. Employing a mixed-method approach, we explore three research questions: * RQ1. How does the applicants’ use of gender-swapped avatars affect their perceptions regarding (a) anxiety, (b) confidence, (c) competence, and (d) ability to perform? * RQ2. How does the applicants’ use of gender-swapped avatars affect the recruiters’ evaluation of the applicant’s perceived (a) anxiety, (b) confidence, (c) competence, and (d) ability to perform? * RQ3. How does using an avatar in VR affect the applicants' interview experiences? Our study revealed that participants using a gender-swapped avatar reported a significantly lower level of anxiety compared to those using a gender-matched avatar. Additionally, participants reported that using avatars during the virtual interview helped alleviate their concerns about being judged on their appearance, which enabled them to feel more comfortable and focus on their interview questions. By shedding light on the potential benefits and challenges of using VR in job interviews, our study can provide valuable insights to researchers and practitioners in the CSCW community for designing virtual job interviews. § METHOD §.§ Overview To answer three research questions, we first operated a lab-based between-subject experiment where participants conduct a simulated job interview in the VR using either gender-swapped or gender-matched avatars. The avatar's gender was manipulated by either swapping or aligning its visual features with the participant's self-identified gender (See details in Materials). After conducting the VR interview, the participants evaluated their interview experience in terms of their perceived anxiety, confidence, competence, and ability to perform (RQ1). They also watched the other applicants' recorded interview videos taking the role of recruiter to assess their perceived attitudes (RQ2). 
Following that, a semi-structured interview was conducted to gather insights on the participants' experience of using VR and avatars for job interviews (RQ3). §.§ Participants The university students who are looking for or preparing for job interviews (age; M=23.28, SD=4.02) were recruited from a university recruitment system that grants course credits for participation. Participants (N=8) were made up of five males and three females, and none of them were identified as transgender or non-binary gender. In the study, participants wore head-mounted devices (i.e., Oculus Quest 2) and used hand controllers to navigate the VR space. In terms of VR experience, one had “never” used it, two had “rarely” used it, two used it “sometimes”, and one used it “often”. The remaining two participants did not indicate their VR experience. §.§ Materials §.§.§ Virtual office in VRchat Using the Unity game engine and the VRchat SDK, we built a two-story virtual office (Fig. <ref>). We included decorative assets from the Unity asset store, such as coffee mugs, printers, notepads, and high-quality lights to make the office look realistic. On the first floor, the major interactive component in the office is a large-scale mirror next to a whiteboard that displays the interview questions. By putting a whiteboard in front of the mirror, we intentionally led the participants to see themselves in the mirror often while preparing for the interview, which will strengthen the manipulation of avatar embodiment. On the second floor, connected via a stair, the recruiter's office was furnished with chairs where applicants can sit in front of the recruiters during the interview. §.§.§ Avatar and voice manipulation The Ready Player Me platform allows the automatic creation of avatars with the users’ face images. This automated process eliminates potential biases from researchers when customizing participants' avatars. The platform simulates the avatar's visual features such as race, age, hair color, face shape, and gender to align with the user's characteristics, while allowing the researcher to modify or keep the assigned gender as per the experimental conditions. To minimize confounding factors unrelated to gender, the avatars' attractiveness and likeability were controlled through a pretest. Fig. <ref> shows examples of automatically generated avatars from profile photos <cit.> and gender-swapped avatars. Their full-body avatars simulate eye blinking, individual finger movements, idle body movements, and animate mouth movements. The voice of participants in the gender-swap conditions was manipulated through MorphVOX pro by increasing or lowering their pitch. A voice anonymization filter obfuscated all participants' voices. §.§.§ Job interview questions The job interview questions targeted ‘soft skills’ (e.g., organizational skills, leadership skills, teamwork skills, accountability skills) which are not associated with the participant’s particular job expertise or capabilities. To control the participants' preferences or perceived difficulties, those questions did not require them to reveal personal information and explicit knowledge for the right answers. The questions were refined in several sessions to control the confounding effects. §.§ Procedure Fig. <ref> summarizes the procedure that participants go through during the study. Before starting the VR interview simulation, participants were asked to submit their face photos to generate their avatars used in the VR space. 
The gender of each avatar was manipulated to be either congruent or incongruent with their self-identified gender. The platform also supports non-gendered avatar bodies, but in our sample, all participants self-identified with one of the binary genders. After being informed about how to use VR controllers and voice modulators, participants entered the virtual space and were asked to spend five minutes in front of a mirror to familiarize themselves with the avatar's appearance, gestures, and body movements. Also, they got additional five minutes to prepare the interview questions written on a virtual whiteboard by looking at themselves in the mirror. Then, they were instructed to go upstairs to join the recruiter in the interview room while maintaining their anonymity. The recruiter avatar was represented as a white male controlled by a confederate researcher. The recruiter followed a pre-written script and only provided standardized listening feedback. Participants answered five interview questions which were recorded via the confederate's headset. After exiting the virtual office, participants completed a post-survey to evaluate their anxiety, confidence, competence, and ability to perform. After that, participants watched two interview recordings of other participants randomly selected across the conditions and evaluated their interview performance regarding perceived anxiety, confidence, competence, and ability to perform. Finally, participants were asked about the overall experience of VR interviews, conducting a semi-structured interview for about 10 minutes. §.§ Measures §.§.§ Post-test survey The post-survey questionnaires were used both when the participants evaluate their own perceptions on VR interview and when they evaluate other participants’ interview performance after watching the recordings. The measures include anxiety <cit.>, confidence <cit.>, competence, and ability to perform <cit.> retrieved and adapted from the questionnaires of previous studies, using a 5-point Likert scale (1: Strongly disagree - 5: Strongly agree). §.§.§ Semi-structured interview We used open-ended questions for a semi-structured interview. These questions probe the following topics: specifics about VR experience, virtual reality versus face-to-face for a job interview, the influence of the avatar, and emotions experienced during the experiment. After recording all participants' post interviews answers, we transcribed them with AI software, followed by manual correction by two researchers. To find meaningful themes from the data, we used affinity diagramming to group excerpts from those transcripts into clusters and analyzed relationships between them (Appendix. Fig. <ref>). All clusters had mentions from more than half of the participants. § RESULTS §.§ Quantitative data analysis Given the small sample size (N=8), we employed the Mann-Whitney U test in our study. The Mann-Whitney U test is used to compare differences between two independent groups that are not normally distributed <cit.>. It has the great advantage of being used for small sample sizes of subjects, such as those with less than 15 participants <cit.>, while producing significant results comparable to those of the t-test <cit.>. Table. <ref> presents the Mean Rank and Sum of Ranks for each condition group along with the statistical significance of the participants' perceptions during the VR job interview. 
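As a rough illustration of the analysis just described, the following Python sketch shows how such a Mann-Whitney U comparison between the two avatar conditions could be computed with SciPy; the variable names and ratings below are hypothetical placeholders, not the study's data.

from scipy.stats import mannwhitneyu

# Hypothetical 5-point Likert anxiety ratings per condition
# (1 = strongly disagree ... 5 = strongly agree); not the study's data.
swapped_avatar_anxiety = [2, 1, 2, 3]
matched_avatar_anxiety = [4, 3, 4, 5]

# Two-sided Mann-Whitney U test comparing the two independent groups.
u_stat, p_value = mannwhitneyu(swapped_avatar_anxiety, matched_avatar_anxiety,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")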
Our findings concerning RQ1 indicate that participants who used gender-swapped avatars tended to experience significantly lower anxiety than those who used gender-matched avatars, z=-2.23, p=.02. However, there were no significant differences in perceived confidence, competence, and ability to perform between the two groups. Regarding RQ2, the avatar did not significantly affect how participants evaluated other participants in terms of perceived anxiety, z=-.77, p=.43, confidence, z=-1.17, p=.24, competence, z=-1.85, p=.06, and ability to perform, z=-.47, p=.63. §.§ Qualitative data analysis Based on the interview responses, we identified 7 major clusters: (1) The main advantage of VR interviews people described was the ability to be assessed based on skills and qualifications, rather than appearance. Using an avatar allowed the users to remain good-looking and anonymous with respect to demographic traits such as gender, race, and age. Interestingly, some participants assumed that the recruiters would also be unaffected by the visual appearance of the avatar, as they would be aware that anyone can be anyone behind it. (2) Many people found the virtual interview environment to be immersive and similar to real-life interviews. (3) Anonymity and not being there in person made people feel less stressed and more comfortable than in face-to-face interviews. People who had previously done job interviews preferred VR over face-to-face interviews. Some people also believed that VR could be a useful training platform. (4) However, some people struggled to adapt to the virtual environment and experienced motion sickness during prolonged use. (5) The lack of facial expressions in avatars discouraged some participants, as the limited feedback from the recruiter avatar prevented them from gauging how the interviewer was evaluating them during the interview. (6) Participants highlighted the importance of gestures in establishing a connection with others and feeling present in VR. While some believed that the limited gestures might impact their performance negatively, others assumed that recruiters would take this limitation into account when evaluating the applicants. (7) Lastly, participants emphasized the importance of customizing their avatars based on their self-identity in order to feel present and immersed in the virtual environment. They wanted to dress their avatars appropriately for the job interview, and none of them wanted to create an avatar that did not match their physical appearance. 
This could prove beneficial in practice, demonstrating that even when interview applicants are represented as a different gender, it does not introduce significant bias into the recruiting process. The qualitative interview data revealed the advantages and limitations of VR as an interview platform. Consistent with the findings reported in previous qualitative research <cit.>, users who conducted VR interviews felt comfortable and less self-conscious, enabling them to focus on their interview answers. While most of the participants were impressed by how similar the virtual interview space was to the real-life setting, VR devices still have a limited capacity to simulate participants' real movements at a fine-grained level, resulting in a lack of nonverbal cues. Some participants mentioned that gestures are critical for natural interactions and a sense of presence with the recruiter avatar. Also, VR limits understanding of the recruiter avatar's facial expressions, which indicate how the recruiter perceives the applicant and help VR users engage in realistic and effective interactions. The technical limitations of VR contributing to motion sickness need to be addressed to create a comfortable and immersive VR experience for users. As the recruiting process is the first stage of an individual's professional career, initial biases formed during this process can cause pay disparities and other long-term effects down the road. Social constructs may lead candidates to believe that they are already at a disadvantage because of their social identities. In addition, a job interview tends to be a stressful setting, which can also hinder the candidate's performance. In this sense, virtual reality can be a promising tool for conducting job interviews once the technical limitations are overcome. As the study provides preliminary results with a small sample size, we could not identify gender biases in the job performance evaluation process. To identify these biases, the next step will be to expand the participation pool to a larger population that is not skewed toward university students. Another limitation of this study is that the participants did not include diverse gender groups (e.g., transgender and non-binary gender). Including non-binary gender groups would shed light on another aspect of how gender-swapped avatars shape users' interview experiences. Future studies are required to examine how the embodiment of avatars that present users' identities such as race, gender, age, and physical traits can form more inclusive experiences for diverse groups. § APPENDIX
http://arxiv.org/abs/2307.05891v1
20230712034224
PID-Inspired Inductive Biases for Deep Reinforcement Learning in Partially Observable Control Tasks
[ "Ian Char", "Jeff Schneider" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Deep reinforcement learning (RL) has shown immense potential for learning to control systems through data alone. However, one challenge deep RL faces is that the full state of the system is often not observable. When this is the case, the policy needs to leverage the history of observations to infer the current state. At the same time, differences between the training and testing environments makes it critical for the policy not to overfit to the sequence of observations it sees at training time. As such, there is an important balancing act between having the history encoder be flexible enough to extract relevant information, yet be robust to changes in the environment. To strike this balance, we look to the PID controller for inspiration. We assert the PID controller's success shows that only summing and differencing are needed to accumulate information over time for many control tasks. Following this principle, we propose two architectures for encoding history: one that directly uses PID features and another that extends these core ideas and can be used in arbitrary control tasks. When compared with prior approaches, our encoders produce policies that are often more robust and achieve better performance on a variety of tracking tasks. Going beyond tracking tasks, our policies achieve 1.7x better performance on average over previous state-of-the-art methods on a suite of high dimensional control tasks. [Code available at <https://github.com/IanChar/GPIDE>] § INTRODUCTION Deep reinforcement learning (RL) holds great potential for solving complex tasks through data alone, and there have already been exciting applications of RL to playing video games <cit.>, fine tuning language models <cit.>, and control of robots <cit.>. Despite these successes, there still remain significant challenges in controlling real-world systems that stand in the way of realizing RL's full potential <cit.>. One major hurdle is the issue of partial observability, resulting in a Partially Observable Markov Decision Process (POMDP). In this case, the true state of the system is unknown and the policy must leverage its history of observations. Another hurdle stems from the fact that policies are often trained in an imperfect simulator, which is likely different from the true environment. Combining these two challenges necessitates striking a balance between extracting useful information from the history and avoiding overfitting to modelling error. Therefore, introducing the right inductive biases to the training procedure is crucial. The use of recurrent network architectures in deep RL for POMDPs was one of the initial proposed solutions <cit.> and remains a prominent approach for control tasks <cit.>. 
These architectures are certainly flexible; however, it is unclear whether they are the best choice for control tasks, especially since they were originally designed with other applications in mind such as natural language processing. In contrast with deep RL methods, the Proportional-Integral-Derivative (PID) controller remains a cornerstone of modern control systems despite its simplicity and the fact that it is over 100 years old <cit.>. PID controllers are single-input single-output (SISO) feedback controllers designed for tracking problems, where the goal is to maintain a signal at a given reference value. The controller adjusts a single actuator based on the weighted sum of three terms: the current error between the signal and its reference, the integral of this error over time, and the temporal derivative of this error. PID controllers are far simpler than recurrent architectures and yet are still able to perform well in SISO tracking problems despite having no model for the system's dynamics. We assert that PID's success teaches us that in many cases only two operations are needed for successful control: summing and differencing. To investigate this assertion, we conduct experiments on a variety of SISO and multi-input multi-output (MIMO) tracking problems using the same featurizations as a PID controller to encode history. We find that this encoding often achieves superior performance and is significantly more resilient to changes in the dynamics during test time. The biggest shortcoming with this method, however, is that it can only be used for tracking problems. As such, we propose an architecture that is built on the same principles as the PID controller, but is general enough to be applied to arbitrary control problems. Not only does this architecture exhibit similar robustness benefits, but policies trained with it achieve an average of 1.7x better performance than previous state-of-the-art methods on a suite of high-dimensional control tasks. § PRELIMINARIES The MDP and POMDP We define the discrete time, infinite horizon Markov Decision Process (MDP) to be the tuple (𝒮, 𝒜, r, T, T_0, γ), where 𝒮 is the state space, 𝒜 is the action space, r: 𝒮×𝒜×𝒮→ℝ is the reward function, T: 𝒮×𝒜→Δ(𝒮) is the transition function, T_0 ∈Δ(𝒮) is the initial state distribution, and γ is the discount factor. We use Δ(𝒳) to denote the space of distributions over a set 𝒳. Importantly, the Markov property holds for the transition function, i.e. the distribution over a next state s' depends only on the current state, s, and current action, a. Knowing previous states and actions does not provide any more information. The objective is to learn a policy π: 𝒮→Δ(𝒜) that maximizes the objective J(π) = 𝔼[ ∑_t = 0^∞γ^t r(s_t, a_t, s_t + 1) ], where s_0 ∼ T_0, a_t ∼π(s_t), and s_t + 1∼ T(s_t, a_t). When learning a policy, it is often key to learn a corresponding value function, Q^π: 𝒮×𝒜→ℝ, which outputs the expected discounted returns after playing action a at state s and then following π afterwards. In a Partially Observable Markov Decision Process (POMDP), the observations that the policy receives are not the true states of the process. In control this may happen for a variety of reasons such as noisy observations made by sensors, but in this work we specifically focus on the case where aspects of the state space remain unmeasured. 
In any case, the POMDP is defined as the tuple (𝒮, 𝒜, r, T, T_0, Ω, 𝒪, γ), where Ω is the space of possible observations, 𝒪: 𝒮×𝒜→Δ(Ω) is the conditional distribution of seeing an observation, and the rest of the elements of the tuple remain the same as before. The objective remains the same as the MDP, but now the policy and value functions are not allowed access to the state. Crucially, the Markov property does not hold for observations in the POMDP. That is, where o_1:t+1 := o_1, o_2, …, o_t+1 are observations seen at times 1 through t+1, o_1:t-1⊥̸o_t+1 | o_t, a_t. A naive solution to this problem is to instead have the policy take in the history of the episode so far. Of course, it is usually infeasible to learn a policy that takes in the entire history for long episodes since the space of possible histories grows exponentially with the length of the episode. Instead, one can encode the information into a more compact representation. In particular, one can use an encoder ϕ which outputs an encoding z_t = ϕ(o_1:t, a_1:t-1, r_1:t-1) (note that encoders need not always take in the actions and rewards). Then, the policy and Q-value functions are augmented to take in (o_t, z_t) and (o_t, a_t, z_t), respectively. Tracking Problems and PID Controllers. We first focus on the tracking problem, in which there are a set of signals that we wish to maintain at given reference values. For example, in espresso machines the temperature of the boiler (i.e. the signal) must be maintained at a constant reference temperature, and a controller is used to vary the boiler's on-off time so the temperature is maintained at that value <cit.>. Casting tracking problems as discrete time POMDPs, we let o_t = (x^(1)_t, …, x^(M)_t, σ^(1)_t, …, σ^(M)_t) be the observation at time t, where x^(i)_t and σ^(i)_t are the i^th signal and corresponding reference value, respectively. The reward at time t is simply the negative error summed across dimensions, i.e. -∑_m=1^M | x_t^(m) - σ_t^(m)|. When dealing with a single-input single-output (SISO) system (with one signal and one actuator that influences the signal), one often uses a Proportional-Integral-Derivative (PID) controller: a feedback controller that is often paired with feedforward control. This controller requires no knowledge of the dynamics, and simply sets the action via a linear combination of three terms: the error (P), the integral of the error (I), and the derivative of the error (D). When comparing other architectures to the PID controller, we will use orange colored text and blue colored text to highlight similarities between the I and D terms, respectively. Concretely, the policy corresponding to a discrete-time PID controller is defined as π^PID(o_t) = K_P (x_t^(1) - σ_t^(1)) + K_I ∑_i = 1^t (x_i^(1) - σ_i^(1)) dt + K_D [(x_t^(1) - σ_t^(1)) - (x_t-1^(1) - σ_t-1^(1))]/dt where K_P, K_I, and K_D are scalar values known as gains that must be tuned. PID controllers are designed for SISO control problems, but many real-world systems are multi-input multi-output (MIMO). In the case of MIMO tracking problems, where there are M signals with M corresponding actuators, one can control the system with M separate PID controllers. However, this assumes there is a clear breakdown of which actuator influences which signal. Additionally, there are often interactions between the different signals, which the PID controllers do not account for. Beyond tracking problems, it is less clear how to use PID controllers without substantial engineering efforts. 
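To make the discrete-time controller above concrete, the following Python sketch implements a SISO PID policy that accumulates the error sum and differences successive errors, following the sign convention of the equation (error = signal - reference); the gains and time step are placeholder values that would normally be tuned for a given system.

# Minimal sketch of the discrete-time PID policy defined above.
# The gains and time step are placeholders, not tuned values.
class PIDController:
    def __init__(self, k_p, k_i, k_d, dt):
        self.k_p, self.k_i, self.k_d, self.dt = k_p, k_i, k_d, dt
        self.error_sum = 0.0      # running integral (I) term
        self.prev_error = None    # stored for the derivative (D) term

    def act(self, signal, reference):
        error = signal - reference
        self.error_sum += error * self.dt
        if self.prev_error is None:
            derivative = 0.0
        else:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.k_p * error + self.k_i * self.error_sum + self.k_d * derivative

controller = PIDController(k_p=1.0, k_i=0.1, k_d=0.05, dt=0.1)   # hypothetical gains
action = controller.act(signal=0.8, reference=1.0)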
§ METHODOLOGY To motivate the following, consider the task of controlling a tokamak: a toroidal device that magnetically confines plasma and is used for nuclear fusion. Nuclear fusion holds the promise of providing an energy source with few drawbacks and an abundant fuel source. As such, there has recently been a surge of interest in applying machine learning <cit.>, and especially RL <cit.>, for tokamak control. However, applying deep RL has the same problems as mentioned earlier; the state is partially observable since there are aspects of the plasma's state that cannot be measured in real time, and the policy must be trained before-hand on an imperfect simulator since operation of the actual device is extremely expensive. How should one choose a historical encoder with these challenges in mind? Previous works <cit.> suggest using Long Short Term Memory (LSTM) <cit.>, Gated Recurrent Units <cit.>, or transformers <cit.>. These architectures have been shown to be powerful tools in natural language processing, where there exist complicated relationships between words and how they are positioned with respect to each other. However, do the same complex temporal relationships exist in something like tokamak control? The fact that PID controllers have been successfully applied for feedback control on tokamaks suggests this may not be the case <cit.>. In reality, the extra flexibility of these architectures may become a hindrance when deployed on the physical device if they overfit to quirks in the simulator. In this section, we present two historical encoders that we believe have good inductive biases for control. They are inspired by the PID controller in that they only sum and difference in order to combine information throughout time. Following this, in Section <ref>, we empirically show the benefits of these encoders on a number of control tasks including tokamak control. The PID Encoder. Under the framework of a policy that uses a history encoder, the standard PID controller (<ref>) is simply a linear policy with an encoder that outputs the tracking error, the integral of the tracking error, and the derivative of the tracking error. This notion can be extended to MIMO problems and arbitrary policy classes, resulting in the PID-Encoder (PIDE). Given input o_1:t, this encoder outputs a 3M dimensional vector consisting of (x_t^(m) - σ_t^(m)), ∑_i = 1^t (x_i^(m) - σ_i^(m)) dt, and (x_t^(m) - σ_t^(m)) - (x_t -1^(m) - σ_t -1^(m))/dt ∀ m=1, …, M. For SISO problems, policies with this encoder have access to the same information as a PID controller. However, for MIMO problems the policy has access to all the information that each PID controller, acting in isolation, would have. Ideally a sophisticated policy would coordinate each actuator setting well. The Generalized PID Encoder. A shortcoming of PIDE is that it is only applicable to tracking problems since it operates over tracking error explicitly. A more general encoder should instead accumulate information over arbitrary features of each observation. With this in mind, we introduce the Generalized-PID-Encoder (GPIDE). GPIDE consists of a number of “heads”, each accumulating information about the history in a different manner. When there are H heads, GPIDE forms history encoding, z_t, through the following: v_i^h = f^h_θ(concatenate(o_i - 1, a_i-1, r_i-1, o_i - o_i - 1)) ∀ i ∈{1, …, t}, h ∈{1 …, H} w_t^h = ℓ^h(v_1:t^h) ∀ h ∈{1 …, H} z_t = g_θ(concatenate(w_t^1, w_t^2, …, w_t^h)) Here, GPIDE is parameterized by θ. 
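The following PyTorch sketch illustrates these three steps with a single summation-style head standing in for ℓ^h (the possible head types are described next); the module names, dimensions, and the choice of a summation head are illustrative assumptions rather than the authors' implementation.

# Minimal sketch of the GPIDE encoding steps above (summation head shown).
import torch
import torch.nn as nn

class SummationHead(nn.Module):
    def __init__(self, in_dim, proj_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, proj_dim)    # plays the role of f_theta^h

    def forward(self, feats):                      # feats: (t, in_dim)
        v = self.proj(feats)                       # v_1:t
        return v.sum(dim=0)                        # l^h: accumulate over time

class GPIDE(nn.Module):
    def __init__(self, obs_dim, act_dim, proj_dim=16, n_heads=4, enc_dim=64):
        super().__init__()
        in_dim = 2 * obs_dim + act_dim + 1         # (o_{i-1}, a_{i-1}, r_{i-1}, o_i - o_{i-1})
        self.heads = nn.ModuleList(
            [SummationHead(in_dim, proj_dim) for _ in range(n_heads)])
        self.decoder = nn.Linear(n_heads * proj_dim, enc_dim)   # g_theta

    def forward(self, prev_obs, prev_acts, prev_rews, obs):
        # Row i holds the concatenated (o_{i-1}, a_{i-1}, r_{i-1}, o_i - o_{i-1}) features.
        feats = torch.cat([prev_obs, prev_acts, prev_rews, obs - prev_obs], dim=-1)
        w = torch.cat([head(feats) for head in self.heads], dim=-1)
        return self.decoder(w)                     # history encoding z_t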
For head h, f^h_θ is a linear projection of the previous observation, action, reward, and difference between the current and previous observation to ℝ^D, and ℓ^h is a weighted summation of these projections. g_θ is a decoder which combines all of the information from the heads. A diagram of this process is shown in Figure <ref>. Notice that the key aspects of the PID controller are present here. The difference in observations is explicitly taken before the linear projection f^h_θ. We found that this simple method works best for representing differences when the observations are scalar descriptions of the state (e.g. joint positions). Although we do not consider image observations in this work, we imagine a similar technique could be done by taking the differences in image encodings. Like the integral term of the PID, ℓ^h also accumulates information over time. In the following, we consider several possibilities for ℓ^h, and we will refer to these different choices as “head types” throughout this work. We omit the superscript h below for notational convenience. Summation. Most in line with PID, the projections can be summed, i.e. ℓ(v_1:t) = ∑_i=1^t v_i. Exponential Smoothing. In order to weight recent observations more heavily, exponential smoothing can be used. That is, ℓ(v_1:t) = (1- α)^t-1 v_1 + ∑_i=2^t α (1-α)^t-iv_i, where 0 ≤α≤ 1 is the smoothing parameter. Unlike summation, this head type cannot accumulate information in the same way because it is a convex combination. Attention. Instead of hard-coding a weighted summation of the projections, this weighting can be learned through attention <cit.>. Attention is one of the key components of transformers because of its ability to learn relationships between tokens. To implement this, two additional linear functions should be learned that project concatenate(o_i-1, a_i-1, r_i-1, o_i - o_i-1) to ℝ^D. These new projections are referred to as they key and query vectors, denoted as k_i and q_i respectively. The softmax between their inner products is then used to form the weighting scheme for v_1:t. We can rewrite the first two steps of GPIDE for a head that uses attention as v_i, k_i, q_i = f_θ(concatenate(o_i - 1, a_i-1, r_i-1, o_i - o_i - 1)) ∀ i ∈{1, …, t} w_1:t = ℓ(q_1:t, k_1:t, v_1:t) = softmax(q_1:t k_1:t^T/√(D))v_1:t Here, q_1:t, k_1:t, and v_1:t are treated as t × D dimensional matrices. Since it results in a convex combination, attention has the capacity to reproduce exponential smoothing but not summation. § RELATED WORK A control task may be partially observable for a myriad of reasons including unmeasured state variables <cit.>, sensor noise<cit.>, and unmeasured system parameters <cit.>. When there are unmeasured system parameters, this is usually framed as a meta-reinforcement learning (MetaRL) <cit.> problem. This is a specific subclass of POMDPs where there is a collection of MDPs, and each episode, an MDP is sampled from this collection. Although these works do consider system parameters varying between episodes, the primary focus of the experiments usually tends to be on the multi-task setting (i.e. different reward functions instead of transition functions) <cit.>. We consider not only differing system parameters but also the presence of unmeasured state variables; therefore, the class of POMDPs considered in this paper is broader than the one studied in MetaRL. Using recurrent networks has long been an approach for tackling POMDPs <cit.>, and is still a common way to do so in a wide variety of settings <cit.>. 
Moreover implementations are publicly available both for on-policy <cit.> and off-policy <cit.> algorithms, making it an easy pick for those wanting a quick solution. Some works <cit.> use recurrent networks to estimate the belief state <cit.>, which is a distribution over the agent's true state. However, <cit.> recently showed that well-implemented, recurrent versions of SAC <cit.> and TD3 <cit.> perform competitively with many of these specialized algorithms. In either case, we believe works that estimate the belief state are not in conflict with our own since their architectures can be modified to use GPIDE instead of a recurrent unit. Beyond recurrent networks, there has been a surge of interest in applying transformers to reinforcement learning <cit.>. However, we were unable to find many instances of transformers being used as history encoders in the online setting, perhaps because of their difficulty to train. <cit.> introduced a new architecture to remedy these difficulties; however, <cit.> applied transformers to MetaRL and asserted that careful weight initialization is the only thing needed for stability in training. We note that GPIDE with only attention heads is similar to a single multi-headed self-attention block that appears in many transformer architectures; however, we show that attention is the least important type of head in GPIDE and often hurts performance (see Section <ref>). Perhaps closest to our proposed architecture is PEARL <cit.>, which does a multiplicative combination of Gaussian distributions corresponding to each state-action-reward tuple. However, their algorithm is designed for the MetaRL setting specifically. Additionally, we note that the idea of summations and averaging has been shown to be powerful in prior works. Specifically, <cit.> introduced the Statistical Recurrent Unit, an alternative architecture to LSTMs and GRUs that leverages moving averages and performs competitively across several supervised learning tasks. Lastly, we note there are many facets of RL where improvements can be made to robustness, and many works focus on altering the training procedure. They use techniques such as optimizing the policy's worst-case performance <cit.> or using variational information bottlenecking (VIB) <cit.> to limit the information used by the policy <cit.>. In contrast, our work specifically focuses on how architecture choices of history encoders affect robustness, but we note our developments can be used in conjunctions with these other directions, possibly resulting in improved robustness. § EXPERIMENTS In this section, we experimentally compare PIDE and GPIDE against recurrent and transformer encoders. In particular, we explore the following questions: * How does the performance of a policy using PIDE or GPIDE do on tracking problems? In addition, how well can policies adapt to different system parameters and how robust to modelling error are they on these problems? (Section <ref>) * Going beyond tracking problems, how well does GPIDE perform on high dimensional locomotion control tasks (Section <ref>) * How important is each type of head in GPIDE? (Section <ref>) For the following tracking problems we use the Soft Actor Critic (SAC) <cit.> algorithm with each of the different methods for encoding observation history. Following <cit.>, we make two separate instantiations of the encoders for the policy and value networks, respectively. 
Since the tracking problems are relatively simple, we use a small policy network consisting of 1 hidden layer with 24 units; however, we found that we still needed to use a relatively large Q network consisting of 2 hidden layers with 256 units each to solve the problems. All hyperparameters remain fixed across baselines and tracking tasks; only the history encoders change. For the recurrent encoder, we use a GRU and follow the implementation of <cit.> closely. Our transformer encoder closely resembles the GPT2 architecture <cit.>, and it also includes positional encodings for the observation history. For GPIDE, we use H = 6 heads: one summation head, two attention heads, and three exponential smoothing heads (with α = 0.25, 0.5, 1.0). This choice was not optimized, but rather was picked so that all types of heads were included and so that GPIDE has roughly the same amount of parameters as our GRU baseline. For additional reference we compare each of these RL methods with a tuned PID controller. Not only do PID controllers have an incredibly small number of parameters compared to the other RL-based controllers, but the training procedure is also much more straightforward since it can be posed as a black-box optimization over the returns. All methods are built on top of the rlkit library <cit.>. More details about implementations and hyperparameters can be found in Appendices <ref> and <ref>, respectively. Implementations can be found at <https://github.com/IanChar/GPIDE>. §.§ Tracking Problems In this subsection we consider a number of tracking problems. For each environment, the observation consists of the current signals, the reference values, and additional information about the last action made. Unless stated otherwise, the reward is as described in Section <ref>. More information about environments can be found in Appendix <ref>. To make a fair comparison against PID controls, we choose to only encode the history of observations. For evaluation, we use 100 fixed settings of the environment (each setting consists of targets and system parameters). To avoid overfitting to these 100 settings, we used a separate set of 100 settings and averaged over 3 seeds when developing our methods. We evaluate policies throughout training, but report the average over the last 10% of evaluations as the final returns. We allow each policy to collect one million environment transitions, and all scores are averaged over 5 seeds. Lastly, each table shows scores formed by scaling the returns by the best and worst average returns across all methods in a particular variant of the environment, where scores of 0 and 100 correspond to the worst and best returns respectively. Mass Spring Damper Tracking The first tracking task is the control of a classic 1D toy physics system in which there is a mass attached to a wall by a spring and damper. The goal is then to apply a force to the mass in order to move it to a given reference location. There are three system parameters to consider here: the mass, spring constant, and damping factor. We also consider the substantially more difficult problem in which there are two masses sandwiched between two walls, and the masses are connected to the walls and each other by springs and dampers (see Appendix <ref> for a diagram of this). Overall there are eight system parameters (three spring constants, three damping factors, and two masses) and two actuators (a force applied to each mass). 
We refer to the first problem as Mass-Spring-Damper (MSD) and the second problem as Double-Mass-Spring-Damper (DMSD). Additionally, we test how adaptive these policies are by changing system parameters in a MetaRL-type fashion (i.e. for each episode we randomly select system parameters and then fix them for the rest of the episode). Similar to <cit.>, we train the policies on three versions of the environment: one with no variation in system parameters, one with a small amount of variation, and one with a large amount of variation. We evaluate all policies on the version of the environment with large system parameter variation to test generalization capabilities. Table <ref> shows the scores achieved for each of the settings. While GRU and transformers seem to do a good job at encoding history for the MSD environment, both are significantly worse on the more complex DMSD task when compared to our proposed encoders. This is true especially for GRU, which performs worse than two independent PID controllers for every configuration. Additionally, while it seems that GRU can generalize to large amounts of variation in system parameters when a small amount is present, it fails horribly when trained on fixed system parameters. On the other hand, transformers are able to generalize surprisingly well when trained on both fixed system parameters and with small variation. We hypothesize the autoregressive nature of GRU may make it particularly susceptible to overfitting. Comparing PIDE and GPIDE, we see that PIDE tends to shine in the straightforward cases where there is little change in system parameters, whereas GPIDE is able to adapt when there is a large variation in parameters since it has additional capacity. Navigation Environment To emulate the setting where the policy is trained on an imperfect simulator, we consider an environment in which the agent is tasked with moving itself across a surface to a specified 2D target as quickly and efficiently as possible. At every point in time, the agent can apply some force to move itself, but a penalty term proportional to the magnitude of the force is subtracted from the reward. Suppose that we have access to a simulator of the environment that is perfect except for the fact that it does not model friction between the agent and the surface. We refer to this simulator and the real environment as the “No Friction“ and “Friction” environment, respectively. In both environments, the mass of the agent is treated as a system parameter that is sampled for each episode; however, the Friction environment has a larger range of masses and also randomly samples the coefficient of friction each episode. Figure <ref> shows the average returns recorded during training for both navigation environments and when the policies trained in No Friction are evaluated in Friction. A table of final scores can be found in Appendix <ref>. One can see that GPIDE not only achieves the best returns in the environments it was trained in, but is also robust when going from the frictionless environment to the one with friction. On the other hand, PIDE has less capacity and therefore cannot achieve the same results; however, it is immediately more robust than the other methods, although it begins to overfit over time. It is also clear that using GRU is less sample efficient and less robust to changes in the test environment. Tokamak Control For our last tracking experiment we return to tokamak control. 
In particular, we focus on the DIII-D tokamak, a device operated by General Atomics in San Diego, California. We aim to control two quantities: β_N, the normalized ratio between plasma and magnetic pressure, and rotation, i.e. how fast the plasma is spinning around the toroid. These are important quantities to track because β_N serves as an approximate economic indicator and rotation control of the plasma has been suggested to be key for stability <cit.>. The policy has control over the eight neutral beams <cit.>, which are able to inject power and torque by blasting neutrally charged particles into the plasma. Importantly, two of the eight beams can be oriented in the opposite direction from the others, which decouples the total combined power and torque to some extent (see Figure <ref>). Figure <ref>: Illustration of DIII-D from Above. Each beamline in the figure contains two independent beams (yellow boxes). The plasma is rotating counter-clockwise and the two beams in the bottom left of the figure are oriented in the counter-current direction, allowing power and torque to be decoupled. This figure gives a rough idea of beam positioning but is not physically accurate. To emulate the sim-to-real training experience, we create a simulator based on the equations described in <cit.> and <cit.>. This simulator has two major shortcomings: it assumes that certain states of the plasma (e.g. its shape) are fixed for entire episodes, and it assumes that there are no events that cause loss of confinement of the plasma. We make up for part of the former by randomly sampling plasma states each episode. The approximate "real" environment addresses these shortcomings by using a network trained on historical data as the transition function (an approach which has been shown to model the true system relatively accurately <cit.>). The network has access to a greater set of the plasma's state in order to predict β_N and rotation, and we "replay" historical data in order to emulate the evolution of the plasma's state for each episode. Furthermore, the additional information provided to the network is rich enough that loss of confinement events play a role in the dynamics. We consider two versions of this task: the first is a SISO task where total power is controlled to achieve a β_N target, and the second is a MIMO task where total power and torque are controlled to achieve β_N and rotation targets. The results for both of these tasks are shown in Table <ref>. Most of the RL techniques are able to do well if tested in the same environment they were trained in; the exception to this is PIDE, which curiously is unable to perform well in the simulator environment. While no reinforcement learning method matches the robustness of a PID controller, policies trained with GPIDE fare significantly better. §.§ High Dimensional Locomotion Moving past tracking problems, we evaluate GPIDE on the PyBullet <cit.> benchmark proposed by <cit.> and adapted in <cit.>. The benchmark has four robots: halfcheetah, hopper, walker, and ant. For each of these, either the current position information or velocity information is hidden from the agent. Except for the GPIDE and transformer encoders, we use all of the performance traces given by <cit.>. In addition to SAC, they also train using PPO <cit.>, A2C <cit.>, TD3 <cit.>, and VRM <cit.>, a variational method that uses recurrent units to estimate the belief state.
We reproduce as much of the training and evaluation procedure as possible, including using the same hyperparameters in the SAC algorithm and giving the history encoders access to actions and rewards. For more information see Appendix <ref>. Table <ref> shows the performance of GPIDE along with a subset of the best performing methods (more results can be found in Appendix <ref>). These results make it clear that GPIDE is powerful in arbitrary control tasks besides tracking, as it dominates performance for every robot except HalfCheetah. The average score it achieves across all tasks is a 73% improvement over TD3-GRU, which we believe is the previous state-of-the-art. §.§ GPIDE Ablations Figure <ref>: Averaged Attention Schemes for MSD-Small and HalfCheetah-P. Each y-position on the grid corresponds to an amount of history being recorded, and each x-position corresponds to a time point in that history. As such, each of the left-most points is the oldest observation in the history, and the diagonals correspond to the most recent observation. The darker the blue, the greater the weight that is assigned to that time point. To investigate the role of each type of head, we reran all experiments with three variants of GPIDE: one with six exponential smoothing heads (ES), one with five exponential smoothing heads and one summation head (ES + Sum), and one with six attention heads (see Appendix <ref> for details). Table <ref> shows the differences in the average scores for each environment. The first notable takeaway is that having summation is often important in some of the more complex environments. The other takeaway is that much of the heavy lifting is being done by the exponential smoothing. GPIDE fares far worse when only having attention heads, especially in DMSD and the PyBullet environments. We visualize some of the attention schemes learned by GPIDE for MSD with small variation and HalfCheetah (Figure <ref>). While the attention scheme learned for MSD could potentially be useful since it recalls information from near the beginning of the episode when the most movement is happening, it appears that the attention scheme for HalfCheetah is simply a poor reproduction of exponential smoothing, making it redundant and suboptimal. In fact, we found this phenomenon to be true across all attention heads and PyBullet tasks. We believe that the periodicity that appears here is due to the oscillatory nature of the problem and lack of positional encoding (although we found including positional encoding degrades performance). § DISCUSSION In this work, we introduced the PIDE and GPIDE history encoders to be used for reinforcement learning in partially observable control tasks. Although both are far simpler than prior methods of encoding, they often result in powerful yet robust controllers. We hope that this work inspires the research community to think about how pre-existing control methods can inform architecture choices. Limitations There are many different ways a control task may be partially observable, and we do not believe that our proposed methods are solutions to all of them. For example, we do not think GPIDE is necessarily suited for tasks where the agent needs to remember events (e.g. picking up a key to unlock a door). Additionally, this work focuses on cases where observations are in the form of scalar descriptors; we leave extending these ideas to images as future work.
plainnat § IMPLEMENTATION DETAILS Code Release All code for implementations are provided in the supplemental material along with instructions for how to run experiments. The only experiment that cannot be run are the “real” cases for tokamak control. Architecture We use the same general architecture for each of the RL methods in this paper (see Figure <ref>). Each input to the history encoders, policy functions, and Q-value functions have corresponding encoders. This setup closely follows what was done in <cit.>. The encoders are simply linear projections; however, in the case of our GRU history encoder we do linear projections followed by a ReLU activation (as done in <cit.>). Although hypothetically the policy only needs to take in history encoding, z_t, since the current observation, we found it essential for the current observation to be passed in independently and have its own encoder. §.§ GPIDE Implementation Details In addition to what is mentioned in Section <ref>, we found that there were several choices that helped with training. First, there may be some scaling issues because o_t - o_t-1 may be small or the result of summation type heads may result in large encodings. To account for this, we use batch normalization layers <cit.> before each input encoding and after each ℓ^h. There are very few nonlinear components of GPIDE. The only one that remains constant across all experiments is that a tanh activation is used for the final output of the encoder. For tracking tasks, the decoder g_θ has 1 hidden layer with 64 units and uses a ReLU activation function. For PyBullet tasks, g_θ is a linear function. §.§ Recurrent and Transformer Baseline Details Recurrent Encoder. For the recurrent encoder, we tried to match as many details as <cit.> as possible. We double checked our implementation against theirs and confirmed that it achieves similar performance. Transformer Encoder. We follow the GPT2 architecture <cit.> for inspiration, and particularly the code provided in <cit.>. In particular, we use a number of multi-headed self-attention blocks in sequence with residual connections. We use layer normalization <cit.> before multi-headed attention and out projections; however, we do not use dropout. The out projection for each multi-headed self-attention block has one hidden layer with four times the number of units as the embedding dimension. Although <cit.> suggests using T-Fixup weight initialization, we found that more reliably high performance was achieved with the weight initialization of <cit.>. Lastly, we used the same representation for the history as GPIDE, i.e. (o_t - 1, a_t - 1, r_t - 1, o_t - o_t - 1), since it results in better performance. §.§ PID Baseline To tune our PID baseline, we used Bayesian Optimization over the three (for SISO) or six (for MIMO) dimensional space. Specifically we use the library provided by <cit.>. The output of the blackbox optimization is the average over 100 different settings (independent from the 100 settings used for testing). We allow the optimization procedure to collect as many samples as the RL methods. The final performance reported uses the PID controller with the best gains found during the optimization procedure. The bounds for each of the tracking tasks were eyeballed to be appropriate, which potentially preferably skews performance. § HYPERAPARAMETERS Because of resource restrictions, we were unable to do full hyperparameter tuning for each benchmark presented in this paper. 
Instead, we focused on ensuring that all history encoding methods were roughly comparable, e.g. dimension of encoding, number of parameters, etc. Tables <ref> show selected hyperparameters, and the following subsections describe how an important subset of these hyperparameters were picked. Any tuning that was done was over three seeds using 100 fixed settings (different from the 100 settings used for testing). §.§ Hyperparamters for Tracking Tasks For tracking tasks, we tried using a history encoding size of 32 and 64 for GRU, and we found that performance was better with 64. This is surprising since PIDE can perform well in these environments even though its history encoding is much smaller (3 or 6 dimensional). To make it a fair comparison, we set the history encoding dimension for GPIDE and transformer to be 64 as well. We use one layer for GRU. For the transformer-specific hyperparameters we choose half of what appears in the PyBullet tasks. §.§ Hyperparameters for PyBullet Task For the PyBullet tasks, we simply tried to emulate most of the hyperparameters found in <cit.>. For the transformer, we choose to use similar hyperparameters to those found in <cit.>. However, we found that, unlike the tracking tasks, positional encoding hurts performance. As such, we do not include it for PyBullet experiments. §.§ Hyperparameters for Ablations For the ablations of GPIDE, we use α = 0.01, 0.1, 0.25, 0.5, 0.9, 1.0 for the smoothing parameters when only exponential smoothing is used. When using exponential smoothing and summation, the α = 0.01 head is replaced with a summation head. The attention version of GPIDE replaces all six of these heads with attention. § ENVIRONMENT DESCRIPTIONS §.§ Mass Spring Damper For both MSD and DMSD, the observations include the current mass position(s), the target reference position(s), and the last action played. Each episode lasts for 100 time steps. For all RL methods, the action is a difference in force applied to the mass, but for the PID the action is simply the force to be applied to the mass at that time. The force is bounded between -10 and 10 N for MSD and -30 and 30 N for DMSD. Each episode, system parameters are drawn from a uniform distribution with bounds shown in Table <ref> (they are the same for both MSD and DMSD). Targets are drawn to uniformly at random to be -1.5 to 1.5 m offset from the masses' resting positions. §.§ Navigation Environment Like the MSD and DMSD environments, the navigation experiment lasts 100 time steps each episode. Additionally, the observation includes position signal, target locations, and the last action. For all methods we set the action to be the change in force, and the total amount of force is bounded between -10 and 10 N. The penalty on the reward is equal to 0.01 times the magnitude of the change in force. In addition, the maximum magnitude of the velocity for the agent is bounded by 1.0 m/s. The agent always starts at the location (0, 0), and the target is picked uniformly at random to be within a box of length 10 centered around the origin. Every episode, the mass, kinetic friction coefficient, and static friction coefficient is sampled, The friction is sampled by first sampling the total amount of friction in the system, and then sampling what proportion of the total friction is static friction. All distributions for the system parameters are uniform, and we show the bounds in Table <ref>. 
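For illustration, a minimal sketch of the navigation dynamics described above is given below. It is not the authors' environment code: the integration step dt, the exact friction model, the reward's tracking term (taken here to be the negative Euclidean distance to the target), and the sampling bounds marked as placeholders are assumptions on our part; only the action-as-change-in-force convention, the ±10 N force bound, the 1 m/s speed bound, the 0.01 force-change penalty, and the start and target distributions come from the text.

```python
import numpy as np

class NavigationSketch:
    """Illustrative 2D navigation environment with sampled mass and friction."""

    def __init__(self, rng):
        self.rng = rng

    def reset(self):
        self.mass = self.rng.uniform(0.5, 2.0)           # placeholder bounds for Table values
        total_friction = self.rng.uniform(0.0, 5.0)      # placeholder bounds
        static_frac = self.rng.uniform(0.0, 1.0)
        self.static_friction = total_friction * static_frac
        self.kinetic_friction = total_friction - self.static_friction
        self.pos = np.zeros(2)                           # agent starts at (0, 0)
        self.vel = np.zeros(2)
        self.force = np.zeros(2)
        self.target = self.rng.uniform(-5.0, 5.0, size=2)  # box of length 10 around the origin
        return self._obs()

    def _obs(self):
        return np.concatenate([self.pos, self.target, self.force])

    def step(self, delta_force, dt=0.1):
        # Action is a change in force; total force is clipped to [-10, 10] N per axis.
        self.force = np.clip(self.force + delta_force, -10.0, 10.0)
        speed = np.linalg.norm(self.vel)
        if speed < 1e-8 and np.linalg.norm(self.force) <= self.static_friction:
            accel = np.zeros(2)                          # static friction holds the agent still
        else:
            friction = -self.kinetic_friction * self.vel / max(speed, 1e-8)
            accel = (self.force + friction) / self.mass
        self.vel = self.vel + accel * dt
        speed = np.linalg.norm(self.vel)
        if speed > 1.0:                                  # velocity magnitude bounded by 1 m/s
            self.vel *= 1.0 / speed
        self.pos = self.pos + self.vel * dt
        reward = -np.linalg.norm(self.pos - self.target) - 0.01 * np.linalg.norm(delta_force)
        return self._obs(), reward
```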
§.§ Tokamak Control Environment Simulator Our simulator version of the tokamak control is inspired by equations used by <cit.>. In particular, we use the following relations for stored energy, E, and rotation, v_rot: Ė = P - E/τ_E v̇_rot = C_rotT - v_rot/τ_m where P is the total power, T is the total torque, τ_E is the energy confinement time, τ_m is the momentum confinement time, and C_rot is a quantity relying on the ion density and major radius of the plasma. We treat τ_m and C_rot is constants with values of 0.1 and 80.0 respectively. We base the energy confinement time off of the ITERH-98 scaling <cit.>. This uses many measurements of the plasma, but we focus on a subset of these and treat the rest as constants. In particular, τ_E = C_E I^0.95 B^0.15 P^-0.69 where C_E is a constant value we set to be 200, I is the plasma current, and B is the toroidal magnetic field. To relate the stored energy to β_N we use the rough approximation β_N = C_β(a B/I) E where C_β is a constant we set to be 5, and a is the minor radius of the plasma. For a, I, and B, we sample these from the distribution described in Table <ref> for each episode. Lastly, we add momentum to the stored energy. That is, the stored energy derivative at time t, Ė_t, is Ė_t = 0.5 ( P_t - E_t/τ_E) + 0.5 Ė_t-1 The actions for all control methods is the amount of change for the power and torque. Because the total amount of power and torque injected rely on the beams, they are not totally disentangled. In Figure <ref>, we show the bounds for the action space. Furthermore, we bound the amount that power and torque can be changed by roughly 40 MW/s and 35 Nm/s, respectively. Each step is 0.025 seconds. Each episode lasts for 100 increments of 0.025 seconds. The observations are the current β_N and rotation values, their reference values, and the current power and torque settings. We make the initial β_N and rotation relatively small in order to simulate the plasma ramping up. We let the β_N and rotation targets be distributed as 𝒰(1.75, 2.75) and 𝒰(25.0, 50.0) rad/s, respectively. “Real” For the real versions of the tokamak control experiments, most of the previous (such as action bounds and target distributions) stays the same. The transition function is modeled as a recurrent neural network trained on 7,536 different runs of the DIII-D device. The network uses a GRU, has four hidden layers with 512 units each, and outputs the mean and log variance of a normal distribution describing how β_N and rotation will change. In addition to power and torque, it takes in measurements for the plasma current, the toroidal magnetic field, n1rms (a measurement related to the plasma' stability), and 13 other actuator requests for gas control and plasma shaping. In addition to sampling from the normal distribution outputted by the network, we train an ensemble of ten networks, and an ensemble member is selected every episode. We use five of these models during training and the other five during testing. Along with an ensemble member being sampled each episode, we also sample a historical run from the dataset, which determines the starting conditions of the plasma and how the other inputs to the neural network which are not modelled evolve over time. Recall that 100 fixed settings are used to evaluate the policy every epoch of training. In this case, a setting consists of targets, an ensemble member, and a historical run. § FURTHER RESULTS In this Appendix, we give further evaluation of the evaluation procedure. 
In addition, we give full tables of results for normalized and unnormalized scores for all methods. We also show performance traces. Note that the percentage changes in Table <ref> do not necessarily reflect tables in this section since they report all combinations of environment variants. §.§ Evaluation Procedure As stated in the main paper, for tracking tasks, we fix 100 settings (each comprised of targets, start state, and system parameters) that are used to evaluate the policy for every epoch of training (i.e. for every epoch the evaluation return is the average over the returns of all 100 settings). We use a separate 100 settings when tuning. For the final returns, we average over the last 10% of recorded evaluations. For the PyBullet tasks, we use ten different rollouts for evaluation following <cit.>. We also average over the last 20% of recorded evaluations like they do. Normalized Table Scores. We now give an in-depth explanation of how the scores in the table are computed. Let π_(b, i) be the policy trained with baseline method b (e.g. with GPIDE, transformer, or GRU encoder) on environment variant i (e.g. fixed, small, or large). Let J_j(π_(b, i)) be the evaluation of policy π_(b, i), i.e. the average returns over all seeds and episodes. The normalized score for policy π_(b, i) on variant j is then (J_j(π_(b, i)) - min_{b', i'} J_j(π_(b', i'))) / (max_{b', i'} J_j(π_(b', i')) - min_{b', i'} J_j(π_(b', i'))). Note that we only take the min and max over baseline methods presented in the table. For PyBullet tasks, we do the same procedure but normalize by the oracle policy's performance (sees both position and velocity) and the Markovian policy's performance (sees only position or velocity but has no history encoder). For both of these policies, we use what was reported from <cit.>. Note that our normalized scores differ slightly from those used in <cit.> since they normalize based on the best and worst returns of any policy; however, we believe our scheme gives a more intuitive picture of how any given policy is performing. §.§ MSD and DMSD Results §.§ Navigation Results §.§ Tokamak Control Results §.§ PyBullet Results For these results, SAC encodes observations, actions and rewards. TD3 encodes observations and actions since it is the best performing on average. Interestingly, we found that GPIDE policies often outperform the oracle policy on Hopper-P. While the oracle performance here was taken from <cit.>, we confirmed this also happens with our own implementation of an oracle policy. We hypothesize that this may be due to the fact that the GPIDE policy gets to see actions and rewards and the oracle does not. §.§ Attention Scheme Visualizations We generate the attention visualizations (as seen in Figure <ref>) by doing a handful of rollouts with a GPIDE policy using only attention heads. During these rollouts we collect all of the weighting schemes, i.e. softmax(q_1:t k_1:t^T/√(D)), generated throughout the rollouts and average them together. Below, we show additional attention visualizations. In all figures, each plot shows one of the six different heads. For each of these, the policies were evaluated on the same version of the environment they were trained on. §.§ Experiments Using VIB + GRU As shown in this work, there are often cases where using a GRU in particular results in a policy that is not robust to changes in the dynamics. One may wonder whether using other robust RL techniques can mask this inadequacy of the GRU.
To test this, we look at adding Variational Information Bottlenecking (VIB) to our GRU baseline <cit.>. Previous works applying this concept to RL usually do not consider the same class of POMDPs as us <cit.>; however, <cit.> does have a baseline that uses VIB with a recurrent policy. To use VIB with RL, we alter the policy network so that it encodes input to a latent random variable, and the decodes into an action. Following the notation of <cit.>, let this latent random variable be Z and the random variable representing the input of the network be S. The goal is to learn a policy that maximizes J(π) subject to I(Z, S) ≤ I_C, where I(Z, S) is the mutual information between Z and S, and I_C is some given threshold. In practice, we optimize the Lagrangian. Where β is a Lagrangian multiplier, p(Z|S) is the conditional density of Z outputted by the encoder, and q(Z) is the prior, the penalizer is -β𝔼_S[D_KL(p(Z|S) || q(Z) )]. Like other works, we assume that q(Z) is a standard multivarite normal. We alter our GRU baseline for tracking tasks so that the policy uses VIB. This is not entirely straightforward since our policy network is already quite small. We choose to keep as close to original policy architecture as possible and set the dimension of the latent variable, Z, to be 24. Note that this change has no affect on the history encoder; this only affects the policy network. For our experiments, we set β=0.1, but we note that we may be able to achieve better performance through more careful tuning or annealing of β. In any case, we do see that VIB helps with robustness in many instances (see Table <ref>). However, the cases where there are improvements are instances where the GRU policy already did a good job at generalizing to the test environment. These are primarily the MSD and DMSD environments where the system parameters drawn during training time are simply a subset of those drawn during testing time. Interestingly, this notion of dynamics generalization matches the set up of the experiments presented in <cit.>. Surprisingly, in the navigation and tokamak control experiments, where there are more complex differences between the train and test environments, VIB can sometimes hurt the final performance. § COMPUTATION DETAILS We used an internal cluster of machines to run these experiments. We mostly leveraged Nvidia Titan X GPUs for this, but also used a few Nvidia GTX 1080s. It is difficult to get an accurate estimate of run time since job loads vary drastically on our cluster from other users. However, to train a single policy on DMSD to completion (1 million transitions collected, or 1,000 epochs) using PIDE takes roughly 4.5 hours, using GPIDE takes roughly 17.25 hours, using a GRU takes roughly 14.5 hours, and using a transformer takes roughly 21 hours. This is similar for other tracking tasks. For PyBullet tasks, using GPIDE took roughly 43.2 hours and using a transformer took roughly 64.2 hours. We note that our implementation of GPIDE is somewhat naive and could be vastly improved. In particular, for exponential smoothing and summation heads, w_t can be cached to save on compute, which is not being done currently. This is a big advantage in efficiency that GPIDE (especially one without attention heads) has over transformers.
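To illustrate the caching remark above: summation and exponential-smoothing heads can maintain w_t with a constant-time update per step, whereas attention heads must re-weight the entire history. A minimal sketch of such an incremental update (our own illustration, not part of the released code):

```python
class RunningHead:
    """Maintains w_t for a summation or exponential-smoothing head without re-scanning v_1:t."""

    def __init__(self, alpha=None):
        self.alpha = alpha   # None -> summation head; otherwise the smoothing parameter
        self.w = None

    def update(self, v_t):
        if self.w is None:
            self.w = v_t                                            # w_1 = v_1 for both head types
        elif self.alpha is None:
            self.w = self.w + v_t                                   # summation: w_t = w_{t-1} + v_t
        else:
            self.w = (1 - self.alpha) * self.w + self.alpha * v_t   # exponential smoothing
        return self.w
```

The recursive smoothing update reproduces the weighting (1-α)^{t-1} v_1 + ∑_{i=2}^t α(1-α)^{t-i} v_i exactly, which is why only the current w_t needs to be stored between steps.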
http://arxiv.org/abs/2307.04447v1
20230710095733
Combinatorial Nullstellensatz and Turán numbers of complete $r$-partite $r$-uniform hypergraphs
[ "Alexey Gordeev" ]
math.CO
[ "math.CO" ]
Combinatorial Nullstellensatz and Turán numbers of complete r-partite r-uniform hypergraphs Alexey Gordeev =========================================================================================== In this note we describe how Lasoń's generalization of Alon's Combinatorial Nullstellensatz gives a framework for constructing lower bounds on the Turán number ex(n, K^(r)_s_1,…,s_r) of the complete r-partite r-uniform hypergraph K^(r)_s_1,…,s_r. To illustrate the potential of this method, we give a short and simple explicit construction for the Erdős box problem, showing that ex(n, K^(r)_2,…,2) = Ω(n^{r - 1/r}), which asymptotically matches best known bounds when r ≤ 4. § INTRODUCTION §.§ Turán numbers of complete r-partite r-uniform hypergraphs A hypergraph H = (V, E) consists of a set of vertices V and a set of edges E, each edge being some subset of V. A hypergraph is r-uniform if each edge in it contains exactly r vertices. An r-uniform hypergraph is r-partite if its set of vertices can be represented as a disjoint union of r parts with every edge containing one vertex from each part. The complete r-partite r-uniform hypergraph with parts of sizes s_1, …, s_r contains all s_1 ⋯ s_r possible edges and is denoted by K^(r)_s_1, …, s_r. Let H be an r-uniform hypergraph. The Turán number ex(n, H) is the maximum number of edges in an r-uniform hypergraph on n vertices containing no copies of H. A classical result of Erdős <cit.> implies that for s_1 ≤…≤ s_r, ex(n, K^(r)_s_1, …, s_r) = O(n^{r - 1/(s_1 ⋯ s_{r-1})}). In <cit.>, Mubayi conjectured that bound (<ref>) is asymptotically tight. Recently, Pohoata and Zakharov <cit.> showed that this is true whenever s_1, …, s_r ≥ 2 and s_r ≥ ((r - 1)(s_1 ⋯ s_{r-1} - 1))! + 1, extending earlier results of Alon, Kollár, Rónyai and Szabó <cit.> and Ma, Yuan and Zhang <cit.>. Nevertheless, the conjecture remains open even in the special case ex(n, K^(r)_2,…, 2), which is often referred to as the Erdős box problem. The best known lower bound is due to Conlon, Pohoata and Zakharov <cit.>, who showed that for any r ≥ 2, ex(n, K^(r)_2,…, 2) = Ω(n^{r - ⌈2^{r-1}/r⌉^{-1}}). §.§ Generalized Combinatorial Nullstellensatz Let 𝔽 be an arbitrary field, and let f ∈𝔽[x_1,…,x_r] be a polynomial in r variables. A monomial x_1^d_1⋯ x_r^d_r is a monomial of a polynomial f if the coefficient of x_1^d_1⋯ x_r^d_r in f is non-zero. Recall the famous Combinatorial Nullstellensatz by Alon (see Theorem 1.2 in <cit.>). Let x_1^d_1⋯ x_r^d_r be a monomial of f, and let deg f ≤ d_1 + … + d_r. Then for any subsets A_1, …, A_r of 𝔽 with sizes |A_i| ≥ d_i + 1, f does not vanish on A_1×…× A_r, i.e. f(a_1,…,a_r) ≠ 0 for some a_i ∈ A_i. A monomial x_1^d_1⋯ x_r^d_r of f is maximal if it does not divide any other monomial of f. Lasoń showed the following generalization of Combinatorial Nullstellensatz (see Theorem 2 in <cit.>). It should be mentioned that an even stronger theorem was proved by Schauz in 2008 (see Theorem 3.2(ii) in <cit.>). Let x_1^d_1⋯ x_r^d_r be a maximal monomial of f. Then for any subsets A_1, …, A_r of 𝔽 with sizes |A_i| ≥ d_i + 1, f does not vanish on A_1×…× A_r, i.e. f(a_1,…,a_r) ≠ 0 for some a_i ∈ A_i. Notably, in most applications of Combinatorial Nullstellensatz the condition deg f ≤ d_1 + … + d_r from Theorem <ref> turns out to be sufficient and thus the more general Theorem <ref> is not needed. Below we give a rare example of an application in which the full power of Theorem <ref> is essential.
§ THE FRAMEWORK For subsets B_1, …, B_r of a field 𝔽 denote the set of zeros of a polynomial f ∈𝔽[x_1, …, x_r] on B_1 ×…× B_r as Z(f; B_1,…, B_r) := { (a_1, …, a_r) ∈ B_1 ×…× B_r | f(a_1, …, a_r) = 0 }. In the case B_1 = … = B_r = B we will write Z(f; B, r) instead of Z(f; B_1, …, B_r). The set Z(f; B_1,…, B_r) can be viewed as the set of edges of an r-partite r-uniform hypergraph H(f; B_1, …, B_r) with parts B_1, …, B_r. Our key observation is the following lemma which immediately follows from Theorem <ref>. Let x_1^d_1⋯ x_r^d_r be a maximal monomial of f. Then for any subsets B_1, …, B_r of 𝔽 the hypergraph H(f; B_1,…, B_r) is free of copies of K^(r)_d_1 + 1, …, d_r + 1. This lemma gives us a new tool for constructing lower bounds on ex(n, K^(r)_s_1, …, s_r). In Section <ref> we give a simple example of such a construction for ex(n, K^(r)_2, …, 2) which asymptotically matches (<ref>) when r ≤ 4. Combining Lemma <ref> with (<ref>), we also get the following Schwartz–Zippel type corollary, which may be of independent interest. Let x_1^d_1⋯ x_r^d_r be a maximal monomial of f, where d_1 ≤…≤ d_r. Then for any subsets B_1, …, B_r of 𝔽 with sizes |B_i| = n, | Z(f; B_1, …, B_r) | = O(n^{r - 1/((d_1 + 1) ⋯ (d_{r-1} + 1))}). The described framework was also recently discussed in an article by Rote (see Section 8 in <cit.>). § CONSTRUCTION Here 𝔽_{p^r} is the finite field of size p^r and 𝔽_{p^r}^* = 𝔽_{p^r}∖{0}. Let p be a prime number, and let f ∈𝔽_{p^r}[x_1, …, x_r] be the following polynomial: f(x_1, …, x_r) = x_1 ⋯ x_r + ∑_{i = 1}^r ∏_{j = 1}^{r - 1} x_{i + j}^{p^r - p^j}, where indices are interpreted modulo r, i.e. x_{r + 1} = x_1, x_{r + 2} = x_2, etc. Then |Z(f; 𝔽_{p^r}^*, r)| = p^{r - 1} (p^r - 1)^{r - 1}. Note that for any a_1, …, a_r ∈𝔽_{p^r}^* we have a_i^{p^r} = a_i, so f(a_1, …, a_r) = a_1 ⋯ a_r (1 + ∑_{i = 1}^r ∏_{j = 0}^{r - 1} a_{i + j}^{-p^j}) = a_1 ⋯ a_r (1 + Tr(a_1^{-1} a_2^{-p}⋯ a_r^{-p^{r - 1}})), where Tr(a) = a + a^p + … + a^{p^{r - 1}} is the trace of the field extension 𝔽_{p^r} / 𝔽_p. Now let us fix a_2, …, a_r ∈𝔽_{p^r}^*. As a_1 runs over all values of 𝔽_{p^r}^*, so does a_1^{-1} a_2^{-p}⋯ a_r^{-p^{r - 1}}. There are exactly p^{r - 1} elements a ∈𝔽_{p^r}^* for which Tr(a) = -1, i.e. for any fixed a_2, …, a_r there are exactly p^{r - 1} values of a_1 for which f(a_1, …, a_r) = 0. For any r ≥ 2, ex(n, K^(r)_2, …, 2) = Ω(n^{r - 1/r}). Note that x_1 ⋯ x_r is a maximal monomial of the polynomial f from Lemma <ref>. Thus, due to Lemma <ref>, a hypergraph H_p = H(f; 𝔽_{p^r}^*, r) with r(p^r - 1) vertices and p^{r - 1} (p^r - 1)^{r - 1} edges is free of copies of K^(r)_2, …, 2 for every prime p, which gives the desired bound. § CONCLUDING REMARKS The construction from Section <ref> in the case r = 3 is structurally similar to the one given by Katz, Krop and Maggioni in <cit.>. Their construction can be generalized to higher dimensions giving an alternative proof of Theorem <ref> (private communication with Cosmin Pohoata; see also Proposition 11.2 in <cit.>). Our approach gives a simpler construction and a much shorter proof. Motivated by the ideas discussed in Section <ref>, Rote posed a problem (see Problem 1 in <cit.>), equivalent to asking how large the set Z(f; B_1, B_2) can be for a polynomial of the form f(x, y) = xy + P(x) + Q(y) and sets B_1, B_2 of size n each. Lemma <ref> answers this question asymptotically if sets B_1 and B_2 are allowed to be taken from the finite field 𝔽_{p^2}. § ACKNOWLEDGEMENTS I would like to thank Danila Cherkashin and Fedor Petrov for helpful discussions, and Günter Rote for useful comments on a draft of this note.
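As a sanity check of Lemma <ref>, the smallest instance p = 2, r = 2 can be verified by brute force. The sketch below is plain Python written for this note rather than code from the paper: it encodes 𝔽_4 elements as 2-bit integers reduced modulo the irreducible polynomial x^2 + x + 1 and counts the zeros of f(x_1, x_2) = x_1 x_2 + x_1^2 + x_2^2 on (𝔽_4^*)^2; the expected count is p^{r-1}(p^r - 1)^{r-1} = 6.

```python
# GF(4) arithmetic with elements encoded as 2-bit integers (bit i = coefficient of x^i),
# reduced modulo the irreducible polynomial x^2 + x + 1 (0b111). Addition is XOR.
def gf4_mul(a, b):
    res = 0
    for i in range(2):
        if (b >> i) & 1:
            res ^= a << i
    if res & 0b100:      # reduce the degree-2 term: x^2 = x + 1
        res ^= 0b111
    return res

def gf4_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf4_mul(r, a)
    return r

def f(x1, x2):
    # The r = 2, p = 2 instance of the construction: f(x1, x2) = x1*x2 + x2^2 + x1^2 over GF(4).
    return gf4_mul(x1, x2) ^ gf4_pow(x2, 2) ^ gf4_pow(x1, 2)

units = [1, 2, 3]        # the three non-zero elements of GF(4)
zeros = sum(1 for a1 in units for a2 in units if f(a1, a2) == 0)
print(zeros)             # prints 6 = p^(r-1) * (p^r - 1)^(r-1)
```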
http://arxiv.org/abs/2307.04368v2
20230710064918
ECS -- an Interactive Tool for Data Quality Assurance
[ "Christian Sieberichs", "Simon Geerkens", "Alexander Braun", "Thomas Waschulzik" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.SY", "eess.SY" ]
ECS -- an Interactive Tool for Data Quality Assurance Christian Sieberichs, Simon Geerkens, Alexander Braun, Thomas Waschulzik ================================================ With the increasing capabilities of machine learning systems and their potential use in safety-critical systems, ensuring high-quality data is becoming increasingly important. In this paper we present a novel approach for the assurance of data quality. For this purpose, the mathematical basics are first discussed and the approach is presented using multiple examples. This results in the detection of data points with potentially harmful properties for use in safety-critical systems. § INTRODUCTION The development of machine learning (ML) based systems has led to widespread use in research, industry as well as in everyday life. Even though ML systems show great performance in solving complex tasks, their use is mostly limited to domains where wrong decisions have only minor consequences. The application of ML systems in high-risk domains is currently problematic due to the required quality, the lack of trustworthiness and the expected legal basis. To provide a legal framework for the application of ML systems, the European AI Act <cit.> is currently under development. Simultaneously, multiple projects from research and industry are dealing with the topic of ML systems in high-risk areas, such as "KI-Absicherung" <cit.> and "safetrAIn" <cit.>. All of those projects highlight the high requirements that are needed to protect humans from errors made by ML systems. High-risk ML systems have to fulfill the requirements according to <cit.> Chapter 2 "REQUIREMENTS FOR HIGH-RISK AI SYSTEMS" Article 10 "Data and data governance" Point 3: "Training, validation and testing data sets shall be relevant, representative, free of errors and complete". In this paper we introduce a new approach that will contribute to the future fulfillment of this requirement. It is showcased how different relevant aspects of the data can be analysed and how relations within the given data can be used for quality assurance. The presented approach is part of the QUEEN-method (Qualitätsgesicherte effiziente Entwicklung vorwärtsgerichteter künstlicher Neuronaler Netze, quality-assured efficient development of neural networks) <cit.>, which is a comprehensive approach for the development of quality-assured neural networks. In the scope of the QUEEN-method, two data quality assurance methods were developed, namely the integrated quality indicator <cit.> and ECS (equivalent classes sets) <cit.>. These methods were developed simultaneously in close cooperation. In this paper we want to show the mathematical basis and the use of ECS on the topic of quality assurance. The abilities and usage of the integrated quality indicator are covered in another submission <cit.>. The ECS is particularly used to analyse the local and global composition of data sets. Based on this, a wide variety of data quality properties is addressed, ranging from the identification of single data points such as outliers, false annotations or isolated data points, to the identification of groups of data points such as decision boundaries and local groups of data points with identical output. The ECS makes it possible to identify all data points which do not match specifiable conditions. The method itself is designed in such a way that interaction between the user and the data is supported in order to simplify and speed up the quality assurance process.
§ RELATED WORK/STATE OF THE ART Despite the fact that data quality and quality assurance are widely necessary and researched, there exists no single general accepted definition. Instead, there are several attempts to define data quality based on current developments. One example is given by <cit.> who define data quality with respect to the intended use of the data. It is argued that data quality has to be a context dependent term to be appropriately used in the context of a given tasks. In addition, the term of "data quality" is split into multiple properties like accuracy, consistency, completeness, safety and more. In <cit.> many of these properties are listed and defined separately. In <cit.> data quality is additionally split into subjective and objective assessments of data quality. A general definition on data quality can thereby not be given. Instead is high data quality considered to be data which is fit for its intended purpose <cit.>. If data quality is used in standards it is typically split into the different properties which have to be analysed separately like in <cit.>. A first step to assure the data quality is the use of descriptive statistics <cit.>. Herein statistical methods are used to gain greater insights into the given data. Common methods are the visualization via scatter plots and histograms, often combined with the measurement of central tendencies, dispersion and location parameters. Our proposed method extends the descriptive statistical methods, enables the visualization of multiple quality assurance aspects in one plot and enables a direct interaction between the visualization of the quality indicator visualization and the data. When trying to assure the quality of data, another possible approach is the representation of given data points in lower dimensial space using methods of dimensinality reduction. Commonly used methods are PCA <cit.>, tSNE <cit.> or UMAP <cit.>. These methods often produce representations interpretable by humans if the output dimensionality is chosen to be low enough. However, such methods often result in considerable loss of information. The ECS on the other hand is computed on the original values and takes all the given information into account. Some approaches try to cover as many dimensions of the data quality as possible. One way to do this is by testing the data against predefined rules and assumptions. An example of such an approach is the pointblank R package <cit.> which is created for an agent based data quality assurance. In this package, specific elements of the data are tested against predefined functions. As part of this it can be tested if the data is greater, equal, lower and so on. Another method is given by DEEQU published in <cit.> and <cit.>. This package allows for assumption based unit tests which can be defined by the user. Tests on specific parameters of the data, similar to those already mentioned with regard to the pointblank package, are possible as well. A last method that should be mentioned here is shown in <cit.>. This approach showcases a probability-based method which calculates a value representing the probability that a data set is free of internal errors with respect to entered rules. The entered rules are based on the presence of data of certain values, comparable to the package pointblank. The main problem in using the mentioned approaches is the large amount of required knowledge about the data to create accurate assumptions. 
On top of this, the efficient creation of assumptions is only given if the user is aware that the data quality is influenced in some regards. Due to the reliance on the relationship between data points, our methods do not need any assumptions or rules that are to be specified by an user. Instead our approach can be used without any knowledge about the data. A different approach is the focus on just a single dimension of the data quality. On the topic of outlier detection these are for example density-based algorithms like <cit.>. In this approach, the amount of local neighbouring data points is calculated and the thereby generated local density is compared with the nearest neighbours. Another approach is using the DBSCAN algorithm <cit.> to cluster the given data. Based on this clustering the method proposed by <cit.> calculates values to identify clusters of minimal sizes. These clusters are then regarded as possible anomalies. Another data quality property is the detection of possible outliers, which can also be solved by density based clustering. One example of such an algorithm is given by <cit.>. This algorithm uses a fixed clustering to identify clusters followed by the computation of cluster distances. The clusters are classified as anomalous based on the inter-cluster distances and the deviation from the mean inter-cluster distance. Two quite similar approaches are <cit.> and <cit.>. Both approaches use a clustering of the given data in a first step. The first one uses the previously mentioned DBSCAN, the second one uses a cluster algorithm named OPTICS <cit.>. In a second step, anomalous clusters are identified, once based on inverse distance weighting (IDW) and once using the kringing method. The main advantage of all of the mentioned methods is the reliable calculation of their data quality property. However, due to the methods focus on one specific data quality property, they are only useful if the assumption exists that this property could contain errors. The advantage of our proposed method is that multiple data quality properties may be analyzed with one approach. § METHOD The ECS is based on the idea that a data set can to be split into input data and output data. The input data defines the dimensions of the data, henceforth are called features, which can be used to predict the output features. The amount of all possible inputs creates the input space I. Accordingly is the output space O created by all possible outputs. To use the ECS properly all feature values have to be numbers. Features which are not created by numbers have to be represented in some way as a number or a combination of numbers. To start the calculation of the ECS two metrics are needed. These metrics should be chosen in such a way that "similar" data points according to the semantics of the task that has to be solved have a relatively small distance to each other. At the same time "dissimilar" data points should have a relatively large distance. The distances between two data points can be calculated in the input space and in the output space independent from each other. By doing so, it is possible to use different metrics for the distances in I and in O. Which metric is best suited for the data set depends on the given type of data and the task to be be solved. In the following, the difference between data points in the input space is named input distance d_RI. Accordingly, the difference between data points in the output space is called output distance d_RO. 
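As an illustration, the sketch below computes the pairwise distance matrices for d_RI and d_RO for a data set, using the Euclidean metric in both spaces purely as an example; as discussed above, the appropriate metrics are task dependent, and the function name is ours.

```python
import numpy as np

def pairwise_distances(values):
    """All pairwise distances within a set of points (Euclidean metric as an example)."""
    diffs = values[:, None, :] - values[None, :, :]
    return np.linalg.norm(diffs, axis=-1)

# X: (n, input_dim) input features, Y: (n, output_dim) output features
# d_in = pairwise_distances(X)    # input distances d_RI
# d_out = pairwise_distances(Y)   # output distances d_RO
```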
To differentiate between "similar" and "dissimilar" data points, the distances can be separated into different groups. The minimal approach is to create two groups: one for relatively small distances and another for relatively large distances. Doing so requires a threshold, which is called δ_in for distances in the input space and δ_out for distances in the output space. These δ can be absolute distance values or a percentage of the maximum known distance between data points. They are set based on the data quality properties that should be identified and the data type used. By comparing two data points with each other, four possible scenarios can be distinguished: * small input distance - small output distance * small input distance - large output distance * large input distance - small output distance * large input distance - large output distance Each of these scenarios shows a relation between the data points. If, for example, the distances are both small, then the data points may showcase a common use case with a typical output. A small input distance in combination with a large output distance, on the other hand, could showcase complex areas in the input space or an outlier. Either way, the identification of data properties based on two data points is not enough. Due to this, the following four ECS-sets are calculated. These sets store the compared data points belonging to each of the above scenarios. ECS_EE(D) := {d_c | d_c ∈ D^2 ∧ d_RI(d_c) ≤ δ_in ∧ d_RO(d_c) ≤ δ_out} ECS_EU(D) := {d_c | d_c ∈ D^2 ∧ d_RI(d_c) ≤ δ_in ∧ d_RO(d_c) > δ_out} ECS_UE(D) := {d_c | d_c ∈ D^2 ∧ d_RI(d_c) > δ_in ∧ d_RO(d_c) ≤ δ_out} ECS_UU(D) := {d_c | d_c ∈ D^2 ∧ d_RI(d_c) > δ_in ∧ d_RO(d_c) > δ_out} Each of the four ECS-sets represents all comparisons between data points which result in one of the four scenarios. Here, an E indicates a small distance whereas a U indicates a large one. The first of the two letters of the ECS-sets represents the input distance and the second represents the output distance. Following this, ECS_EU contains all data point comparisons which result in a small input and a large output distance. The ECS_UE, on the other hand, contains comparisons which result in a large input and a small output distance. The information of the ECS-sets can be used to analyse the data points for each of the four scenarios. This way, it is possible to identify data points with specific properties. It would, for example, be possible to identify all data points which have many dissimilar data points in close proximity. It would also be possible to identify data points which showcase small distances in the input and the output space. By doing so, certain areas of the input space can be identified which correlate with certain outputs in the output space. It could also be possible to identify features which differentiate certain data points from each other. The ECS-sets contain all information of the data set which could be used for quality assurance. However, the formatting of the sets is difficult for humans to read. This is especially the case when entire data sets are to be analysed and not just a small subset of the data. The solution is a comprehensive representation of the ECS-sets in such a way that interesting data points can easily be identified. Before this can be done, it has to be determined which combinations of data points are the most interesting ones. The expectation here would be that similar input data creates output data that is related in some way.
Based on this expectation, it can be assumed that a combination of data points with a small input distance also has a small output distance. Conversely, data points with large input distances to each other would not be expected to show similar output data. The most interesting combinations of data points are therefore those with a small input distance. These comparisons can be displayed in particular by sorting the data point comparisons by their input distance. An example of this sorted representation of the ECS_EE is shown on the right side of figure <ref>. The x-axis lists the comparisons between data points; the representation is per data point and shows, for any data point, the comparison with the kth smallest distance in the input space. The y-axis displays how many of these comparisons are part of the current ECS-set, in this case the ECS_EE. In this process, a function is created for every data point. To represent the entire data set, these functions are superimposed on each other. Every function visually displays whether, and which of, the nearest data points are part of the ECS_EE. The data set used to create the displayed ECS_EE is shown on the left side of figure <ref>. It is a simple data set consisting of two input features (a, b) and one output feature (colour and shape). Increasing functions indicate that most of the data points with the kth smallest distance are part of the current ECS-set; functions that do not increase indicate that the comparisons are part of another ECS-set. It should be emphasized that the function at the kth position increases for exactly one of the four ECS-sets. The resulting ECS-histograms consist of a large number of functions. Areas of the ECS-histogram in which many functions show the same behaviour are displayed darker, and areas with fewer functions are displayed brighter. Gamma correction is used for the representation of the number of functions, so that even single functions remain visible. As stated before, a small input distance is expected to influence the output distance. An ideal data set would have a strong correlation between the position in the input space and the position in the output space. The resulting data point combinations would have small input and output distances for all the nearest neighbouring data points. This would lead to a steep increase of all functions in the ECS_EE until all possible similar data points have been combined with each other; from this point on, the functions in the ECS_EE do not increase any further. The ECS_EE function created by a single data point in such an ideal data set is shown schematically in figure <ref>. The main diagonal displays the maximum speed at which a function can increase. Data sets or individual data points that are not ideal create different functions. One extreme example would be a function that does not increase at all in the ECS_EE. This happens when there are no combinations with a small input distance, no combinations with a small output distance, or neither. Which of these possibilities is actually the case can be tested using the other ECS-histograms. The benefit of representing the information of the data set in the form of ECS-histograms is that the information is presented per data point: the neighbours of each data point and their relation to it are shown and can be compared against the stated expectations.
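One way to compute the function that a single data point contributes to an ECS-histogram is sketched below: its comparisons are ordered by increasing input distance and membership in the chosen ECS-set is counted cumulatively. The helper name ecs_curve and the use of numpy are our assumptions; the sketch reuses the distance matrices from the earlier example.

```python
def ecs_curve(i, d_RI, d_RO, delta_in, delta_out, which="EE"):
    """Cumulative count of comparisons of data point i that fall into the chosen
    ECS-set, ordered by the k-th smallest input distance."""
    order = np.argsort(d_RI[i])      # neighbours sorted by input distance
    order = order[order != i]        # drop the self-comparison
    small_in = d_RI[i, order] <= delta_in
    small_out = d_RO[i, order] <= delta_out
    membership = {
        "EE": small_in & small_out,
        "EU": small_in & ~small_out,
        "UE": ~small_in & small_out,
        "UU": ~small_in & ~small_out,
    }[which]
    return np.cumsum(membership)     # value at index k-1: count among the k nearest

curve_ee = ecs_curve(0, d_RI, d_RO, 0.3 * d_RI.max(), 0.0, which="EE")
```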
This representation makes it fairly easy to identify functions that do not meet the expectations. The reason why a function does not behave as it should is intrinsically given by the combination of its behaviour and the ECS-set in question. Additionally, it would be possible to define limits in the ECS-histogram that can be tested automatically. § APPLICATION We showcase the abilities of the ECS by applying it to two different data sets, and at the same time show how the ECS can be used explicitly to detect certain data quality properties. With this in mind, we created a data set that serves as an example. It is constructed such that the properties detected by the ECS can be verified by displaying the critical data points. In addition, the ECS is applied to the MNIST data set <cit.> to demonstrate the detection of data quality properties on a commonly known example. The application to both data sets focuses on the data quality properties created by outliers, isolated data points and local groups of data points with identical output values. §.§ ECS on point cloud To demonstrate the usage of the ECS, an artificial data set is created, displayed on the left side of figure <ref>. This data set is similar to the one displayed in figure <ref>. The most important difference is that the clustered data points cannot be clearly separated from each other because the clusters partially overlap. The data set contains 1000 data points grouped into four clusters. As in figure <ref>, each cluster has a different number and a different density of data points. The data set was created this way because it demonstrates a simple classification task while properties such as outliers and local groups with identical output are present and can be visualized. In the following, it is shown how these properties can be identified using the ECS. All ECS-histograms are created using a δ_in of 0.3 times the maximum distance in the input space and a δ_out of 0 to differentiate between all differing outputs. §.§.§ outliers An outlier is a data point that has an unexpected output for the given input; this output is typically very different from the output that would be expected. Here, an outlier is only considered to vary in the output space; unwanted variations in the input space are treated in the following section. The reasons for an outlier can differ: the output may be wrong, or the data point may represent a rare but correct input. By their nature, outliers appear in areas that are dominated by data points with a different output. Given this, an outlier has close neighbours with large output distances, and the ECS_EU is used to identify these cases. Functions in the ECS_EU increase if there are data point combinations with a small input distance and a large output distance. Functions that already increase for the nearest neighbours can thus be regarded as outliers, and by targeting these functions the corresponding outliers can be identified. How many such combinations among how many nearest neighbours should be part of the ECS_EU depends on the given data set. In the given point cloud example, 100 nearest neighbours were chosen as sufficient to represent the local neighbourhood. If more than 70 of these 100 combinations have a large output distance, the data point is regarded as an outlier. The ECS_EU is shown in figure <ref>.
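Under this rule, outlier detection reduces to reading off the ECS_EU curves. The sketch below reuses the hypothetical ecs_curve helper from above; the neighbourhood size of 100 and the threshold of 70 are the values chosen for the point cloud example, not fixed constants of the method.

```python
def detect_outliers(d_RI, d_RO, delta_in, delta_out, k=100, threshold=70):
    """Flag points for which more than `threshold` of the `k` nearest input-space
    neighbours have a large output distance (i.e. the ECS_EU curve is high)."""
    outliers = []
    for i in range(d_RI.shape[0]):
        eu = ecs_curve(i, d_RI, d_RO, delta_in, delta_out, which="EU")
        if eu[min(k, len(eu)) - 1] > threshold:
            outliers.append(i)
    return outliers

outlier_idx = detect_outliers(d_RI, d_RO, 0.3 * d_RI.max(), 0.0)
```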
In the figure, the area of importance in which the functions of outliers appear is highlighted by a rectangle. Some data points in the point cloud are highlighted, which means they are considered to be outliers. It can also be noticed that most of the functions do not increase by much. §.§.§ isolated data points Isolated data points are data points that have a large input distance to many or all of their nearest neighbours. This means that such a data point represents an input that is rare or possibly wrong. In the literature, this type of data point is often referred to as "out-of-distribution data". The ECS_UE and the ECS_UU are used to identify data points that have large distances to their nearest neighbours. Both ECS-sets store combinations with large input distances; the difference between the two histograms lies only in the size of the output distance, which is not relevant for isolated data points. The corresponding functions of isolated data points increase very early in the ECS-histograms. Most of the time, the functions increase in the ECS_UE as well as in the ECS_UU, because the nearest neighbours themselves may have large input distances to each other and thereby show very different outputs. The earlier a function increases, the fewer data points exist in the local area of an isolated data point. How many neighbouring data points should exist depends on the given task and data set. Typically, every data point should have at least a few neighbouring data points with a small input distance. If many data points have no near neighbours, an adjustment of the parameter δ_out can be considered. Using the ECS_UE and the ECS_UU, it can be stated that there are no isolated data points in the current data set with fewer than 50 close neighbours. This is consistent with the fact that the sample data set used here was created as clusters of data points. §.§.§ local groups of identical output A local group with identical output is a structure created by multiple data points. All data points in such a group have small distances to each other with respect to both the input and the output distances; apart from possible outliers or false data points, no larger amounts of data points with a large output distance are present. The identification of these groups demonstrates the ability of the used metric to differentiate between different outputs on the basis of the corresponding inputs. This means that the input data of a group share similar features, which in turn enables the differentiation, and it would be possible to solve the given task at least for these groups based on these similar features. The combination of small input distances and small output distances can be identified using the ECS_EE. The functions corresponding to data points that are part of a local group with identical output increase strongly, and they keep increasing as long as data points with small distances in the input and output space exist in the data set. These strongly increasing functions reveal every data point that is part of such a group. Using the ECS_EE, it is also possible to identify groups of different sizes. This can be done by selecting the functions that increase the strongest for different numbers of neighbours: if a function has increased up to the chosen amount, there is a minimum of this number of data points in the group.
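The two remaining properties can be read off the curves in the same way: a data point counts as isolated when comparisons with a large input distance already appear among its first close neighbours, and as a member of a local group of size g when its ECS_EE curve has (almost) reached g after g neighbours. The helpers below are again our own sketch, reusing ecs_curve; the tolerance of 5 mirrors the "95 out of 100" rule used in the following example.

```python
def detect_isolated(d_RI, d_RO, delta_in, delta_out, min_close_neighbours=50):
    """Flag points that do not have `min_close_neighbours` neighbours within the
    input-space threshold delta_in."""
    isolated = []
    for i in range(d_RI.shape[0]):
        ue = ecs_curve(i, d_RI, d_RO, delta_in, delta_out, which="UE")
        uu = ecs_curve(i, d_RI, d_RO, delta_in, delta_out, which="UU")
        k = min(min_close_neighbours, len(ue))
        # UE + UU count comparisons with a *large* input distance; if any appear
        # among the first k neighbours, fewer than k close neighbours exist.
        if ue[k - 1] + uu[k - 1] > 0:
            isolated.append(i)
    return isolated

def detect_group_members(d_RI, d_RO, delta_in, delta_out, group_size=100, tolerance=5):
    """Flag points whose ECS_EE curve reaches at least group_size - tolerance
    within group_size neighbours (members of a local group with identical output)."""
    members = []
    for i in range(d_RI.shape[0]):
        ee = ecs_curve(i, d_RI, d_RO, delta_in, delta_out, which="EE")
        g = min(group_size, len(ee))
        if ee[g - 1] >= g - tolerance:
            members.append(i)
    return members
```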
In the given case in figure <ref>, groups with 100 data points and identical output are to be identified. The area of importance in the ECS_EE is marked by a rectangle in the upper right corner. Note that not only the functions increasing the strongest were marked, but also some functions that increase a little more slowly. This was done to make the identified groups more robust against false data points and outliers; in the given case, it means that functions reaching 95 out of 100 data points are also regarded as local groups with identical output. In addition, most of the functions in figure <ref> are increasing, which indicates that many data points are arranged in local groups. This is plausible because the point cloud was created as four groups of clustered data points. The detected data points that are part of a local group are marked on the left side of figure <ref>. §.§ ECS on MNIST In contrast to the previous example, MNIST is a data set that was created to represent the specific task of classifying handwritten digits. The data set consists of 60000 images of size 28×28 serving as the input, and as many numbers between zero and nine classifying the input. The most important difference between the previously used point cloud and the MNIST data set is the amount of data and input features. The much greater amount leads to many more functions in the ECS-histograms, which therefore become more complicated. This can be counteracted by applying more specific metrics to the data. Here, the pixel-wise Euclidean distance is chosen as the metric. The Euclidean distance is typically not used on images due to its poor performance, but in the case of MNIST this metric is applicable, as the image pixels are given as centred grayscale values. It can be shown that the abilities of the ECS are preserved under the Euclidean metric. Another problem that appears when using data with many features is the curse of dimensionality, through which all distances move closer to each other. As a result, a larger δ_in of 0.75 times the maximum distance in the input space is used in the following. The δ_out is still 0 to differentiate between all differing outputs. To visualize the input data of the MNIST data set, a representation created with UMAP <cit.>, a dimensionality reduction method, is used in the following sections. The clusters created this way are marked with a number to indicate the corresponding output. The ECS itself is computed on the original MNIST input and output data. §.§.§ outliers As shown in the section "ECS on point cloud - outliers", the ECS_EU is used to identify outliers. The ECS_EU of the MNIST data set for the nearest 200 neighbours is shown in figure <ref>. As mentioned before, this representation contains many more functions. These are too dense for any single function to be identified without interaction, but it can be noticed that most functions do not increase by much, as indicated by the darker visualization. The number of functions (|F|) that increase to a specific value of fulfillment (v_f) within the 200 nearest neighbours is shown in the following table <ref>:
v_f      |F|
101-200  6021
51-100   7337
11-50    14914
0-10     31728
0        16813
Number of data point comparisons that are part of the ECS_EU for the 200 nearest neighbours.
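The counts in this table can be reproduced by evaluating every data point's ECS_EU curve at the 200th neighbour and bucketing the resulting fulfillment values (bucket boundaries taken from the table; the row for exactly 0 overlaps the 0-10 bucket). For the full 60000-image data set one would compute neighbours lazily rather than materializing a 60000×60000 distance matrix; the sketch below, reusing the ecs_curve helper, only illustrates the bookkeeping.

```python
from collections import Counter

def fulfillment_histogram(d_RI, d_RO, delta_in, delta_out, k=200):
    """Bucket the ECS_EU fulfillment value v_f of every data point at neighbour k."""
    buckets = Counter()
    for i in range(d_RI.shape[0]):
        eu = ecs_curve(i, d_RI, d_RO, delta_in, delta_out, which="EU")
        v = int(eu[min(k, len(eu)) - 1])
        if v <= 10:    buckets["0-10"] += 1
        elif v <= 50:  buckets["11-50"] += 1
        elif v <= 100: buckets["51-100"] += 1
        else:          buckets["101-200"] += 1
        if v == 0:
            buckets["exactly 0"] += 1
    return buckets
```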
It is noticeable that more than half of the given data points have at most 10 combinations showing a small input distance and a large output distance. On the other hand, there are more than 6000 data points for which half of the nearest 200 neighbours have a large output distance. Not all of these are outliers; some may be positioned between classes, others may have badly assigned distances. To identify outliers, only functions performing worse than a random assignment of distances are used. This means that all data points having more than 180 data points with a large output distance among the nearest 200 neighbours are interpreted as outliers. By choosing these functions, 804 data points were identified as outliers. In figure <ref>, a random sample of nine of these outliers is displayed. All of these data points look strange, and most could also be mistaken for a different number. It would, for example, be possible to remove these data points from the MNIST data set to achieve a higher data quality. §.§.§ isolated data points The identification of isolated data points in MNIST is identical to the identification in the point cloud. The ECS_UE and ECS_UU used are shown in figure <ref>. According to the ECS_UE, 129 data points have fewer than 200 close neighbouring data points; according to the ECS_UU, the number is 132. Most of these data points appear in the ECS_UE as well as in the ECS_UU. Due to the relatively small number of increasing functions, the histogram is rendered darker. The earliest functions start increasing in the ECS_UE and ECS_UU at fewer than 10 neighbours. The input data of the earliest increasing functions is shown in figure <ref>. One function in the ECS_UE is noticeable due to its steep and early increase; the corresponding input data is shown in figure <ref> in the third image from the left. This data point has a large distance to its closest neighbours, yet most of these neighbours have the same output "4". This leads to the conclusion that the data point still has some of the most important features correlated with the output "4", even though it is very isolated. Overall, it is noticeable that most of the isolated data points shown use many input pixels to display the digit, which is not often the case in the MNIST data set. In addition, the pixel-wise Euclidean distance reacts particularly to pixel-wise differences by assigning higher distances in the input space. §.§.§ local groups of identical output The ECS_EE, which is used for the identification of local groups of data points with identical output, is shown on the right side of figure <ref> for the nearest 500 neighbours. As in the detection of outliers, the number of functions is much larger than in the point cloud example. It is again not possible to identify single functions, but overall trends of the functions can be observed. Most of the functions increase very steeply, which means that most data points have small input as well as output distances to their nearest neighbours. This in turn means that most data points are located in local groups with identical output. The number of data points that should be part of a group can be varied by using different numbers of neighbours in the ECS_EE. In table <ref>, the number of data points (|dp|) that belong to local groups of different sizes (gs) is shown.
These counts were obtained by allowing a maximum of 5 data points with a different output, which could exist due to outliers. Noticeably, most of the data points are located in groups of a few hundred data points, but more than 4000 data points could still be detected that are part of local groups with more than 1500 data points.
gs    |dp|
100   38383
200   27351
500   14745
1000  7851
1500  4329
Number of data points that are part of local groups with identical output of different sizes. The entire data set contains 60000 data points. The positions of these data points are highlighted in dark in the UMAP representation in figure <ref>. It can be noticed that especially data points with an output of 1, but also of 0 and 6, form local groups of identical output. This means that the used metric is able to differentiate these data points from each other. The local groups that are identified can then be used to solve the given task, based on their location in the input space. § CONCLUSION In this paper, we presented a novel approach for data quality assurance based on local similarities. It was shown how the ECS is calculated and how it can be used on an artificial example. The presented procedure was then used to detect data quality properties on the MNIST data set. Besides the possibility to detect outliers, isolated data points and local groups with similar output, the versatile applicability of the ECS has been shown. The ECS could also be used to validate quantitative data set requirements for data quality properties, which can state the minimum number of elements per group, the number of outliers or a maximum number of local groups. Some of these properties, such as the number of accepted outliers in local groups, may depend on associated safety requirements and the required safety integrity level.
http://arxiv.org/abs/2307.05541v1
20230708192609
High Fidelity 3D Hand Shape Reconstruction via Scalable Graph Frequency Decomposition
[ "Tianyu Luan", "Yuanhao Zhai", "Jingjing Meng", "Zhong Li", "Zhang Chen", "Yi Xu", "Junsong Yuan" ]
cs.CV
[ "cs.CV" ]
High Fidelity 3D Hand Shape Reconstruction via Scalable Graph Frequency Decomposition Tianyu Luan^1 Yuanhao Zhai^1 Jingjing Meng^1 Zhong Li^2 Zhang Chen^2 Yi Xu^2 Junsong Yuan^1 ^1State University of New York at Buffalo        ^2OPPO US Research Center, InnoPeak Technology, Inc. {tianyulu,yzhai6,jmeng2,jsyuan}@buffalo.edu {zhong.li,zhang.chen,yi.xu}@oppo.com =================================================================================================================================================================================================================================================================================================== Despite the impressive performance obtained by recent single-image hand modeling techniques, they lack the capability to capture sufficient details of the 3D hand mesh. This deficiency greatly limits their applications when high-fidelity hand modeling is required, , personalized hand modeling. To address this problem, we design a frequency split network to generate 3D hand mesh using different frequency bands in a coarse-to-fine manner. To capture high-frequency personalized details, we transform the 3D mesh into the frequency domain, and propose a novel frequency decomposition loss to supervise each frequency component. By leveraging such a coarse-to-fine scheme, hand details that correspond to the higher frequency domain can be preserved. In addition, the proposed network is scalable, and can stop the inference at any resolution level to accommodate different hardware with varying computational powers. To quantitatively evaluate the performance of our method in terms of recovering personalized shape details, we introduce a new evaluation metric named Mean Signal-to-Noise Ratio (MSNR) to measure the signal-to-noise ratio of each mesh frequency component. Extensive experiments demonstrate that our approach generates fine-grained details for high-fidelity 3D hand reconstruction, and our evaluation metric is more effective for measuring mesh details compared with traditional metrics. The code is available at <https://github.com/tyluann/FreqHand>. § INTRODUCTION High-fidelity and personalized 3D hand modeling have seen great demand in 3D games, virtual reality, and the emerging Metaverse, as it brings better user experiences, , users can see their own realistic hands in the virtual space instead of the standard avatar hands. Therefore, it is of great importance to reconstruct high-fidelity hand meshes that can adapt to different users and application scenarios. Despite previous successes in 3D hand reconstruction and modeling<cit.>, few existing solutions focus on enriching the details of the reconstructed shape, and most current methods fail to generate consumer-friendly high-fidelity hands. When we treat the hand mesh as graph signals, like most natural signals, the low-frequency components have larger amplitudes than those of the high-frequency parts, which we can observe in a hand mesh spectrum curve (<ref>). Consequently, if we generate the mesh purely in the spatial domain, the signals of different frequencies could be biased, thus the high-frequency information can be easily overwhelmed by its low-frequency counterpart. Moreover, the wide usage of compact parametric models, such as MANO <cit.>, has limited the expressiveness of personalized details. 
Even though MANO can robustly estimate the hand pose and coarse shape, it sacrifices hand details for compactness and robustness in the parameterization process, so the detail expression ability of MANO is suppressed. To better model detailed 3D shape information, we transform the hand mesh into the graph frequency domain, and design a frequency-based loss function to generate high-fidelity hand mesh in a scalable manner. Supervision in the frequency domain explicitly constrains the signal of a given frequency band from being influenced by other frequency bands. Therefore, the high-frequency signals of hand shape will not be suppressed by low-frequency signals despite the amplitude disadvantage. To improve the expressiveness of hand models, we design a new hand model of 12,337 vertices that extends previous parametric models such as MANO with nonparametric representation for residual adjustments. While the nonparametric residual expresses personalized details, the parametric base ensures the overall structure of the hand mesh, , reliable estimation of hand pose and 3D shape. Instead of fixing the hand mesh resolution, we design our network architecture in a coarse-to-fine manner with three resolution levels U-net for scalability. Different levels of image features contribute to different levels of detail. Specifically, we use low-level features in high-frequency detail generation and high-level features in low-frequency detail generation. At each resolution level, our network outputs a hand mesh with the corresponding resolution. During inference, the network outputs an increasingly higher resolution mesh with more personalized details step-by-step, while the inference process can stop at any one of the three resolution levels. In summary, our contributions include the following. * We design a high-fidelity 3D hand model for reconstructing 3D hand shapes from single images. The hand representation provides detailed expression, and our frequency decomposition loss helps to capture the personalized shape information. * To enable computational efficiency, we propose a frequency split network architecture to generate high-fidelity hand mesh in a scalable manner with multiple levels of detail. During inference, our scalable framework supports budget-aware mesh reconstruction when the computational resources are limited. * We propose a new metric to evaluate 3D mesh details. It better captures the signal-to-noise ratio of all frequency bands to evaluate high-fidelity hand meshes. The effectiveness of this metric has been validated by extensive experiments. We evaluate our method on the InterHand2.6M dataset <cit.>. In addition to the proposed evaluation metrics, we also evaluate mean per joint position error (MPJPE) and mesh Chamfer distance (CD). Compared to MANO and other baselines, our proposed method achieves better results using all three metrics. § RELATED WORK Parametric hand shape reconstruction. Parametric models are a popular approach in hand mesh reconstruction. Romero <cit.> proposed MANO, which uses a set of shape and pose parameters to control the movement and deformation of human hands. Many recent works <cit.> combined deep learning with MANO. They use features extracted from the RGB image as input, CNN to get the shape and pose parameters, and eventually these parameters to generate hand mesh. These methods make use of the strong prior knowledge provided by the hand parametric model, so that it is convenient to train the networks and the results are robust. 
However, the parametric method limits the mesh resolution and details of hands. Non-parametric hand shape reconstruction. Non-parametric hand shape reconstruction typically estimates the vertex positions of a template with fixed topology. For example, Ge  <cit.> proposed a method using a graph convolution network. It uses a predefined upsampling operation to build a multi-level spectrum GCN network. Kulon <cit.> used spatial GCN and spiral convolution operator for mesh generation. Moon <cit.> proposed a pixel-based approach. However, none of these works paid close attention to detailed shapes. Moon <cit.> provided an approach that outputs fine details, but since they need the 3D scanned meshes of the test cases for training, their model cannot do cross-identity reconstruction. In our paper, we design a new hand model that combines the strength of both parametric and non-parametric approaches. We use this hand model as a basis to reconstruct high-fidelity hands. Mesh frequency analysis. Previous works mainly focused on the spectrum analysis of the entire mesh graph. Chung. <cit.> defines the graph Fourier transformation and graph Laplacian operator, which builds the foundation of graph spectrum analysis. <cit.> extends commonly used signal processing operators to graph space. <cit.> proposes a spectrum graph convolution network based on graph spectrum characteristics. The spectral decomposition of the graph function is used to define graph-based convolution. Recent works such as <cit.> widely use spectrum GCN in different fields. However, these works mainly focus on the analysis of the overall graph spectrum. In this paper, we use spectrum analysis as a tool to design our provided loss function and metric. § PROPOSED METHOD We propose a scalable network that reconstructs the detailed hand shape, and use frequency decomposition loss to acquire details. <ref> shows our network architecture. We design our network in the manner of a U-net. First, we generate a MANO mesh from image features from EfficientNet <cit.>. Based on the MANO mesh, we use a graph convolution network (green, yellow, and red modules in <ref>) to recover a high-fidelity hand mesh. In order to obtain high-frequency information, we use image features from different layers of the backbone network as a part of the GCN inputs. Specifically, at the low-resolution level, we take high-level image features as part of the input, and use a low-resolution graph topology to generate a low-resolution mesh. At medium and high-frequency levels, we use lower-level image feature through the skip connection to produce a high-resolution mesh. Note that at every resolution level, the network will output the intermediate hand mesh, so it would naturally have the ability for scalable inference. During the training process, we supervise both intermediate meshes and the final high-resolution mesh. We discuss the details in the following. §.§ High Fidelity 3D Hand Model We design our hand representation based on MANO <cit.>. MANO factorizes human hands into a 10-dimensional shape representation β and a 35-dimensional pose representation θ. MANO model can be represented as { M(θ, β) = W(T_P(θ, β), θ, w) T_P(θ, β) = T + B_S(β) + B_P(θ) . where W is the linear blend skinning function. Model parameter w is the blend weight. B_S and B_P are another two parameters of MANO named shape blend shape and pose blend shape, which are related to pose and shape parameters, respectively. 
MANO can transfer complex hand surface estimation into a simple regression of a few pose and shape parameters. However, MANO has limited capability in modeling shape detail. It is not only limited by the number of pose and shape dimensions (45), but also by the number of vertices (778). In our work, we designed a new parametric-based model with 12,338 vertices generated from MANO via subdivision. The large vertex number greatly enhances the model's ability to represent details. Subdivided MANO. To address this problem. We design an extended parametric model that can better represent details. First, we add detail residuals to MANO as M^'(θ, β, d) = W(T_P^'(θ, β, d), θ, w^'), T_P^'(θ, β, d) = T^' + B_S^'(β) + B_P^'(θ) + d, where, w^', T^', B_S^'(β), and B_P^'(θ) are the parameters our model, and d is the learnable per-vertex location perturbation. The dimension of d is the same as the number of vertices. Besides vertex residuals, we further increase the representation capability of our hand model by increasing the resolution of the mesh. Motivated by the traditional Loop subdivision<cit.>, we propose to design our parametric hand model by subdividing the MANO template. Loop subdivision can be represented as T^' = 𝐋_𝐬T, where, T is original template mesh with n vertices and m edges. T^' is the subdivided template mesh with n+m vertices. 𝐋_𝐬∈ℝ^(n+m)× m is the linear transformation that defines the subdivision process. The position of each vertex on the new mesh is only determined by the neighbor vertices on the original mesh, so 𝐋_𝐬 is sparse. We use similar strategies to calculate B_S and B_P. The MANO parameters map the input shape and pose into vertex position adjustments. These mappings are linear matrices of dimension x × n. Therefore, we can calculate the parameters as w^' = (𝐋_𝐬w^⊤)^⊤, B_S^' = (𝐋_𝐬B_S^⊤)^⊤, B_P^' = (𝐋_𝐬B_P^⊤)^⊤. We repeat the procedure twice to get sufficient resolution. <ref> shows example meshes from the new model in different poses (d is set to 0). We can see that our representation inherits the advantages of the parametric hand model. It has a plausible structure with no visual artifacts when the hand poses change. §.§ Hierachical Graph Convolution Network Our GCN network utilizes a multiresolution graph architecture that follows the subdivision process in Section <ref>. Different from the single graph GCNs in previous works <cit.>, our GCN network uses different graphs in different layers. At each level, each vertex of the graph corresponds to a vertex on the mesh and the graph topology is defined by the mesh edges. Between two adjunct resolution levels, the network uses the 𝐋_𝐬 in <ref> for upsampling operation. This architecture is designed for scalable inference. When the computing resources are limited, only the low-resolution mesh needs to be calculated; when the computing resources are sufficient, then we can calculate all the way to the high-resolution mesh. Moreover, this architecture allows us to explicitly supervise the intermediate results, so the details would be added level-by-level. §.§ Graph Frequency Decomposition In order to supervise the output mesh in the frequency domain and design the frequency-based metric, we need to do frequency decomposition on mesh shapes. Here, we regard the mesh as an undirected graph, and 3D locations of mesh vertices as signals on the graph. Then, the frequency decomposition of the mesh is the spectrum analysis of this graph signal. 
Following <cit.>, given an undirected graph 𝒢 = {𝒱, ℰ} with vertex set 𝒱 = {1,2,...,N} and edge set ℰ = {(i, j)}_i,j ∈𝒱, the Laplacian matrix is defined as 𝐋 := 𝐃 - 𝐀, where 𝐀 is the N × N adjacency matrix whose entries are the edge weights a_ij, and 𝐃 is the diagonal degree matrix whose ith diagonal entry is d_i = ∑_j a_ij. In this paper, the edge weights are defined as a_ij := 1 if (i,j) ∈ ℰ and a_ij := 0 otherwise, which means all edges have the same weight. We decompose 𝐋 using its spectral decomposition: 𝐋 = 𝐔Λ𝐔^⊤. Here, Λ is a diagonal matrix whose diagonal entries are the eigenvalues of 𝐋, and the columns of 𝐔 are the corresponding eigenvectors of 𝐋. Since the Laplacian matrix 𝐋 describes the fluctuation of the graph signal, its eigenvalues show how "frequent" the fluctuations are in each eigenvector direction. Thus, the eigenvectors with larger eigenvalues are defined as higher frequency bases, and the eigenvectors with smaller eigenvalues are defined as lower frequency bases. Since the column vectors of 𝐔 form an orthonormal basis of the graph signal space, following <cit.>, we define F(x) = 𝐔^⊤x to be the Fourier transform of a graph signal, and F'(x) = 𝐔x to be the inverse Fourier transform. This means that, given any graph function x ∈ℝ^N× d, we can decompose x into N frequency components: x = ∑_i=1^N𝐔_𝐢(𝐔_𝐢^⊤x), where 𝐔_𝐢∈ℝ^N × 1 is the ith column vector of 𝐔, d is the dimension of the graph signal on each vertex, and 𝐔_𝐢^⊤x is the frequency component of x on the ith frequency basis. Having <ref>, we can decompose a hand mesh into frequency components. <ref> shows an example of a groundtruth mesh and its frequency decomposition result. The x-axis lists the frequencies from low to high; the y-axis is the amplitude of each component on a logarithmic scale. It is easy to observe that the signal amplitude generally decreases as the frequency increases. <ref> shows the cumulative frequency components starting from frequency 0. We can see how the mesh shape changes as we gradually add higher frequency signals to the hand mesh. In general, the hand details increase as higher frequency signals are gradually included. §.§ Frequency Decomposition Loss Frequency decomposition loss. Conventional joint and vertex losses, such as the widely used per-joint error loss <cit.> and mesh per-vertex error loss <cit.> commonly used in human body reconstruction, and the Chamfer distance loss <cit.> commonly used in object reconstruction and 3D point cloud estimation, all measure the error in the spatial domain. In that case, the signals of different frequency components are aliased together. As shown in <ref>, the amplitudes of the low-frequency signals of a hand shape are much larger than those of the high-frequency signals, so when aliasing happens, the high-frequency signals get overwhelmed, which means direct supervision in the spatial domain mainly focuses on low-frequency signals. Thus, a spatial loss mostly does not drive the network to generate high-frequency details. Our experiments in <ref> also demonstrate this. To generate detailed information without being overwhelmed by low-frequency signals, we designed a loss function in the frequency domain.
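A minimal numerical sketch of the graph frequency decomposition described above, for a toy graph; numpy and a dense eigendecomposition are our tooling choices for illustration only — for the full hand mesh with over 12k vertices one would precompute 𝐔 once, ideally with a sparse eigensolver.

```python
import numpy as np

def graph_laplacian(num_vertices, edges):
    """Unweighted graph Laplacian L = D - A built from the mesh edge list."""
    A = np.zeros((num_vertices, num_vertices))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def frequency_components(L, V):
    """Decompose vertex signals V (N x d) into per-frequency components.
    Returns U (eigenvectors as columns, low to high frequency) and U^T V."""
    _, U = np.linalg.eigh(L)   # eigh returns eigenvalues in ascending order
    return U, U.T @ V

# Toy example: 4 vertices, arbitrary connectivity, 3D positions as the signal.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
V = np.random.rand(4, 3)
L = graph_laplacian(4, edges)
U, coeffs = frequency_components(L, V)   # coeffs[f] is the f-th frequency component
V_reconstructed = U @ coeffs             # inverse transform recovers V
```

In the loss defined next, the per-frequency coefficients computed in this way for the predicted and groundtruth meshes are compared frequency by frequency.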
Specifically, we use graph frequency decomposition (<ref>) to define our frequency decomposition loss as L_F = 1/F∑_f=1^Flog(𝐔_f^⊤V̂-𝐔_f^⊤V_gt^2/𝐔_f^⊤V̂𝐔_f^⊤V_gt + ϵ + 1), where F=N is the number of total frequency components, 𝐔_f is the fth frequency base, · is L2 norm, ϵ = 1 × 10^-8 is a small number to avoid division-by-zero, V̂∈ℝ^N × 3 and V_gt∈ℝ^N × 3 are the predicted and groundtruth vertex locations, respectively. During training, for every frequency component, our loss reduces the influence of the amplitude of each frequency component, so that information on different frequency components would have equivalent attention. In <ref>, we demonstrate the effectiveness of the frequency decomposition loss. Total loss function. We define the total loss function as: L = λ_JL_J + ∑_l=1^3[ λ_v^(l)L_v^(l) + λ_F^(l)L_F^(l)], where l is the resolution level. l=1 is the lowest-resolution level and l=3 is the highest resolution level. L_J^(l) is 3D joint location error, L_v^(l) is per vertex error, and L_F^(l) is the frequency decomposition loss. λ_J^(l), λ_v^(l), and λ_F^(l) are hyper-parameters. For simplicity, we refer L_J^(l), L_v^(l), and L_F^(l) as L_J, L_v, and L_F for the rest of the paper. Following previous work <cit.>, we define 3D joint location error and per vertex loss as: L_J = 1/N_J∑_j=1^N_JĴ_̂ĵ-J_gt,j, L_v = 1/N∑_i=1^Nv̂_i-v_gt,i, where Ĵ_j and J_gt,j are the output joint location and groundtruth joint location. N_J is the number of joints. v̂_i and v_gt,i are the estimated and groundtruth location of the ith vertex, and N is the number of vertices. § EXPERIMENTS §.§ Datasets Our task requires detailed hand meshes for supervision. Because of the difficulty of acquiring 3D scan data, this supervision is expensive and hard to obtain in a large scale. One alternative plan is to generate meshes from multiview RGB images using multiview stereo methods. Considering the easy access, we stick to this plan and use the generated mesh as groundtruth in our experiments. We do all our experiments on the InterHand2.6M dataset <cit.>, which is a dataset consisting of multiview images, rich poses, and human hand pose annotations. The dataset typically provides 40-100 views for every frame of a hand video. Such a large amount of multiview information would help with more accurate mesh annotation. Finally, we remesh the result hand mesh into the same topology with our 3-level hand mesh template, respectively, so that we can provide mesh supervision for all 3 levels of our network. We use the resulting mesh as groundtruth for training and testing. In this paper, we use the mesh results provided in <cit.>, which are generated using multiview methods of <cit.>, and only use a subset of InterHand2.6m, due to the large number of data in the original dataset. The remeshing method and more dataset details can be found in supplementary material Section 4. In <ref> (last column, “groundtruth"), we show a few examples of the generated groundtruth meshes. Although these meshes are not the exact same as real hands, it is vivid and provides rich and high-fidelity details of human hands. This 3D mesh annotation method is not only enough to support our solution and verify our methods, but is also budget-friendly. §.§ Implementation Details. We follow the network architecture in <cit.> to generate intermediate MANO results. We use EfficientNet <cit.> as a backbone. The low-level, mid-level, and high-level features are extracted after the 1st, 3rd, and 7th blocks of EfficientNet, respectively. 
For each image feature, we use 1 × 1 convolutions to deduce dimensions. The channel numbers of 1 × 1 convolution are 32, 32, and 64 from low-level to high-level, respectively. After that, we project the initial human hand vertices to the feature maps, and sample a feature vector for every vertex using bilinear interpolation. The GCN graph has 778, 3093, and 12337 vertices at each resolution level. In the training process, we first train <cit.> network, and then use the pretrained result to train our scalable network. For training <cit.>, we use their default hyper-parameters, set the learning rate to 1 × 10 ^-4, and set batch size to 48. When training GCN network, we set λ_J to be 1, set λ_v^(1) and λ_F^(1) to be 1 and 60, set λ_v^(2) and λ_F^(2) to be also 1 and 60, and set λ_v^(3) and λ_F^(3) to be 1 and 100. The learning rate is set to 5 × 10 ^-4 for GCN and 1e-4 for the rest of the network. The batch size is set to 28. The training process takes about 25 hours on 1 NVIDIA GTX3090Ti GPU for 150 epochs. In reference, we use a smooth kernel to post-process the mesh to reduce sharp changes. More details of post-processing will be found in Supplementary Materials Section 3. §.§ Quantitative Evaluation We use mean per joint position error (MPJPE) and Chamfer distance (CD) to evaluate the hand pose and coarse shape. Besides, to better evaluate personalized details, we also evaluate our mesh results using the proposed mean signal-to-noise ratio (MSNR) metric. Mean Signal-to-Noise Ratio (MSNR). Previous metrics for 3D hand mesh mostly calculate the Euclidean distance between the results and the groundtruth. Although in most cases, Euclidean distance can roughly indicate the accuracy of the reconstruction results, it is not consistent with human cognitive standards: it is more sensitive to low-frequency errors, but does not perform well in personalized detail distinction or detailed shape similarity description. Thus, we propose a metric that calculates the signal-to-noise ratio in every frequency base of the graph. We define our Mean Signal-to-Noise Ratio (MSNR) metric as MSNR =1/F∑_f=1^Flog(𝐔_f^⊤V̂/𝐔_f^⊤V̂ - 𝐔_f^⊤V_gt + ϵ), where F=N is the total number of frequency components and S_f is the signal-to-noise ratio of the fth frequency component. 𝐔_f, V̂, and V_gt have the same meaning as in <ref>. ϵ=1 × 10 ^-8 is a small number to avoid division-by-zero. Thus, the maximum of S_f is 8. By this design, the SNR of different frequency components would not influence each other, so we can better evaluate the high-frequency information compared to the conventional Euclidean Distance. We designed an experiment on InterHand2.6m to validate the effectiveness of our metric in evaluating high-frequency details. We add errors of 8 different frequency bands to the hand mesh. For each frequency band, the error amplitude is set under 10 different uniform distributions. As shown in <ref>, we measure the MPVE and MSNR for every noise distribution on every frequency band, to see how the measured results of the two metrics change with the noise amplitude in each frequency band. The result shows that in the low-frequency part, MPVE increases fast when the noise amplitude increases (the upper lines), but in high-frequency bands, the measured result changes very slowly when the noise amplitude increases. MSNR behaves completely differently from MPVE. It is more sensitive to noise in the high-frequency band than in the low-frequency band. 
Thus, compared to Euclidean distance, MSNR better measures the error in high-frequency details. <ref> shows a few examples of noisy meshes. Evaluation on InterHand2.6M dataset. We report mean per joint position error (MPJPE), Chamfer distance (CD), and mean signal-to-noise ratio (MSNR) to evaluate the overall accuracy of reconstructed hand meshes. <ref> shows the comparison among 3 levels of our proposed method and MANO. As shown in the table, the proposed method improves the accuracy of hand surface details by a large margin (as indicated by MSNR). We also observe that, while our method generates better shape details in a scalable manner, the joint locations and the overall shape of the output meshes also become slightly more accurate (as indicated by MPJPE and CD). Here, the MSNR of MANO, Ours-level 1, and Ours-level 2 are calculated after subdividing their meshes into the same resolution as Ours-level 3. §.§ Ablation Study We conduct several experiments to demonstrate the effectiveness of the feature skip connection design (in <ref>). and different loss functions. The results are shown in <ref>. From the result, we observe that our projection-to-feature-map skip connection design leads to performance improvement in all three metrics. For the loss functions, we observe MSNR degrades when the frequency decomposition loss is removed, indicating inferior mesh details. Removing the per-vertex error loss dramatically increases the Chamfer distance, indicating that the overall shape is not well constrained. The visualization results of the latter 2 experiments are shown in <ref>, if we do not use frequency decomposition loss, the mesh result we get tends to be smoother with less personalized details. If we do not use per-vertex error loss, the mesh's low-frequency information is not well-learned. The mesh we generate will have an overall shape deformation. Scalable design. We also demonstrate the scalable design of the proposed network by analyzing the resource needed at each resolution level (<ref>). In general, higher resolution levels require more computational resources in the network, and more resources to store and render the mesh. Still, our approach supports scalable reconstruction and can be applied to scenarios with limited computational resources. Here, “baseline" means only generating the MANO mesh in our network. Visualization Results. The qualitative reconstruction results are shown in <ref>. We observe that even when MANO is upsampled to 200k vertices, it still does not capture personalized details while our results provide better shape details. More qualitative results can be found in the Supplementary Material Section 5. § CONCLUSION We provided a solution to reconstruct high-fidelity hand mesh from monocular RGB inputs in a scalable manner. We represent the hand mesh as a graph and design a scalable frequency split network to generate hand mesh from different frequency bands. To train the network, we propose a frequency decomposition loss to supervise each frequency component. Finally, we introduce a new evaluation metric named Mean Signal-to-Noise Ratio (MSNR) to measure the signal-to-noise ratio of each mesh frequency component, which can better measure the details of 3D shapes. The evaluations on benchmark datasets validate the effectiveness of our proposed method and the evaluation metric in terms of recovering 3D hand shape details. § ACKNOWLEDGMENTS This work is supported in part by a gift grant from OPPO. 
ieee_fullname SAD § DETAILED NETWORK ARCHITECTURE We proposed a detailed network architecture of our approach in <ref>. The green boxes are the features, in which we note the feature dimensions. The blue boxes represent blocks of EfficientNet <cit.>. The red boxes represent GCN blocks. The GCN residual blocks in the network are designed following the manner of <cit.>. Details of the residual blocks are shown on the right of the figure. The gray boxes are the feature skip-connection part. To get multi-level image features from feature maps, we project the vertices into the feature maps, and use a bilinear interpolation technique to sample features. We will illustrate the process more in <ref>. The purple boxes are the sub-network used to generate MANO mesh. The orange boxes indicate the annotation we used. The green arrows are feature streams and the red lines are skip connections. We fetch skip-connected features from the output of EfficientNet Block 1, Block 3, and Block 7. The features are used as parts of the input of the GCN. The GCN has 3 levels. At each level, the input features go through a 10-layer GCN Residual Block, then output a feature vector and a 3D location at each vertex. The 3D locations are used as intermediate output and for supervision. The features are used as a part of the input for the next level. At the third level, we only output the 3D location of each vertex as the final mesh. § SKIP-CONNECTED FEATURE SAMPLING In <ref>, the features fetched from EfficientNet are feature maps. We want to transfer them into feature vectors and put them on the vertices without losing spatial information. Thus, we design a feature sampling strategy to put the local image feature on each graph vertex. As shown in <ref>, we use orthodox projection to find the feature vector for each vertex on the feature map. For every vertex P, we calculate the projection point P^' on the feature map. Then, we extract the feature vector x ∈𝐑^c using bilinear interpolation at point P^', where c is the feature map channel number. The total output feature dimension is N × c, where N is the number of graph vertices. § MESH POST-PROCESSING We do a post-process on the third-level mesh. Due to the flaws of groundtruth mesh (shown in <ref>), some of our output mesh also have similar structure flaws. To tackle this problem, we designed a smooth mask to reduce the flaws. <ref> shows the output of the network, our smooth mask, and our final mesh result. As we can see, the flaws are highly reduced. Note that, this flaw is caused by the noisy groundtruth, so it can also be reduced by a better remeshing of the training data in the future. § REMESHING PROCEDURE We try to use the multiview stereo (MVS) generated mesh provided in <cit.>. However, the MVS mesh has about 500k vertices on each mesh. The large vertex number mesh with high redundancy makes our training process much slower. Moreover, without a fixed topology, the choices of shape supervision are limited. For example, we would not be able to use the per vertex loss and frequency decomposition loss for training. Thus, we designed a remeshing technic to transfer the mesh generated in the multiview stereo (MVS) method into a unified topology. The algorithm is shown in <ref>a. First, we align the MVS mesh with a parametric template mesh. Here, we use template meshes designed in the main paper Section 3.2. 
Second, we use an optimization approach to calculate a set of pose and shape parameters, so that the template mesh becomes a coarse approximation of the MVS mesh. Finally, we use the closet point on the MVS mesh as a substitute for each vertex on the parametric mesh. This procedure would preserve the detailed shape and the topology of the parametric template at the same time. In our experiments, we generate 3 resolution levels of groundtruth mesh for supervision, and use the third level for testing. However, despite the good attributes of the groundtruth meshes, some of them still have flaws. <ref>b shows an example of the mesh flaws inside the mesh (red rectangle). It happens because some of the vertices on the parametric mesh find the wrong corresponding vertices on the MVS mesh. These groundtruth mesh flaws will eventually cause defects in generated mesh (shown in <ref>). We have largely reduced the flaws of our mesh using the mesh post-processing method mentioned in <ref>. § MORE VISUALIZATION RESULTS We show more visualization results of our proposed method in <ref>. § FAILURE CASES We show in <ref> a few failure cases where our method generates hand meshes with flaws. Most of these flaws are caused by groundtruth flaws in remeshing (shown in <ref>b). § FUTURE WORKS AND DISCUSSIONS In future works, our backbone can be replaced with more recent work such as those in <cit.>. The object detection and segmentation-related network can be helpful for hand-related tasks. We would also improve the remeshing procedure to reduce the artifacts. Besides, we would also improve our method to tackle the in-the-wild hand reconstruction problem. Moreover, the frequency decomposition approach can be easily expanded to improve the details of human body reconstruction works such as <cit.>.
http://arxiv.org/abs/2307.04085v1
20230709023446
Vector Commitments with Efficient Updates
[ "Ertem Nusret Tas", "Dan Boneh" ]
cs.CR
[ "cs.CR" ]
Vector Commitments with Efficient Updates Ertem Nusret Tas Dan Boneh ==================================================================================== Dynamic vector commitments that enable local updates of opening proofs have applications ranging from verifiable databases with membership changes to stateless clients on blockchains. In these applications, each user maintains a relevant subset of the committed messages and the corresponding opening proofs with the goal of ensuring a succinct global state. When the messages are updated, users are given some global update information and update their opening proofs to match the new vector commitment. We investigate the relation between the size of the update information and the runtime complexity needed to update an individual opening proof. Existing vector commitment schemes require that either the information size or the runtime scale linearly in the number k of updated state elements. We construct a vector commitment scheme that asymptotically achieves both length and runtime that is sublinear in k, namely k^ν and k^1-ν for any ν∈ (0,1). We prove an information-theoretic lower bound on the relation between the update information size and runtime complexity that shows the asymptotic optimality of our scheme. While the construction is not yet competitive with Verkle commitments in practice, our approach may point the way towards more performant vector commitments. § INTRODUCTION A Vector Commitment (VC) scheme <cit.> enables a committer to succinctly commit to a vector of elements. Later, the committer can generate an opening proof to prove that a particular position in the committed vector is equal to a certain value. VCs have found many applications in databases and blockchains <cit.> as they enable a storage system to only store a commitment to the vector instead of the entire vector. The data itself can be stored elsewhere along with opening proofs. In a multiuser system, every user might store only one position of the vector along with the opening proof for that position. Dynamic VCs <cit.> are vector commitments that support updates to the vector. Suppose the committed vector is of length N and some k < N positions in the vector are updated, so that a new vector commitment is published. Then, every user in the system will need to update their local opening proof to match the updated commitment, and this is done with the help of some global update information U that is broadcast to all users. This information is typically generated and published by a manager who maintains the entire vector. Applications of dynamic VCs include verifiable databases, zero-knowledge sets with frequent updates <cit.> and stateless clients for blockchains <cit.>. The challenge is to design a VC scheme that minimizes the size of the update information U as well as the computational work by each user to update their local opening proof. For example, consider stateless clients on a blockchain as an important application for dynamic VCs. The state of the chain can be represented as a vector of length N, where position i corresponds to the state of account number i. Every user will locally maintain its own state (corresponding to some position in the vector) along with an opening proof that enables the user to convince a third party as to its current state. Whenever a new block is published, the state of the chain changes.
In particular, suppose k out of the N positions in the vector need to be updated. The block proposer will publish the update information U along with the new block, and every user will update their opening proof to match the new committed state of the chain. Thus, users can ensure that their opening proofs are up to date with respect to the latest committed state of the chain. We stress that in this application, the data being updated, namely the updated positions and diffs, is published as part of the block. The update information U only contains additional information that is needed to update the opening proofs. When we refer to the size of U, we refer to its size, excluding the updated data (i.e., excluding the updated positions and diffs). In this paper, we investigate the trade-off between the length |U| of the update information and the time complexity of proof updates. Dynamic VCs can be grouped into two categories in terms of these parameters (Table <ref>). Tree-based VCs <cit.> enable users to update their proofs in time O(N). Each opening proof typically consists of (N) inner nodes, and the update information U contains the changes in the inner nodes affected by the message updates. Each user calculates its new opening proof by downloading the relevant inner nodes published as part of U. When k positions are updated, a total of O(k log(N)) inner nodes in the tree are affected in the worst case. Thus, when each inner node has length Θ(λ), proportional to the security parameter λ, the update information consists of O(k log(N)λ) bits. In contrast, algebraic VCs <cit.> enable users to update their opening proofs with only knowledge of the updated data. They do not require any additional update information U to be published beyond the indices and the `diffs' of the updated data. Thus, the length of the update information needed to update the opening proofs is O(1). However, algebraic VCs typically require each user to read all of the changed messages and incorporate the effect of these changes on their proofs, resulting in Θ(k) work per proof update. To summarize, while tree-based VCs support efficient calculation of the new opening proofs by publishing a large amount of update information, linear in k, algebraic VCs do not require any additional update information beyond the updated data, but suffer from a large runtime for proof updates, linear in k. We formalize the dichotomy of VCs in Section <ref>. §.§ Our Results We propose a family of VCs that can support sublinear update, where both the length |U| of the update information and the complexity of proof updates are sublinear in k. More specifically, our VCs can attain |U| = Θ(k^νλ), ν∈ (0,1), with a proof update complexity of Θ(k^1-ν) operations. Our candidate construction with sublinear update is a homomorphic Merkle tree, first developed by <cit.>, where each inner node can be expressed as a sum of the partial digests of the messages underneath (Section <ref>). The algebraic structure of these trees enable each user to calculate the effect of a message update on any inner node without reading other inner nodes or messages. We identify homomorphic Merkle tree constructions based on lattices, from the literature <cit.>. In Section <ref>, we provide the update algorithms (Alg. <ref>) for homomorphic Merkle trees, parameterized by ν∈ (0,1). Our algorithm identifies a special subset of size Θ(k^ν) of the inner nodes affected by the message updates, and publish their new values as U; so that the users need not calculate these values. 
These inner nodes are selected carefully to ensure that any inner node outside of U is affected by at most Θ(k^1-ν) updated messages. Thus, to modify its opening proof, each user has to calculate the partial digests of at most Θ(k^1-ν) updated messages per inner node within its proof (which consists of Θ(log(N)) inner nodes). Moreover, to calculate these partial digests, the user only needs the `diffs' of the updated messages. This brings the asymptotic complexity of proof updates to Θ(k^1-ν) operations, while achieving an update information size of Θ(k^νλ), as opposed to the Θ(kλ) of Merkle trees using SHA256. In Section <ref>, we prove an information-theoretic lower bound on the size of the update information given an upper bound on the runtime complexity of proof updates. The bound implies the asymptotic optimality of our scheme with sublinear update. Its proof is based on the observation that if the runtime complexity is bounded by O(k^1-ν), a user that wants to update its proof cannot read beyond O(k^1-ν) updated messages. Then, to calculate the effect of the remaining k-O(k^1-ν) messages on its opening proof, the user has to download parts of the structured update information U. Finally, to obtain the lower bound on |U|, we use Shannon entropy and lower bound the number of bits, namely Ω(k^νλ), required to capture the total information that must be downloaded by the users, while maintaining the security of the VC with parameter λ. §.§ Applications We identify two main applications for VCs with sublinear update. §.§.§ Stateless clients for PoS Ethereum Ethereum is the largest decentralized general-purpose computation platform by market cap. Ethereum state (e.g., user accounts) is currently stored in the form of a Merkle tree <cit.> and grows by approximately half every year <cit.>. Stateless clients <cit.> were proposed to mitigate the problem of state bloat and prevent the state storage and maintenance from becoming a bottleneck for decentralization. Stateless clients maintain an opening proof to their account balances within the Ethereum state, and can thus effortlessly prove the inclusion of their accounts within the latest state. This enables other Ethereum clients to verify the transactions that come with opening proofs without having to download the full state and check the validity of the claimed account balances. Since block verification now requires downloading the proofs for the relevant state elements, Verkle trees <cit.> were proposed as a replacement for Merkle trees due to their short proof size. Each new Ethereum block contains transactions that update the state elements and their opening proofs. Archival nodes and block producers still maintain the full state so that they can inform the stateless clients about their new opening proofs. For this purpose, block producers must broadcast enough information to the clients over the peer-to-peer gossip network of Ethereum. Just as minimizing the proof size was paramount to decentralizing block verification, minimizing the update information size becomes necessary for decentralizing the role of the block producer who has to disseminate this information. However, reducing the length of the update information must not compromise the low overhead of stateless clients by requiring a larger number of operations per proof update. Therefore, the ideal VC scheme for stateless clients must strike a delicate balance between the size of the update information and the runtime complexity of proof updates.
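To make this workflow concrete, the following Python sketch (illustrative only: the class and method names are ours and do not correspond to any Ethereum client API) shows the bookkeeping a stateless client performs. The client stores only its own account data and opening proof, proves inclusion against the latest commitment, and refreshes its proof from the block's diffs and the broadcast update information U; the underlying dynamic VC scheme is assumed to expose verify and proof_update operations matching the interface formalized in the preliminaries.

from dataclasses import dataclass

@dataclass
class StatelessClient:
    vc: object        # dynamic VC scheme (assumed to expose verify / proof_update)
    index: int        # position of this client's account in the state vector
    account: bytes    # this client's account data (e.g., its balance)
    proof: object     # opening proof for `account` at `index`

    def prove_inclusion(self, commitment) -> bool:
        # Anyone can check the client's claimed account against the latest commitment.
        return self.vc.verify(commitment, self.account, self.index, self.proof)

    def apply_block(self, new_commitment, diffs, update_info):
        # `diffs` maps updated indices to new messages and is published in the block;
        # `update_info` is the structured update information U broadcast alongside it.
        if self.index in diffs:
            self.account = diffs[self.index]
        self.proof = self.vc.proof_update(self.proof, self.index, diffs, update_info)
        # The client never stores the full state; only (account, proof) is kept.
        assert self.prove_inclusion(new_commitment)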
In Section <ref>, we provide the update algorithms (Algs. <ref> and <ref>) for Verkle trees. We observe that Verkle trees do not support sublinear update, and fall under the same category as tree-based VCs with update information length Θ(k λ). Despite this fact, Verkle trees are highly practical in terms of updates. In Section <ref>, we estimate that the update information size after a typical Ethereum block does not exceed |U| ≈ 100 kBytes (compared to the typical block size of <125 kBytes). Moreover, each Verkle proof can be updated within approximately less than a second on commodity hardware. In contrast, even the most efficient homomorphic Merkle tree construction <cit.> requires an update information size of 110.88 MBytes and an update time of 32.6 seconds when the trade-off parameter ν is 1/2, despite its asymptotic optimality (Section <ref>). The large update information size is due to the lattice-based construction of these VCs. Designing dynamic VCs that are both asymptotically optimal and practically efficient remains an open problem. §.§.§ Databases with frequent membership changes VCs with sublinear update can support databases with frequent membership changes. When a user first registers, a message is updated to record the membership of the user. The user receives this record and its opening proof, using which it can later anonymously prove its membership. When the user leaves the system, the message is once again updated to delete the record. In all these steps, membership changes result in updates to the opening proofs of other members. When these changes are frequent, it becomes infeasible to distribute new proofs after each change. VCs with sublinear update offer an alternative and efficient way to update the opening proofs of the users in the event of such changes. §.§ Related Work There are many VC constructions, each with different guarantees regarding the proof, commitment and public parameter sizes, verification time, updatability and support for subvector openings <cit.> (cf <cit.> for an SoK of VCs). First formalized by <cit.>, almost all VCs allow some degree of updatability. Whereas <cit.> enable updating the commitment and the opening proofs with only the knowledge of the old and the new messages, most VCs require some structured update information beyond the messages when the users do not have access to the internal data structures. Among the lattice-based accumulators, vector commitments and functional commitments <cit.>, constructions amenable to sublinear update are presented in <cit.>. Homomorphic Merkle trees were formalized and instantiated by <cit.> in the context of streaming authenticated data structures and parallel online memory checking. The construction presented in <cit.> offers an alternative VC with sublinear update as it is not a Merkle tree, yet has the property that each inner node can be expressed as a sum of the partial digests of individual messages. For dynamic accumulators that support additions, deletions and membership proofs, Camacho and Hevia proved that after k messages are deleted, Ω(k) bits of data must be published to update the proofs of the messages in the initial accumulated set <cit.>. Their lower bound is information-theoretic and follows from a compression argument (Appendix <ref>). Christ and Bonneau subsequently used a similar method to prove a lower bound on the global state size of a revocable proof system abstraction <cit.>. 
As revocable proof systems can be implemented by dynamic accumulators and vector commitments, their lower bound generalizes to these primitives, i.e., after k messages are updated in a dynamic VC, at least Ω(k) bits of data must be published to update the opening proofs (see Appendix <ref> for the proof). They conclude that a stateless commitment scheme must either have a global state with linear size in the number of accounts, or require a near-linear rate of local proof updates. In our work, we already assume a linear rate of local proof updates, i.e., after every Ethereum block or k messages in our parameterization, and that the message updates are publicized by the blockchain. We instead focus on the trade-off between the global structured update information size (beyond the published messages) and the runtime complexity of proof updates. § PRELIMINARIES §.§ Notation We denote the security parameter by λ. An event is said to happen with negligible probability if its probability, as a function of λ, is o(1/λ^d) for all d>0. An event happens with overwhelming probability if it happens except with negligible probability. We denote the set {0,1,2,…,N-1} by [N]. When y = O(h(x) polylog(x)), we use the shorthand y=O(h(x)), suppressing polylogarithmic factors (and similarly for Θ(.) and Ω(.)). The function H(.): ℳ→{0,1}^λ represents a collision-resistant hash function. We denote the binary decomposition of an integer x by bin(x), and for c>2, its base-c decomposition by bin_c(x). A vector of N elements (n_0, …, n_N-1) is shown as (n_i)_i. The notation 𝐱[i:j] denotes the substring starting at the i^th index and ending at the j^th index within the sequence 𝐱. In the subsequent sections, k will be used to denote the number of updated messages. For a prime p, let 𝔽_p denote a finite field of size p. We use 𝔾 to denote a cyclic group of prime order p with generator g. The Lagrange basis polynomial for a given x ∈𝔽_p is denoted by L_x(X): L_x(X) = ∏_i ∈𝔽_p, i ≠ x (X-i)/(x-i). We will use |G| and |H| to denote the maximum size of the bit representation of a single group element and a single hash value respectively. We will use T_G and T_f to denote the time complexity of a single group operation and of a single function evaluation for the hash functions in Section <ref>. §.§ Vector Commitments A vector commitment (VC) represents a sequence of messages such that each message can be proven to be the one at its index via an opening proof. A dynamic vector commitment allows updating the commitment and the opening proofs with the help of update information when the committed messages are changed. Dynamic (updateable) vector commitments can be described by the following algorithms: KeyGen(1^λ, N) → pp Given the security parameter λ and the size N=poly(λ) of the committed vector, the key generation algorithm outputs public parameters pp, which implicitly define the message space ℳ. Commit_pp(m_0, …, m_N-1) → (C, aux) Given a sequence of N messages in ℳ and the public parameters pp, the commitment algorithm outputs a commitment string C and the auxiliary data aux required to produce the opening proofs for the messages. Here, aux contains enough information about the current state of the VC's data structure (e.g., the current list of committed messages) to help generate the opening proofs. Open_pp(m, i, aux) →π_i The opening algorithm is run by the committer to produce a proof π_i that m is the i^th committed message. Verify_pp(C, m, i, π_i) →{0,1} The verification algorithm accepts (i.e., outputs 1) or rejects a proof.
The security definition will require that π_i is accepted only if C is a commitment to some (m_0, …, m_N-1) such that m = m_i. Update_pp(C, (i, m_i)_i ∈ [N], (i, m'_i)_i ∈ [N], ) → (C', U, ') The algorithm is run by the committer to update the commitment C when the messages (m_i_j)_j ∈ [k] at indices (i_j)_j ∈ [k] are changed to (m'_i_j)_j ∈ [k]. The other messages in the vector are unchanged. It takes as input the old and the new messages, their indices and the data variable . It outputs a new commitment C', update information U and the new data variable '. ProofUpdate_pp(C, p((i, m_i)_i ∈ [N], (i, m'_i)_i ∈ [N]), π_j , m', i, U) →π_j' The proof update algorithm can be run by any user who holds a proof π_j for some message at index j and a (possibly) new message m' at that index. It allows the user to compute an updated proof π'_j (and the updated commitment C') such that π'_j is valid with respect to C', which contains m'_i, i ∈ N, as the new messages at the indices i ∈ N (and m' as the new message at index i). Here, p(.) specifies what portion of the old and the new messages is sufficient to update the opening proof. For instance, the proof update algorithm often does not need the old and the new messages in the open; but can carry out the proof update using only their differences. In this case, p((i, m_i)_i ∈ [N], (i, m'_i)_i ∈ [N]) = (i, m'_i-m_i)_i ∈ N. Correctness of a VC requires that ∀ N = (λ), for all honestly generated parameters pp KeyGen(1^λ, N), given a commitment C to a vector of messages (m_0, …, m_N-1) ∈ℳ^N, generated by Commit_pp (and possibly followed by a sequence of updates), and an opening proof π_i for a message at index i, generated by Open_pp or ProofUpdate_pp, it holds that Verify_pp(C, m_i, i, π_i)=1 with overwhelming probability. Security of a VC is expressed by the position-binding property: A VC satisfies position-binding if ∀ i ∈ [N] and for every PPT adversary 𝒜, the following probability is negligible in λ: C [Verify_pp(C, m, i, π_i) = 1 Verify_pp(C, m', i, π'_i) = 1 m ≠m' pp KeyGen(1^λ, N) (C, m, m', π_i, π'_i) 𝒜(pp)] We relax the succinctness assumption of <cit.> and denote a value to be succinct in x if it is (x). §.§ KZG Polynomial Commitments The KZG commitment scheme <cit.> commits to polynomials of degree bounded by ℓ using the following algorithms: KeyGen(1^λ, ℓ) → pp outputs pp = (g, g^τ, g^(τ^2), …, g^(τ^ℓ)) as the public parameters, where g is the generator of the cyclic group 𝔾 and τ is a trapdoor (pp[i] = g^τ^i). Commit(pp, ϕ(X)) → (C, ) The commitment to a polynomial ϕ(X) = ∑_i=0^ℓ-1 a_i X^i is denoted by [ϕ(X)], and is computed as [ϕ(X)] = ∏_i=0^ℓ (pp[i])^a_i. The commitment algorithm outputs C = [ϕ(X)] and = ϕ(X). Open_pp(m, i, ) →π: outputs the opening proof π_i that ϕ(i) = m, calculated as the commitment to the quotient polynomial (ϕ(X)-ϕ(i)) / (X-i). Verify(C, m, i, π) accepts if the pairing check e(C/g^m, g) = e(π, pp[1]/g^i ) holds. We refer to <cit.> for the security analysis of this scheme. §.§ Merkle Trees Merkle Tree is a vector commitment using a collision-resistant hash function. In a Merkle tree, hashes of the committed messages constitute the leaves of a c-ary tree of height h = log_c(N), where each inner node is found by hashing its children. The depth of the root is set to be 0 and the depth of the leaves is ⌈log_c(N) ⌉. The commitment function outputs the Merkle root as the commitment C and the Merkle tree as . 
The opening proof for a message m_x at some index x is the sequence of h(c-1) hashes consisting of the siblings of the inner nodes on the path from the root to the hash of the message m_x. We hereafter consider binary Merkle trees (c=2) and assume N=c^h = 2^h unless stated otherwise. Let u_b_0, b_1,…,b_i-1, b_j ∈{0,1}, j ∈ [i], denote an inner node at depth i-1 that is reached from the root by choosing the left child at depth j if b_j=0 and the right child at depth j if b_j=1 (b_0= and u_ is the root). By definition, for a message m_x at index x, H(m_x) = u_,(x). §.§ Verkle Trees A Verkle tree <cit.> is similar to a Merkle tree except that each inner node is calculated as the hash of the KZG polynomial commitment to its children. Let b_j ∈ [c], j=1, …, h, denote the indices of the inner nodes on the path from the root to a leaf at index x, _c(x) = (b_1, …, b_h), relative to their siblings. Define f_b_0,…,b_j, j ∈ [h], as the polynomials determined by the children of the inner nodes on the path from the root to the leaf, where f_b_0=f_ is the polynomial determined by the children of the root. Let C_b_0,…,b_j = [f_b_0,…,b_j], j ∈ [h], denote the KZG commitments to these polynomials. By definition, u_b_0,…,b_j = H(C_b_0,…,b_j), and the value of the polynomial f_b_0,…,b_j at index b_j+1 is u_b_0,…,b_j+1 for each j ∈ [h]. Here, u_b_0 = H(C_b_0) is the root of the tree, and u_b_0,…,b_h equals the hash H(m_x) of the message at index x. For consistency, we define C_b_0,…,b_h as m_x. For example, given h = 3 and c = 4, the inner nodes from the root to the message m_14 have the indices b_0 = 0, b_1 = 3 and b_2 = 2, and they are committed by the polynomials f_, f_,0 and f_,0,3 respectively. The commitment function Commit_pp(m_0, …, m_N-1) outputs the root u_b_0 as the commitment C and the Verkle tree itself as . The Verkle opening proof for the message m_x, (x) = (b_1, …, b_h), consists of two parts: (i) the KZG commitments (C_b_0,b_1, …, C_b_0,…, b_h-1) on the path from the root to the message, and (ii) a Verkle multiproof. The goal of the Verkle multiproof is to show that the following evaluations hold for the inner nodes from the root to the message: f_b_0,…,b_j(b_j+1)=u_b_0,…,b_j+1 = H(C_b_0,…,b_j+1), j ∈ [h]. It has two components: (i) the commitment [g(X)] and (ii) the opening proof π' for the polynomial h(X)-g(X) at the point t=H(r,[g(X)]), where C g(X)=∑_j=0^h-1 r^j f_b_0,…,b_j(X)-u_b_0,…,b_j+1/X-b_j+1, h(X)=∑_j=0^h-1 r^j f_b_0,…,b_j(X)/t-b_j+1, and r=H(C_b_0,..,C_b_0,…,b_h-1,u_b_0,b_1,..,u_b_0,…,b_h,b_1,..,b_h). Thus, Open_pp(m, i, ) outputs ((C_b_0,b_1, …, C_b_0,…,b_h-1), ([g(X)], π')). To verify a Verkle proof π = ((C_b_0,b_1, …, C_b_0,…,b_h), (D,π')), the algorithm Verify_pp(C, m, x, π) first computes r and t using u_b_0,…,b_j = H(C_b_0,…,b_j), j ∈ [h], and u_b_0,…,b_h = H(m). Then, given the indices (x) = (b_1, …, b_h) and the commitments (C_b_0,b_1, …, C_b_0,…,b_h), it calculates C y = ∑_j=0^h-1 r^j C_b_0,…,b_j/t-b_j+1 E = ∑_j=0^h-1 r^j/t-b_j+1 C_b_0,…,b_j. Finally, it returns true if the pairing check e(E-D-[g(X)],[1]) = e(π', [X-t]) is satisfied. As the degree c of a Verkle tree increases, size of the opening proofs and the runtime of the verification function decreases in proportion to the height h = log_cN of the tree. This enables Verkle trees to achieve a short opeining proof size for large number of messages (as in the case of the Ethereum state trie) by adopting a large degree (, c=256). 
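To ground the two tree constructions just described, the following minimal Python sketch implements a binary Merkle tree with SHA256. It is a toy (no domain separation, N restricted to a power of two); a Verkle tree differs only in that each inner node is the hash of a KZG commitment to its c children rather than the hash of its two children directly.

import hashlib

def _h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def commit(messages):
    """Return (root, tree) for a power-of-two list of byte-string messages."""
    level = [_h(m) for m in messages]
    tree = [level]
    while len(level) > 1:
        level = [_h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return level[0], tree                      # tree[-1][0] is the root

def open_proof(tree, x):
    """Opening proof: the sibling of each node on the path from leaf x to the root."""
    proof, idx = [], x
    for level in tree[:-1]:
        proof.append(level[idx ^ 1])           # sibling at the current level
        idx //= 2
    return proof

def verify(root, message, x, proof):
    node, idx = _h(message), x
    for sibling in proof:
        node = _h(node, sibling) if idx % 2 == 0 else _h(sibling, node)
        idx //= 2
    return node == root

# Example: commit to 8 messages and verify the proof for index 5.
msgs = [bytes([i]) * 4 for i in range(8)]
root, tree = commit(msgs)
assert verify(root, msgs[5], 5, open_proof(tree, 5))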
In comparison, each Merkle proof consists of (c-1) log_cN inner nodes, which grows linearly as c increases. § FORMALIZING THE DICHOTOMY OF VCS We first analyze the trade-off between the number of operations required by proof updates and the size of the update information U by inspecting different types of dynamic VCs. Recall that the number of updated messages is k ≤ N. §.§ Updating KZG Commitments and Opening Proofs In the subsequent sections, we assume that each user has access to a dictionary of KZG commitments to the Lagrange basis polynomials L_i(X), i ∈𝔽_p, and for each polynomial, its opening proofs at each point j ∈𝔽_p, j < N. With the help of this table, one can instantiate a KZG based VC to the messages (m_i)_i ∈ [N], by treating them as the values of the degree N polynomial ϕ(X) at inputs i ∈𝔽_p, i<N. We next analyze the complexity of the update information and the proof updates in this VC. The update and proof update algorithms are described by Alg. <ref> in Appendix <ref>. §.§.§ Update Information Suppose the vector (i, m_i)_i ∈ [N] is updated at some index i such that m'_i m_i + δ for some δ∈𝔽_p. Then, the polynomial ϕ(X) representing the vector is replaced by ϕ'(X) such that ϕ'(X) = ϕ(X) if X ≠ i, and ϕ'(i) = ϕ(i) + δ at X = i. Thus, the new KZG commitment C' to ϕ'(X) is constructed from the commitment C to ϕ(X) as follows: rCl C' = [ϕ'(X)] = [ϕ(X)+δL_i(X)] = [ϕ(X)][L_i(X)]^δ = C ·[L_i(X)]^δ = C ·[L_i(X)]^m'_i-m_i. If the vector is modified at k different indices i_1,...,i_k from message m_i_j to m'_i_j, j ∈ [k], then the new commitment C' = [ϕ'(X)] becomes rCl [ϕ(X)+∑_j=1^k (m'_i_j-m_i_j) L_x_i_j(X)] = [ϕ(X)] ∏_j=1^k[L_i_j(X)]^(m'_i_j-m_i_j) = C ∏_j=1^k[L_i_j(X)]^(m'_i_j-m_i_j). Thus, the commitment can updated given only the old and the new messages at the updated indices, besides the table. §.§.§ Proof Update Let π_x denote the opening proof of a polynomial ϕ(X) at a point (x,m_x). When k messages are updated, the new opening proof π'_x can be found as a function of the old proof π_x and the opening proofs π_i_j,x of the Lagrange basis polynomials L_i_j(X), j ∈ [k], at the index x (m'_x = m_x+∑_j=1^k (m'_i_j-m_i_j) · 1_x=i_j is the new value of m_x after the k updates): rCl π'_x = [ϕ'(X)-m_x-∑_j=1^k δ_j ·1_x=i_j/X-x] = π_x ∏_j=1^k [L_i_j(X)-L_i_j(x)/X-x]^m'_i_j-m_i_j = π_x ∏_j=1^k π^m'_i_j-m_i_j_i_j,x Thus, the proof can updated given only the old and the new messages at the updated indices, besides the table. The update information is set to be the empty set, , U = ∅. §.§.§ Complexity The size of the update information is constant, , Θ(1). Each user can update its proof after k accesses to the dictionary, and in the worst case, Θ(k log|ℳ|) = Θ(k) group operations as log(m'_i-m_i)≤log|ℳ| for all i ∈ [N]. §.§ Updating Merkle Trees and Opening Proofs We next consider a Merkle tree and analyze the complexity of the update information size and the runtime for proof updates. A simple update scheme would be recalculating the new Merkle tree given all of the old messages or the old inner nodes of the Merkle tree, and the message updates. However, this implies a large complexity for the runtime of the proof update algorithm that scales as Ω(k) when users keep track of the inner nodes, and as Ω(N) when the users recalculate the tree from scratch at each batch of updates. Moreover, in many applications, the users do not have access to any messages or inner nodes besides those that are part of the Merkle proof held by the user. 
Hence, in the following sections, we describe update and proof update algorithms that reduce the runtime complexity of the proof updates at the expanse of larger update information (Alg. <ref> in Appendix <ref>). §.§.§ Update Information Suppose the vector (i, m_i)_i ∈ [N] is updated at some index x, (b_1,…,b_h) = (x), to m'_x. Then, the root C=u_b_0 and the inner nodes (u_b_0,b_1, …, u_b_0,b_1,…,b_h), (b_1,…,b_h) = (i), must be updated to reflect the change at that index. Given the old inner nodes, the new values for the root and these inner nodes, denoted by C'=u'_b_0 and (u'_b_0,b_1, …, u'_b_0,b_1,…,b_h), are calculated recursively as follows: rCl u'_b_0,b_1,…,b_h H(m'_x), u'_b_0,b_1,…,b_j ^ H(u'_b_0,b_1,…,b_j,0, u_b_0,b_1,…,b_j,1) if b_j+1 = 0, j<h H(u_b_0,b_1,…,b_j,0, u'_b_0,b_1,…,b_j,1) if b_j+1 = 1, j<h When the messages are modified at k different points i_j, j ∈ [k], the calculation above is repeated k times for each update. As the updated inner nodes are parts of the Merkle proofs, the update information consists of the new values at the inner nodes listed from the smallest to the largest depth in the canonical left to right order. For instance, U = ((, u'_), (0, u'_0), (1, u'_1), (00, u'_00), (10, u'_10), …) implies that the root u_ and the inner nodes u_0, u_1, u_00 and u_10 were updated after k messages were modified at the leaves of the Merkle tree. We reference the updated inner nodes using their indices (, U[b_0, b_1 … b_j] = v, when (b_1 … b_j, v) ∈ U). §.§.§ Proof Update The Merkle proof π_x for a message at index x, (b_1, …, b_h) = (x), is the sequence (u_b_1, u_b_1,b_2, …, u_b_1,b_2,…,b_h). When k messages are updated, some of the inner nodes within the proof might have changed. A user holding the Merkle proof for index x can find the new values of these inner nodes by querying the update information with their indices. §.§.§ Complexity Upon receiving the update information U, each user can update its proof in Θ(log^2(N)+|H| log(N)) = Θ(1) time by running a binary search algorithm to find the updated inner nodes within U that are part of its Merkle proof, and reading the new values at these nodes. Since modifying each new message results in h = log(N) updates at the inner nodes and some of the updates overlap, |U| = Θ(k log(N/k) (log(N)+|H|)) = Θ(k)|H|, as each updated inner node is represented by its index of size Θ(log(N)) and its new value of size |H| in U. §.§ Dichotomy of VCs In the case of KZG commitments, |U| = Θ(1), and there is no information overhead on top of the message updates. For Merkle trees with an efficient proof update algorithm, |U| = Θ(k)|H|, thus there is an extra term scaling in Θ(k)|H| = Θ(k)λ, since |H| = Ω(λ) for collision-resistant hash functions. In contrast, for KZG commitments, each user has to do Θ(k) group operations to update its opening proof; whereas in Merkle trees, each user can update its proof in Θ(1) time, which does not depend on k. Hence, KZG commitments outperform Merkle trees in terms of the update information size, whereas Merkle trees outperform KZG commitments in terms of the time complexity of proof updates. Table <ref> generalizes this observation to a dichotomy between algebraic VC schemes and tree-based ones favoring shorter runtimes for proof updates. The algebraic and tree-based ones outperform each other in terms of the update information size and runtime complexity respectively. 
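To illustrate the tree-based side of this dichotomy, the following Python sketch implements the Merkle update scheme described above, reusing the list-of-levels tree layout of the toy Merkle tree from the preliminaries. The committer republishes every inner node touched by the k updates, keyed by level and position, and a proof holder patches its proof by lookup alone, without any hashing; this is a sketch in the spirit of the algorithm above, not the paper's referenced pseudocode.

import hashlib

def _h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def update(tree, diffs):
    """Committer side: apply {index: new_message} to the stored tree and return
    (new_root, U), where U maps (level, position) to the new node value."""
    U = {}
    for x, new_msg in diffs.items():
        tree[0][x] = _h(new_msg)
        U[(0, x)] = tree[0][x]       # leaf-level entries could instead be recomputed
        idx = x                      # by users from the diffs published in the block
        for lvl in range(1, len(tree)):
            idx //= 2
            tree[lvl][idx] = _h(tree[lvl - 1][2 * idx], tree[lvl - 1][2 * idx + 1])
            U[(lvl, idx)] = tree[lvl][idx]
    return tree[-1][0], U

def proof_update(proof, x, U):
    """User side: patch the proof for leaf x by looking up republished nodes.
    No hashing and no access to other messages is needed."""
    new_proof, idx = list(proof), x
    for lvl in range(len(new_proof)):
        if (lvl, idx ^ 1) in U:                # the sibling at this level changed
            new_proof[lvl] = U[(lvl, idx ^ 1)]
        idx //= 2
    return new_proof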
§ VECTOR COMMITMENTS WITH SUBLINEAR UPDATE We would like to resolve the separation in Table <ref> and obtain a vector commitment, where both the size of the update information and the complexity of proof updates have a sublinear dependence on k. In particular, |U| = Θ(g_1(k)λ) in the worst case, and the proof update algorithm requires at most Θ(g_2(k)) operations, where both g_1(k) and g_2(k) are o(k). We say that such a VC supports sublinear update. In this section, we describe a family of VCs with sublinear update, parameterized by the values ν∈ (0,1) and characterized by the functions (g_1,g_2) = (k^ν, k^1-ν). §.§ Homomorphic Merkle Trees We first introduce homomorphic Merkle trees where messages placed in the leaves take values in a set ℳ. We will use two collision-resistant hash functions f̃𝒟×𝒟→ℛ and f ℳ→ℛ, where both ℳ and 𝒟 are vector spaces over some field 𝔽, and ℛ is an arbitrary finite set. We will also need an injective mapping g: ℛ→𝒟, which need not be efficiently computable. We use g^-1: 𝒟→ℛ to denote the inverse of g, meaning that g^-1(g(x)) = x for all x ∈ℛ. We require that g^-1 be efficiently computable. Now, for j ∈ [h], where h is the height of the tree, every node u_b_0,…,b_j∈𝒟 of the homomorphic Merkle tree is characterized by the following expressions: llCl a leaf node: g^-1(u_b_0,(i)) = f(m_i) an internal node: g^-1(u_b_0,…,b_j) = f̃(u_b_0,…,b_j,0, u_b_0,…,b_j,1) for j < h The homomorphic property of the Merkle tree refers to the fact that there are efficiently computable functions h_i,j: 𝒟→𝒟 for i ∈ [N] and j ∈ [h], such that every inner node u_b_0,…,b_j∈𝒟 can be expressed as rCl u_b_0 = ∑_i ∈[N] h_i,0(m_i) u_b_0,…,b_j = ∑_i(i)[0:j-1]=(b_1,…,b_j) h_i,j(m_i). We refer to the function h_i,j as a partial digest function and refer to h_i,j(m_i) as the partial digest of m_i. In a homomorphic Merkle tree, every internal node is the sum of the partial digests of the leaves under that node. We will show in Section <ref> that each function h_i,j can be expressed as an iterated composition of the functions f and f̃. Evaluating h_i,j requires evaluating the functions f and f̃ exactly h-j times. Opening proof for a message consists of both children of the internal nodes on the path from the message to the root (as opposed to Merkle opening proofs that contain only the siblings of the internal nodes on the path). For instance, the opening proof for the message m_i at leaf index i, with (i) = (b_1,…,b_h), is (i, (u_b_0,…,b_j,0,u_b_0,…,b_j,1)_j=0,…,h-1). Opening proofs are verified using the functions f and f̃ (not by using the functions h_i,j). To verify an opening proof (i, (u_b_0,…,b_j,0,u_b_0,…,b_j,1)_j=0,…,h-1) for a message m_i with respect to the root u_b_0, the verifier checks if the following equalities hold: llCl for the leaf: g^-1(u_b_0,(i)) = f(m_i) for the internal nodes: g^-1(u_b_0,…,b_j) = f̃(u_b_0,…,b_j,0, u_b_0,…,b_j,1) for j = h-1, …, 0. If so, it accepts the proof, and otherwise it outputs reject. As an example, consider a homomorphic Merkle tree that commits to four messsages m_0,m_1,m_2,m_3. 
Then, its root u_ and inner nodes u_,0, u_,1, u_,0,0, u_,0,1, u_,1,0, u_,1,1 can be calculated as follows: rClrCl u_ = h_0,0(m_0) + h_1,0(m_1) + h_2,0(m_2) + h_3,0(m_3) ; u_,0,0 = h_0,2(m_0) u_,0 = h_0,1(m_0) + h_1,1(m_1) ; u_,0,1 = h_1,2(m_1) u_,1 = h_2,1(m_2) + h_3,1(m_3) ; u_,1,0 = h_2,2(m_2) u_,1,1 = h_3,2(m_3) The opening proof for m_3 is given by (3, ((u_,0, u_,1), (u_,1,0, u_,1,1))), and verified by checking the following equations: llCl for u_,1,1: g^-1(u_,1,1) = f(m_i) for u_,1: g^-1(u_,1) = f̃(u_,1,0, u_,1,1) for u_: g^-1(u_) = f̃(u_,0, u_,1) It now follows that when a message m_i is updated to m'_i, each inner node on the path from the leaf to the root can be updated from u_b_0,…,b_j to u'_b_0,…,b_j using the functions h_i,j as follows: u'_b_0,…,b_j = h_i,j(m'_i) + ∑_x ≠ i (x)[0:j-1]= (b_1,…,b_j) h_x,j(m_x) = u_b_0,…,b_j + h_i,j(m'_i) - h_i,j(m_i) When the partial digest functions are linear in their input, the expression h_i,j(m'_i) - h_i,j(m_i) can be written as h_i,j(m'_i) - h_i,j(m_i) = sign(m'_i-m_i)h_i,j(|m'_i-m_i|). This lets us calculate the updated internal node using only the knowledge of the message diff m_i'-m_i. We provide examples of homomorphic Merkle tree constructions in Section <ref> with linear partial digest functions h_i,j. Homomorphic Merkle proofs in these constructions consist of the two siblings of the inner nodes on the path from the proven message to the root (Section <ref>). Unlike in Section <ref>, homomorphic Merkle trees enable calculating the new inner nodes after message updates using only the new and the old updated messages, in particular using only their difference. Hence, we can construct a tree that achieves the same complexity for the update information size as algebraic VCs, albeit at the expanse of the proof update complexity, without requiring the users to keep track of the old messages or to calculate the tree from scratch given all messages (Appendix <ref> for further discussion). This is in contrast to Merkle trees based on SHA256. The update and proof update algorithms of such a homomorphic Merkle tree with no structured update information and the same asymptotic complexity as algebraic VCs is described in Appendix <ref>. Since the homomorphic Merkle trees can achieve both extremes in terms of update information size and update runtime (Table <ref>), with a smart structuring of the update information, they can support sublinear update. We show how in the next subsection. §.§ Structuring the Update Information We now describe the new update and proof update algorithms that enable homomorphic Merkle trees to achieve sublinear complexity as a function of the parameter ν (Alg. <ref>). §.§.§ Update Information When the messages (i_j, m_i_j)_j ∈ [k] change to (i_j, m'_i_j)_j ∈ [k], the update information U is generated recursively using the following algorithm: * Start at the root u_b_0. Terminate the recursion at an inner node if there are k^1-ν or less updated messages under that node. * If there are more than k^1-ν updated messages with indices ≥ N/2, , under the right child, then publish the new right child of the root as part of U, and apply the same algorithm to the subtree rooted at the right child, with u_b_0 and N replaced by u_b_0,1 and N/2 respectively. 
* If there are more than k^1-ν updated messages with indices less than N/2, , under the left child, then publish the new left child of the root as part of U, and apply the same algorithm to the subtree rooted at the left child, with u_b_0 and N replaced by u_b_0,0 and N/2 respectively. The new values of the inner nodes included in U are again listed from the smallest to the largest depth in the canonical left to right order. §.§.§ Proof Update When the messages (i_j, m_i_j)_j ∈ [k] are updated to (i_j, m'_i_j)_j ∈ [k], a user first retrieves the inner nodes within its Merkle proof that are published as part of the update information. It then calculates the non-published inner nodes within the proof using the partial digests. For instance, consider a user with the proof (u_b_1, u_b_1,b_2, …, u_b_1,b_2,…,b_h) for some message m_x, (b_1, …, b_h) = (x). To update the proof, the user first checks the update information U and replaces the inner nodes whose new values are provided by U: u'_b_1,…,b_d U[b_1 …b_d], d ∈ [h], if U[b_1 …b_d] ≠. Otherwise, the user finds the new values at the nodes u_b_1,…,b_d, d ∈ [h], using the functions h_x,d: rCl u'_b_1, …, b_d-1,b_d = u_b_1, …, b_d-1,b_d + ∑_j ∈[k] 1_(i_j)[:d] = (b_1, …, b_d-1,b_d) (sign(m'_i_j-m_i_j)h_i_j,d(|m'_i_j -m_i_j|))) §.§.§ Complexity Finally, we prove bounds on the complexity given by these algorithms: Complexity of the update information size and the runtime of proof updates are as follows: g_1(k) = k^ν and g_2(k) = k^1-ν. We finally show that this VC publishes O(k^ν) new inner nodes in the worst case. Let 𝒰 denote the subset of the inner nodes published by the algorithm as part of U such that no child of a node u ∈𝒰 is published. Then, there must be over k^1-ν updated messages within the subtree rooted at each node u ∈𝒰. Since there are k updated messages, and by definition of 𝒰, the subtrees rooted at the nodes in 𝒰 do not intersect at any node, there must be less than k/k^1-ν = k^ν inner nodes in 𝒰. Since the total number of published inner nodes is given by 𝒰 and the nodes on the path from the root to each node u ∈𝒰, this number is bounded by k^νlog(N) = Θ(k^ν). Hence, |U| = Θ(k^νlog(N)(log(N)+|H|)) = Θ(k^ν)|H| = Θ(k^ν) λ, which implies g_1(k) = k^ν. For each inner node in its Merkle proof, the user can check if a new value for the node was provided as part of U, and replace the node if that is the case, in at most Θ(log(N)+|H|) time by running a binary search algorithm over U. On the other hand, if the new value of a node in the proof is not given by U, the user can calculate the new value after at most k^1-νlog(N) function evaluations. This is because there can be at most k^1-ν updated messages within the subtree rooted at an inner node, whose new value was not published as part of U. This makes the total time complexity of a proof update at most C Θ(log(N)(log(N)+|H|+k^1-νlog(N)T_f)) = Θ(k^1-ν) T_f, which implies g_2(k) = k^1-ν. §.§ Constructions for Homomorphic Merkle Trees Homomorphic Merkle trees were proposed by <cit.>. These hash functions are lattice-based, and their collision-resistance is proven by reduction to the hardness of the gap version of the shortest vector problem (𝖦𝖠𝖯𝖲𝖵𝖯_γ), which itself follows from the hardness of the small integer solution problem. We next describe the construction introduced by <cit.>, which is similar to those proposed by later works <cit.> (an alternative construction is provided in Appendix <ref>). Its correctness and security follow from <cit.>. 
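Before the lattice details, the homomorphic structure can be illustrated with a deliberately insecure toy in Python: take the combining function to be the linear map f̃(x,y) = Ax + By over the integers mod q, so that every inner node is the sum of per-message partial digests h_{i,j}(m_i) and a proof holder can fold the diff m'_i - m_i into any node on its path without reading other nodes or messages. The constants and maps below are insecure stand-ins for the collision-resistant lattice functions of the constructions that follow; they are for illustration only.

# Toy homomorphic "Merkle" tree over the integers mod q with linear node functions.
q = (1 << 61) - 1                  # modulus for the toy
A, B = 3, 7                        # "left"/"right" mixing constants (no security)

def f(m):                          # leaf digest
    return m % q

def partial_digest(m, index, level, height):
    """h_{index,level}(m): contribution of leaf `index` to its ancestor at depth
    `level` (depth 0 is the root, depth `height` holds the leaves)."""
    v = f(m)
    bits = [(index >> (height - 1 - j)) & 1 for j in range(height)]
    for b in reversed(bits[level:]):     # ascend from the leaf up to depth `level`
        v = (A * v if b == 0 else B * v) % q
    return v

def node(messages, level, pos, height):
    """An inner node equals the sum of the partial digests of the leaves below it."""
    width = 1 << (height - level)
    return sum(partial_digest(m, pos * width + i, level, height)
               for i, m in enumerate(messages[pos * width:(pos + 1) * width])) % q

def updated_node(old_node, diff, index, level, height):
    """A proof holder folds the diff m'-m into a path node without reading anything else."""
    return (old_node + partial_digest(diff, index, level, height)) % q

# Sanity check on 8 leaves: recomputing vs. diff-updating the root after changing leaf 5.
msgs = list(range(8)); height = 3
root_before = node(msgs, 0, 0, height)
msgs2 = msgs[:]; msgs2[5] += 42
assert node(msgs2, 0, 0, height) == updated_node(root_before, 42, 5, 0, height)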
Let L(𝐌) denote the lattice defined by the basis vectors 𝐌⊂ℤ^k × m_q for appropriately selected parameters k,m,q, where m = 2 k log q. Consider vectors u ∈{0, …, t}^k log q, where t is a small integer. The homomorphic hash functions f ℤ^k log q→ L(𝐌) and f̃ℤ^k log q×ℤ^k log q→ L(𝐌) used by <cit.> are defined as f(x) = 𝐌x and f̃(x,y) = 𝐌𝐔 x + 𝐌𝐃 y respectively. Here, 𝐔 and 𝐃 are special matrices that double the dimension of the multiplied vector and shift it up or down respectively. The remaining entries are set to zero. For convenience, we define 𝐋 = 𝐌𝐔 and 𝐑 = 𝐌𝐃. Since the domain and range of the hash functions are different, to ensure the Merkle tree's homomorphism, authors define a special mapping g ℤ^k_q →ℤ^k logq_q from the range of the hash functions to their domain. Here, g(.) takes a vector 𝐯∈ℤ_q as input and outputs a radix-2 representation for 𝐯. However, as there can be many radix-2 representations of a vector, to help choose a representation that yields itself to homomorphism, authors prove the following result: for any x_1, x_2, …, x_t ∈ℤ_q, there exists a short radix-2 representation g(.) such that g(x_1 + x_2 + … + x_t q) = b(x_1) + b(x_2) + … + b(x_t) q ∈{0, …, t}^k log q, where the function b ℤ^k_q →{0,1}^klogq returns the binary representation of the input vector. This equality enables the mapping g(.) to preserve the hash functions' original homomorphic property. Then, given an inner node u_b_0,…,b_j as input, the homomorphic Merkle tree uses the short radix-2 representation g(.) that enforces the following equality: g(u_b_0,…,b_j) = g(𝐋 u_b_0,…,b_j,0 + 𝐑 u_b_0,…,b_j,1 q) = b(𝐋 u_b_0,…,b_j,0) + b(𝐑 u_b_0,…,b_j,1) q. Finally, this enables calculating the value of each inner node as a sum of the partial digests h_i,j(.) of the messages m_i under the node u_b_0,…,b_j (, (m_i)_(i)[0:j] = (b_0,…,b_j)) as outlined in Section <ref>, i.e., u_b_0,…,b_j equals rCl 𝐋g(u_b_0,b_1,…,b_j,0) + 𝐑g(u_b_0,b_1,…,b_j,1) = 𝐋g(𝐋g(u_b_0,…,b_j,0,0) + 𝐑g(u_b_0,…,b_j,0,1)) + 𝐑g(𝐋g(u_b_0,…,b_j,1,0) + 𝐑g(u_b_0,…,b_j,1,1)) = 𝐋b(𝐋g(u_b_0,…,b_j,0,0)) + 𝐋b(𝐑g(u_b_0,…,b_j,0,1)) + 𝐑b(𝐋g(u_b_0,…,b_j,1,0)) + 𝐑b(𝐑g(u_b_0,…,b_j,1,1)) = ∑_i(i)[0:j-1]=(b_1,…,b_j) h_i,j(m_i), where h_i,j(.) is expressed in terms of the bits (i)[j:h-1] = (b'_1, …, b'_h-j): C h_i,j(m_i) = f_b'_1(f_b'_2(…f_b'_h-j(f(m_i)))) Here, f_0(.) and f_1(.) are defined as 𝐋b(.) and 𝐑b(.) respectively. Since b(.), binary expansion, is a linear operation and matrix multiplication is linear, h_i,j(.) is linear in its input. Opening proof of a message m consists of its index and g(α_i) and g(β_i), i ∈ [h], h = log(N), where α_i and β_i are the children of the inner nodes on the path from m to the root. The proof can be verified in log(N) time by iteratively checking if f(m) = g^-1(α_h) (or = g^-1(β_h)) and f̃(g(α_i),g(β_i)) = g^-1(α_i-1) (or =g^-1(β_i-1) depending on the message index), where g^-1 returns a number given its radix-2 representation <cit.>. Note that both f and f̃ are homomorphic hash functions <cit.>. Other examples of homomorphic hash functions include Pedersen hashes and KZG commitments. However, the homomorphic property of the hash function is not sufficient for constructing a homomorphic Merkle tree when the function is combined with the output of other functions in a serial manner as in Merkle trees. For the lattice-based function, this was possible because of repeated linearity <cit.>, which refers to the existence of a linear mapping g(.) from the range to the domain of the hash function. 
This mapping enabled the iterative hashing within the Merkle tree to preserve the linearity of the hash function. Such repeated linearity does not exist for Pedersen hashes and KZG commitments as a linear mapping from the range to the domain would imply the violation of the discrete log assumption. That is why Verkle trees based on KZG commitments are not homomorphic and cannot support sublinear update. §.§ A Concrete Evaluation Suppose the Ethereum state is persisted using the homomorphic Merkle tree construction of <cit.> with the trade-off parameter ν = 1/2. We next estimate the size of the update information and the proof update time after observing an Ethereum block with ERC20 token transfers. Suppose the block has the target size of 15 million gas <cit.>, and each token transfer updates the balance of two distinct accounts stored at separate leaves of the homomorphic Merkle tree. Since each ERC20 token transfer consumes approximately 65,000 gas, there are ∼ 230 such transactions in the block, and the block updates k = 460 accounts. Suppose the homomorphic Merkle tree has degree 2 and commits to N = 256^3 = 2^24 accounts. For comparison, 256^3 ≈ 16.7 million, matching in magnitude the total number of cumulative unique Ethereum addresses, which is 200 million as of 2023 <cit.>. Each opening proof consists of 2log(N) = 48 inner nodes. When 460 accounts are updated, in the worst case, the update information consists of ⌈√(k)⌉log(N) = 528 inner nodes. To evaluate its size, we use the parameters calculated by <cit.> for secure instantiations of the homomorphic Merkle trees from both their paper and <cit.>. Since the parameters for <cit.> result in a large inner node size on the order of hundreds of MBs, our evaluation takes the size of an inner node as that of <cit.>, namely |H| = 0.21 MB (which is equal to the key size in <cit.>). This implies an update information size of |U| = 110.88 MBytes and an opening proof size of |π| = 10.08 MBytes. As for update time, in the worst case, each user has to calculate the partial digests of 44 updated messages at each height of the homomorphic Merkle tree, , the effect of these updated messages on each inner node of its opening proof. Calculating the partial digest of a message at height h measured from the leaves requires h evaluations of the hash function. This implies a proof update complexity of 2 ∑_i=0^logN-1 i min(⌈√(k)⌉, 2^i) = 11,900 hash evaluations. To find numerical upper bounds for the update time, we use the hash function evaluation times, namely T_f = 26.84 and T_f = 2.74 ms, published by <cit.> for both the hash function in <cit.> and their new and more performant function (these times are for commodity hardware; <cit.> for the details). This gives an upper bound of 319.4 and 32.6 seconds for the update time using the hash functions in <cit.> and <cit.> respectively. Based on the benchmarks for the practical hash function introduced in <cit.>, Table <ref> compares the number of published inner nodes ⌈ k^ν⌉log(N), the total update information size ⌈ k^ν⌉log(N) |H| (assuming that the size of each inner node is |H| upper bounded by 0.21 MBytes), the number of hash function evaluations per proof update 2 ∑_i=0^logN-1 i min(⌈ k^1-ν⌉, 2^i) and the proof update time 2 ∑_i=0^logN-1 i min(⌈ k^1-ν⌉, 2^i) T_f (assuming that each hash evaluation takes less than T_f = 2.74 ms) at ν = 0, 1/4, 1/2, 3/4, 1. The degree of the homomorphic Merkle tree and the opening proof size are fixed at 2 and 48 inner nodes (|π| = 10.08) respectively. 
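The recursive selection of published inner nodes described earlier, together with the counts used in this evaluation, can be reproduced with a short Python sketch. The thresholds and formulas below mirror the text; the resulting figures are simple worst-case counters, not measurements of any lattice implementation.

import math, random

def published_nodes(updated, N, k, nu):
    """Return (depth, position) pairs of the inner nodes published as update
    information: starting from the root, descend into any child with more than
    k**(1-nu) updated leaves below it and publish that child."""
    thr = k ** (1 - nu)
    out = []

    def recurse(depth, pos, lo, hi):            # node covers leaf indices [lo, hi)
        if hi - lo == 1 or sum(1 for u in updated if lo <= u < hi) <= thr:
            return
        mid = (lo + hi) // 2
        for child, (l2, h2) in enumerate(((lo, mid), (mid, hi))):
            if sum(1 for u in updated if l2 <= u < h2) > thr:
                out.append((depth + 1, 2 * pos + child))
                recurse(depth + 1, 2 * pos + child, l2, h2)

    recurse(0, 0, 0, N)
    return out

# Parameters of the evaluation above: N = 2^24 leaves, k = 460 updates, nu = 1/2.
N, k, nu = 2 ** 24, 460, 0.5
updated = set(random.sample(range(N), k))
U = published_nodes(updated, N, k, nu)
bound = math.ceil(k ** nu) * int(math.log2(N))                  # 528 published nodes
work = 2 * sum(i * min(math.ceil(k ** (1 - nu)), 2 ** i)        # 11,900 hash
               for i in range(int(math.log2(N))))               # evaluations per proof
print(len(U), "published nodes (worst-case bound:", bound, "); work:", work)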
§ UPDATING VERKLE TREES AND OPENING PROOFS We now describe the update and proof update functions for Verkle trees (Algs. <ref> and <ref> respectively). Since Verkle trees were proposed to support stateless clients, we describe an update scheme that minimizes the runtime complexity of proof updates and does not require the users to download the updated messages or have access to old inner nodes. As Verkle trees do not support sublinear update, we numerically estimate the size of the update information and the complexity of proof updates in Section <ref>. §.§ Update Information Suppose the vector (i, m_i)_i ∈ [N] is modified at some index x, (b_1, …, b_h) = (x) to be m'_x. Since each inner node is the hash of a KZG commitment, the new inner nodes u'_b_0,…,b_j = H(C'_b_0,…,b_j), j ∈ [h], can be found as a function of the old commitments at the nodes and the powers of the Lagrange basis polynomials as described in Section <ref>: C C'_b_0,…,b_h m'_x, C'_b_0,…,b_j C_b_0,…,b_j [L_b_j+1]^(u'_b_0,…,b_j+1-u_b_0,…,b_j+1) When k messages are updated, the above calculation is repeated k times for each update. Update information U consists of the new values of the KZG commitments on the path from the updated messages to the Verkle root akin to the Merkle trees, ordered in the canonical top-to-bottom and left-to-right order. §.§ Verkle Proofs Let π_x denote the Verkle proof of some message m_x at index x, (b_1,…,b_h) = (x): π_x = ((C_b_0,b_1, …, C_b_0,…,b_h-1), ([g(X)], π)). We define π^f_x as the opening proof for index x within polynomial f. We observe that the commitment [g(X)] and the proof π can be expressed as functions of the opening proofs of the inner nodes u_b_0,b_1, …, u_b_0,…,b_h at the indices b_1,…,b_h within the polynomials f_b_0, …, f_b_0,…,b_h-1, respectively: rCl [g(X)] = [∑_j=0^h-1 r^j f_b_0,…,b_j(X)-u_b_0,…,b_j+1/X-b_j+1] = ∏_j=0^h-1 [f_b_0,…,b_j(X)-u_b_0,…,b_j+1/X-b_j+1]^r^j = ∏_j=0^h-1 (π^f_b_0,…,b_j_b_j+1)^r^j. Similarly, the opening proof π=π^(h-g)_t for index t within the polynomial h(X)-g(X) can be expressed as follows (Appendix <ref>): rCl [h(X)-g(X)-(h(t)-g(t))/X-t] = ∏_j=0^h-1 [f_b_0,…,b_j(X)-u_b_0,…,b_j+1/X-b_j+1]^r^j/t-b_j+1 = ∏_j=0^h-1 (π^f_b_0,…,b_j_b_j+1)^r^j/t-b_j+1 We assume that each user holding the Verkle proof π_x for some index x, (b_1,…,b_h) = (x), also holds the opening proofs π^f_b_0,…,b_j_b_j+1, j ∈ [h], in memory. As we will see in the next section, the user also holds the KZG commitments at the children of the inner nodes on the path from the root to the message m_x, C_b_0,…,b_j,i for all j ∈ [h] and i ∈ [c] in memory. These opening proofs and KZG commitments are not broadcast as part of any proof; however, they are needed for the user to locally update its Verkle proof after message updates. §.§ Proof Update When the messages (i_j, m_i_j)_j ∈ [k] are updated to (i_j, m'_i_j)_j ∈ [k], to calculate the new Verkle proof π'_x, the user must obtain the new commitments C'_b_0, …, C'_b_0,…,b_h-1 on the path from the root to message m_x, the new commitment [g'(X)] and the new opening proof π' for the polynomial h'(X)-g'(X) at index t'= H(r',[g'(X)]). Message updates change the commitments at the inner nodes, which in turn results in new polynomials f_b_0,…,b_j, j ∈ [h]. Suppose each polynomial f_b_0,…,b_j, j ∈ [h], is updated so that C f'_b_0,…,b_j(X) = f_b_0,…,b_j(X) + ∑_i=0^c-1(f'_b_0,…,b_j(i)-f_b_0,…,b_j(i)) L_i(X), where, by definition, f'_b_0,…,b_j(i)-f_b_0,…,b_j(i) = u'_b_0,…,b_j,i-u_b_0,…,b_j,i = H(C'_b_0,…,b_j,i)-H(C_b_0,…,b_j,i). 
Then, given the new and the old commitments (C_b_0,…,b_j,i,C'_b_0,…,b_j,i) for i ∈ [c] and j ∈ [h], the table of Lagrange basis polynomials, and using the technique in Section <ref>, the new opening proofs π̃^f_b_0,…,b_j_b_j+1 after the message updates can be computed as follows for j ∈ [h]: C π̃^f_b_0,…,b_j_b_j+1 = π^f_b_0,…,b_j_b_j+1 ∏_i=0^c-1[L_i(X)-L_i(b_j+1)/X-b_j+1]^(H(C'_b_0,…,b_j,i)-H(C_b_0,…,b_j,i)), where [L_i(X)-L_i(b_j+1)/X-b_j+1] is the opening proof of the Lagrange basis polynomial L_i(X) at index b_j+1. Once the new opening proofs are found, the new commitment [g'(X)] and the new proof π' become C [g'(X)] = ∏_j=0^h-1 (π̃^f_b_0,…,b_j_b_j+1)^r'^j, π' = ∏_j=0^h-1 (π̃^f_b_0,…,b_j_b_j+1)^r'^j/t'-b_j+1 where r'=H(C'_b_0,b_1,..,C'_b_0,…,b_h-1,u'_b_0,b_1,..,u'_b_0,…,b_h,b_1,..,b_h) and t'=H(r',[g'(X)]). Note that both r' and t' can be calculated by the user given the new KZG commitments C'_b_0,…,b_j,i for all i ∈ [c] and j ∈ [h]. Finally, to retrieve the new KZG commitments C'_b_0,…,b_j,i for all i ∈ [c] and j ∈ [h], the user inspects the commitments published as part of the update information U: C'_b_0,b_1,…,b_j-1,i U[b_0,b_1,…,b_j-1,i] if U[b_0,b_1,…,b_j-1,i] ≠ and C'_b_0,b_1,…,b_j-1,i C_b_0,b_1,…,b_j-1,i otherwise, for all i ∈ [c] and j ∈ [h]. In Verkle trees, the user cannot calculate the effect of an updated message on an arbitrary inner node without the knowledge of the inner nodes on the path from the message to the target node. For instance, suppose U[b_0,b_1,…,b_j-1,i] = for some i ∈ [c] and j ∈ [h], and the user wants to calculate the effect of an update from m_x to m'_x on C'_b_0,…,b_j-1,i,b̃_j+1,…,b̃_h, (x) = (b_1,…,b_j-1,i,b̃_j+1,…,b̃_h) and b̃_j = i. Then, for each ℓ∈{j,…,h-1}, the user have to find rCl C'_b_0,…,b̃_j,…,b̃_h m'_x C'_b_0,…,b̃_j,…,b̃_ℓ C_b_0,…,b̃_j,…,b̃_ℓ [L_b̃_ℓ+1]^(u'_b_0,…,b̃_j,…,b̃_ℓ+1-u_b_0,…,b̃_j,…,b̃_ℓ+1), where C'_b_0,…,b̃_j,…,b̃_ℓ are the commitments on the path from the target commitment C_b_0,b_1,…,b_j-1,i to the message m_x. Hence, the user has to know the original commitments on the path from the message to the target commitment, , keep track of inner nodes, which contradicts with the idea of stateless clients. This shows the necessity of publishing all of the updated inner nodes as part of the update information. §.§ Complexity Suppose each KZG commitment is of size |G| and each hash H(C) of a KZG commitment, each inner node, has size |H|. Then, updating a single message results in one update at each level of the Verkle tree and requires Θ(h|H|) group operations. Thus, when k messages are updated, the new Verkle root can be found after Θ(kh|H|) group operations. As U consists of the published KZG commitments at the inner nodes and their indices, |U| = Θ(k log_c(N)(log(N)+|G|)) = Θ(k)|G|, which implies g_1(k) = k. The user can replace each KZG commitment at the children of the inner nodes from the root to its message in Θ(log(N)+|G|) time by running a binary search algorithm over U. Since there are ch such commitments to be updated, , C_b_0,…,b_j,i, i ∈ [c] and j ∈ [h], updating these commitments takes Θ(c h (log(N)+|G|)) = Θ(1) time. Upon obtaining the new commitments C'_b_0,…,b_j-1,i, i ∈ [c], j ∈ [h], with access to the table of Lagrange basis polynomials, the user can update each opening proof π_b_j+1 (for the function f_b_0,…,b_j), j ∈ [h], with Θ(c|H|) group operations. Since there are h such proofs, updating them all requires Θ(c h |H|) group operations. 
Given the new proofs, computing the new commitment [g'(X)] and proof π' requires Θ(h |H|) group operations. This makes the total complexity of updating a Verkle proof Θ((c+2) h) |H| T_G + Θ(c h (log_c(N)+|G|)). For a constant c and h = log_c(N), this implies a worst-case time complexity of Θ(1) |H| T_G for Verkle proof updates, i.e., g_2(k) = 1. §.§ A Concrete Evaluation We now estimate the size of the update information and the number of group operations to update an opening proof after observing an Ethereum block consisting of ERC20 token transfers. As in Section <ref>, suppose the block has the target size of 15 million gas <cit.>, and each token transfer updates the balance of two distinct accounts stored at separate leaves of the Verkle tree. Then, there are ∼ 230 such transactions in the block, and the block updates k = 460 accounts. We assume that the Verkle tree has degree 256 (see <cit.>) and commits to 256^3 accounts as in Section <ref>. Then, each proof consists of 2 KZG commitments, C_,b_1 and C_,b_1,b_2, and a multiproof consisting of the commitment [g(X)] and the proof π'. These components are elements of the pairing-friendly elliptic curve BLS12_381 and consist of |G| = 48 bytes each <cit.>. This implies a proof size of (log_c(N)+1)|G| = 192 bytes (excluding the message at the leaf and its hash value; adding those makes it 272 bytes). When 460 accounts are updated, in the worst case, the update information has to contain k log_c(N) (log(N)+|G|) = 460 × 3 × (24+48) Bytes, i.e., 99.4 kBytes. This is comparable to the size of Ethereum blocks, which are typically below 125 kBytes <cit.>. Hence, even though the update information of Verkle trees is linear in k, it does not introduce a large overhead beyond the block data. Note that the runtime of the proof updates is constant and does not scale with the number of updated messages k or the Ethereum block size. On the other hand, in the worst case, an opening proof can be updated after c log_c(N) |H| + 2 log_c(N) |H| group operations. Then, with |H|=256, the number of bits output by SHA256, as many as c log_c(N) |H| + 2 log_c(N) |H| = (c + 2) log_c(N) |H| = 774 × 256 ≈ 200,000 elliptic curve multiplications might have to be made. Following the benchmarks published in <cit.> for the specified curve, these operations can take up to (c + 2) log_c(N) × 0.000665471 seconds ≈ 0.52 seconds on commodity hardware, given a runtime of 665,471 nanoseconds per exponentiation of a group element with a message hash value. This is again comparable to the 12-second inter-arrival time of Ethereum blocks. Table <ref> compares the Verkle proof size |π| = (log_c(N)+1) |G|, the update information size |U| = k log_c(N) (log(N)+|G|), the upper bound (c + 2) log_c(N) |H| on the number of group operations needed for a single proof update, and the estimated time it takes to do these operations on commodity hardware, for different values of the Verkle tree degree c, while keeping the number of accounts and the number of updated accounts fixed at 2^24 and 460 respectively. The table shows the trade-off between the Verkle proof and update information sizes on one side and the update complexity on the other. Comparing Table <ref> with Table <ref> shows that the Verkle tree with any given degree c, 1 < c ≤ 256, significantly outperforms the existing homomorphic Merkle trees in Section <ref> in terms of almost all of proof size, update information size and proof update time.
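The estimates in this subsection can be reproduced with a few lines of Python; the 665,471 ns figure is the per-exponentiation cost quoted above from the cited benchmark, and the remaining numbers follow directly from the formulas in the text.

import math

c, N, k = 256, 256 ** 3, 460     # degree, committed accounts, updated accounts
G, H = 48, 256                   # |G| in bytes (BLS12-381), |H| in bits (SHA256)
h = round(math.log(N, c))        # tree height = 3

proof_size  = (h + 1) * G                       # 192 bytes
update_info = k * h * (int(math.log2(N)) + G)   # 99,360, i.e. ~99.4 kBytes (as in the text)
group_ops   = (c + 2) * h * H                   # 198,144 ~ 200,000 multiplications
update_time = (c + 2) * h * 665471e-9           # ~0.52 s at 665,471 ns per exponentiation

print(proof_size, update_info, group_ops, round(update_time, 2))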
§ LOWER BOUND Finally, we prove the optimality of our VC scheme with sublinear update by proving a lower bound on the size of the update information given an upper bound on the complexity of proof updates. The lower bound is shown for VCs that satisfy the following property. It formalizes the observation that for many dynamic VCs (, Merkle trees <cit.>, Verkle trees <cit.>, KZG commitments <cit.>, RSA based VCs <cit.>), the opening proof for a message at some index can often act as a commitment to the vector of the remaining messages. A VC scheme is said to be if the following probability is negligible in λ for all PPT adversaries 𝒜: C [Verify_pp(C, m_i^*, i^*, π) = 1 Verify_pp(C', m_i^*, i^*, π) = 1 pp KeyGen(1^λ, N); π, m_i^*, (m_0, …, m_i^*-1, m_i^*+1, …, m_N-1), (m'_0, …, m'_i^*-1, m'_i^*+1, …, m'_N-1) 𝒜(pp); (m_0, …, m_i^*-1, m_i^*+1, …, m_N-1) ≠(m'_0, …, m'_i^*-1, m'_i^*+1, …, m'_N-1); Commit_pp(m_0, …, m_i^*-1, m_i^*, m_i^*+1, …, m_N-1) = C; Commit_pp(m'_0, …, m'_i^*-1, m_i^*, m'_i^*+1, …, m'_N-1) = C' ] To prove the lower bound, we first show that implies that (i^*, m_i^*, π) is a binding commitment to the rest of the vector. Consider a dynamic and VC, where π is the correctly generated opening proof for the message m_i at some index i. Then, for any i ∈ [N], it holds that the tuple (i, m_i, π) is a binding commitment to the vector of messages m_j, j ∈ [N], j ≠ i. Since the VC is , with overwhelming probability, no PPT adversary 𝒜 can find an opening proof π^*, an index i^*, a message m^* and two sequences of messages such that C (m_1, …, m_i^*-1, m_i^*+1, …, m_N-1) ≠(m'_1, …, m'_i^*-1, m'_i^*+1, …, m'_N-1) and Verify_pp(C, m_i^*, i^*, π) = Verify_pp(C', m_i^*, i^*, π) = 1, where C and C' are commitments to the message sequences (m_1, …, m_i^*-1, m_i^*, m_i^*+1, …, m_N-1) and (m'_1, …, m'_i^*-1, m_i^*, m'_i^*+1, …, m'_N-1). Thus, it holds that the tuple (i, m_i, π) is a binding commitment to the vector of messages m_j, j ∈ [N], j ≠ i, with the following new commitment function: C NewCommit_pp((m_j)_j ∈[N], j ≠i) = (i, m_i, Open_pp(m_i, i, )), where = Commit_pp(m_0, …, m_N-1).. The following lemma shows that all randomized VCs can be derandomized to obtain a deterministic and secure VC as we do not use hiding commitments in this work. Consider a VC , where the commitment is a random function of the public parameters pp and the committed messages. Let ' denote the VC that is the same as , except that the randomness is fixed. Then, ' is a correct and secure VC with at most the same upper bound on the error probability. Let R denote the sequence of bits sampled uniformly at random from the set ℛ to instantiate the VC . Since is binding, no PPT adversary 𝒜 can find two different sequences of messages 𝐦 and 𝐦' such that (𝐦, R) = (𝐦', R') for some R,R' ∈ℛ, except with negligible probability. This implies that for any fixed R^* ∈ℛ, no PPT adversary 𝒜 can find two different sequences of messages 𝐦 and 𝐦' such that (𝐦, R^*) = (𝐦', R^*), except with negligible probability. Hence, the commitment scheme '(.) = (., R^*) is a position-binding, , secure VC. Its correctness follows from the correctness of . Finally, equipped with Lemmas <ref> and <ref>, we can prove the following lower bound for dynamic and VCs. Consider a dynamic and VC such that for every PPT adversary 𝒜, it holds that C [Verify_pp(C, m, i, π_i) = 1 Verify_pp(C, m', i, π'_i) = 1 m ≠m' pp KeyGen(1^λ, N) (C, m, m', π_i, π'_i) 𝒜(pp)] ≤e^-Ω(λ). Then, for this VC, if g_2(k) = O(k^1-ν), then g_1 = Ω(k^ν) for all ν∈ (0,1). 
Suppose the messages m_i_j, j ∈ [k], are updated to m'_i_j. Define 𝒮 as the sequence (m'_i_j)_j ∈ [k], and let m'_i = m_i for i ∉{i_j j ∈ [k]}. Let 𝒫_i, i ∈ [N], denote the user that holds the opening proof π_i for the message m_i at index i, and aims to calculate the new proof π'_i for the message m'_i using π_i, the update information U and the old and the new sequences of messages m_i, m'_i, i ∈ [N]. Suppose g_2 = O(k^1-ν). Then, there exists a constant α such that each user can read at most α k^1-ν of the updated messages while updating its opening proof. Let 𝒮_i ⊆ (m'_i_j)_j ∈ [k] denote the sequence of updated messages and their indices, which were not observed by 𝒫_i, and 𝒮_i = 𝒮∖𝒮_i denote the sequence read by 𝒫_i. Here, |𝒮| denotes the number of messages within the sequence 𝒮. Since 𝒫_i is assumed to know m'_i, it must be that m'_i ∈𝒮_i. We next show that each user 𝒫_i that successfully updates its opening proof must download enough bits of U to generate a binding, deterministic commitment to the set 𝒮_i. By Lemma <ref>, the tuple (i, m'_i, π'_i) is a binding commitment to the sequence of messages (m'_j)_j ∈ [N], j ≠ i. This implies that the tuple (i, 𝒮_i, π'_i) is a binding commitment to the sequence 𝒮_i. By Lemma <ref>, the commitment (i, 𝒮_i, π'_i) can be de-randomized to obtain a deterministic commitment C_i to the sequence 𝒮_i (with at most the same upper bound on the error probability). Let denote the deterministic VC scheme such that C_i = (𝒮_i). Since is a deterministic function given the public parameters, and the updated messages are sampled independently and uniformly at random, then I(𝒮_i;{m_i}_i ∈ N,𝒮_i|pp) = 0, where I(.;.) is the mutual information. Moreover, as π_i is a function of the old messages {m_i}_i ∈ N and the randomness of the original VC, I(C_i; π_i|pp) = 0. Hence, C_i = f(U, i, {m_i}_i ∈ N, π) is a deterministic function of the update information U. For all i ∈ [k], it holds that |𝒮_i| ≥ k - α k^1-ν and m'_i ∉𝒮_i. Given these constraints, the minimum number of distinct sequences 𝒮_i is k/α k^1-ν = k^ν/α. For an appropriately selected β that will be defined later, without loss of generality, let 𝒮_0, …, 𝒮_M-1 denote the first C M = min(⌊k^ν/β - α/β - λ/βk^1-ν ⌋, k^ν/α) distinct sequences. Since C_i is a deterministic function of U for all i ∈ N, it holds that the Shannon entropy H(.) of U satisfies the following expression: C H(U) ≥H(C_0, …, C_M-1) ≥H(C_0) + ∑_i=1^M-1 H(C_i | C_0, …, C_i-1) As g_2(k) = O(k^1-ν), there exists a constant β such that each user can download at most β k^1-ν bits of data from U. Then, for all i ∈ [k], it must be that H(C_i) ≤ H(U) ≤β k^1-ν since C_i is a deterministic function of U for each i ∈ [N]. Finally, we show that H(C_0), H(C_i | C_0, …, C_i-1) = Ω(λ) for all i=1, …, M-1. Towards contradiction, suppose ∃ i^* H(C_i^* | C_0, …, C_i^*-1) = o(λ). Note that rCl H(C_0, …, C_i^*-1) ≤ ∑_i=0^M-1 H(C_i) ≤ min(k^ν/β - α/β - λ/βk^1-ν, k^ν/α) βk^1-ν ≤k-αk^1-ν-λ. Now, consider an adversary 𝒜 that tries to break the binding property of the VC scheme . Due to the upper bound on the entropy of (C_0, …, C_i^*-1), it holds that H(𝒮_i^* | C_0, …, C_i^*-1) ≥λ; since H(𝒮_i^*) ≥ k-α k^1-ν, and rCl H(𝒮_i^*) - H(𝒮_i^* | C_0, …, C_i^*-1) = I(𝒮_i^*; (C_0, …, C_i^*-1)) ≤ H(C_0, …, C_i^*-1) ≤k-αk^1-ν-λ. However, when H(C_i^* | C_0, …, C_i^*-1) = o(λ), for sufficiently large λ, given (C_0, …, C_i^*-1), the adversary can find a collision such that (𝒮_i^*)=(𝒮'_i^*) for two 𝒮_i^*≠𝒮'_i^*, with probability 2^-o(λ). 
As this is a contradiction, it must be that H(C_0) and H(C_i | C_0, …, C_i-1) = Ω(λ) for all i < M, and thus, H(U) = Ω(k^νλ) and g_1(k) = Ω(k^ν). Theorem <ref> shows that the update information length scales as Θ(k^νλ) when the runtime complexity for proof updates is Θ(k^1-ν) and the error probability for the security of the VC is e^-Ω(λ) for a PPT adversary. When the error probability is just stated to be negligible in λ, then the same proof can be used to show that the update information length must scale as Ω(k^ν(λ)) for any polynomial function of log(λ). § CONCLUSION Dynamic VCs with sublinear update are the key to reducing the size of the global update information while minimizing the runtime of clients synchronizing with the latest commitment. In this work, we propose a construction that can achieve an update information size of Θ(k^ν) and a proof update time of Θ(k^1-ν) in the number of changed messages k. Our construction combines a novel update algorithm (Alg. <ref>) with homomorphic Merkle trees <cit.> that allow each inner node to be expressed as a linear function of the underlying messages. It achieves the smallest asymptotic complexity for the update information size and proof update time. We also provide update algorithms for the Verkle trees proposed for stateless clients on Ethereum. The existing instantiations of homomorphic Merkle trees are based on lattices and require relatively large parameters for security. Consequently, despite the appealing asymptotic complexity of our construction, its performance for concrete parameters is dominated by Verkle trees. As such, designing asymptotically optimal and practically efficient dynamic VCs remains an open problem. An interesting direction is to design a more preferment homomorphic Merkle tree system. Acknowledgments. This work was partially funded by NSF, DARPA, the Simons Foundation, and NTT Research. Additional support was provided by the Stanford Center for Blockchain Research. Opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. plain § LOWER BOUND ON THE SIZE OF THE UPDATE INFORMATION Consider a dynamic accumulator, where k out of N messages m_i,j are updated to m'_i,j≠ m_i_j, j ∈ [k]. Suppose |M| = (λ). Then, Ω(k (log(N|ℳ|))) bits of information must be published to enable updating the opening proofs after these k updates. The proof idea is very similar to those presented in <cit.>. Namely, the update information must contain a minimum amount of bits for the VC to remain correct and secure after the update. Consider a game between a platform 𝒫 maintaining the data structures of the VC and an adversary 𝒜. The platform 𝒫 updates k out of N messages m_i,j to m'_i,j≠ m_i_j, j ∈ [k], in a way not known to user 𝒜, and publishes the update information U along with the new commitment value C' (let m'_i = m_i for i ∉{i_j j ∈ [k]}). Before receiving the update information, 𝒜 knows the old sequence of messages m_i, i ∈ [N], and their opening proofs π_i. Upon receiving the update information, 𝒜 updates the opening proofs for each message to π'_i. Then, it must be that for all j ∈ [k], Verify_pp(C', m'_i_j, i_j, π'_i_j) = 1, and for all i ∉{i_j j ∈ [k]}, Verify_pp(C', m_i, i, π'_i) = 1. Otherwise there would be messages among m'_i, i ∈ [N], for which an updated witness cannot be computed, violating correctness. 
Similarly, for all j ∈ [k], Verify_pp(C', m̃, i_j, π'_i_j) = 0 for any m̃≠ m'_i_j; as otherwise the position binding property, thus security, would be violated. Hence, by calling the function Verify_pp(C', m_i, i, π'_i_j) for each index, 𝒜 can figure out the indices i_j, j ∈ [k], where the messages were updated. Similarly, by evaluating the function Verify_pp(C', m̃, i_j, π'_i_j) for the |ℳ| possible messages m̃ for each j ∈ [k], 𝒜 can identify the new value m'_i_j of the message at each such index i_j. Hence, the adversary can recover the sequence (i_j, m'_i_j)_j ∈ [k]. As there are N!/(N-k)!log^k|ℳ| possible sequences (i_j, m'_i_j)_j ∈ [k], it holds that |U| ≥log(N!/(N-k)!log^k|ℳ|) = Ω(k (logN+log|ℳ|)). When |M| = Ω((λ)), the minimum number of bits to be published depends on the error probability for the security of the PPT adversary. As in Remark <ref>, if |ℳ| = Θ(2^λ) and the error probability is e^-Ω(λ), then the same proof can be used to show that Ω(kλ) bits of information must be published. When the error probability is just stated to be negligible in λ, then the number of bits must scale as Ω(kλ) for any polynomial function of log(λ). § HOMOMORPHIC MERKLE TREES WITH NO UPDATE INFORMATION §.§ Update Information When k messages are updated, the new commitment, the new Merkle root can be calculated just like any other inner node, by incorporating the effect of the old and the new messages, (i_j, m'_i_j-m_i_j)}_j ∈ [k]: rCl C' = u'_b_0 = u_b_0 + ∑_j ∈[k] (h_i_j,0(m'_i_j) - h_i_j,0(m_i_j)) = C + ∑_j ∈[k] sign(m'_i_j-m_i_j)h_i_j,0(|m'_i_j - m_i_j|) As in KZG commitments, the update information is U=∅. §.§ Proof Update When the messages are modified at k points, each user holding a Merkle proof π_x for index x can calculate the new values of the inner nodes within the proof using the old and the new messages and modify the proof respectively. §.§ Complexity Calculating each partial digest h_x,j takes at most logN function evaluations. Then, each user can update each inner node within its Merkle proof after at most klog(N) operations, making the total number of operations Θ(klog^2N) = Θ(k) in the worst case. The size of the update information U is Θ(1). Hence, this scheme matches the algebraic VCs in terms of complexity. § WHY ARE HOMOMORPHIC MERKLE TREES NEEDED? Merkle trees based on SHA256 can also achieve complexity sublinear in k, for both the update information and the runtime of proof updates, if the users have access to the old messages and inner nodes of the Merkle tree. In this case, homomorphism is not needed since the nodes can find the effect of the updated messages on the inner nodes within their Merkle proofs by hashing these messages together with the old inner nodes. However, this is possible for only a single batch of updates. Indeed, if this scheme is to be repeated, the assumption of having access to the old inner nodes requires the users to keep track of changes throughout the Merkle tree, by calculating the effect of all updated messages on all inner nodes. This implies a runtime linear in k per proof updates. In contrast, homomorphic Merkle trees can maintain a sublinear complexity for future proof updates since they do not require access to the old messages and inner nodes for finding the partial digests of the updated messages. § AN ALTERNATIVE CONSTRUCTION An alternative tree-based VC is proposed by <cit.>, where each inner node is itself a lattice-based VC to its children (akin to Verkle trees <cit.>). 
Opening proof for a message consists of the inner nodes (commitments) on the path from the message to the root, along with the opening proofs for these inner nodes with respect to their parent nodes. The construction again enables expressing each inner node as a sum of partial digests of the messages underneath. Using the public parameters and the updated inner nodes, users can then derive their updated opening proofs at different heights of the tree. This construction supports trees of large degrees c without a linear increase in the proof size as would be the case for Merkle trees; this however comes at the cost of a larger runtime complexity for proof updates, proportional to the degree. Section <ref> describes similar steps in the context of Verkle trees, and exposes the dependence of the runtime complexity of proof updates on the tree degree c. § DERIVATION OF THE OPENING PROOF Π^(𝐡-𝐠)_𝐭 Since rCl h(X)-g(X) = ∑_j=0^h-1 r^j (f_b_0,…,b_j(X)/t-b_j+1 - f_b_0,…,b_j(X)-u_b_0,…,b_j+1/X-b_j+1) = ∑_j=0^h-1 r^j (X-t)f_b_0,…,b_j(X)+u_b_0,…,b_j+1(t-b_j+1)/(t-b_j+1)(X-b_j+1), the opening proof π=π^(h-g)_t for index t within the polynomial h(X)-g(X) is rCl [h(X)-g(X)-(h(t)-g(t))/X-t] = [∑_j=0^h-1 r^j/X-t ((X-t)f_b_0,…,b_j(X)+u_b_0,…,b_j+1(t-b_j+1)/(t-b_j+1)(X-b_j+1)-u_b_0,…,b_j+1/t-b_j+1) ] = [∑_j=0^h-1 r^j/t-b_j+1 f_b_0,…,b_j(X)-u_b_0,…,b_j+1/X-b_j+1] = ∏_j=0^h-1 [f_b_0,…,b_j(X)-u_b_0,…,b_j+1/X-b_j+1]^r^j/t-b_j+1 = ∏_j=0^h-1 (π^f_b_0,…,b_j_b_j+1)^r^j/t-b_j+1 § UPDATE AND PROOF UPDATE ALGORITHMS FOR KZG COMMITMENTS AND MERKLE TREES
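To make the commitment-update algebra used by the homomorphic constructions above concrete, the following toy sketch models each partial digest as an insecure linear map over a prime field; the lattice-based instantiations cited in this paper replace this map with a collision-resistant homomorphic hash, and the function names here (partial_digest, commit, update_commitment) are illustrative assumptions rather than the cited constructions.

```python
# Toy sketch of the commitment-update algebra of homomorphic Merkle-style VCs.
# The "partial digest" h_i is modeled as an insecure linear map m -> a_i * m (mod p);
# real instantiations use a collision-resistant homomorphic hash instead.
import random

P = (1 << 61) - 1                                     # toy prime modulus
N = 8                                                 # vector length
random.seed(0)
COEFFS = [random.randrange(1, P) for _ in range(N)]   # toy public parameters

def partial_digest(index: int, message: int) -> int:
    """Toy linear partial digest h_{index,0}(message)."""
    return (COEFFS[index] * message) % P

def commit(messages: list[int]) -> int:
    """Commitment as the sum of partial digests of all messages."""
    return sum(partial_digest(i, m) for i, m in enumerate(messages)) % P

def update_commitment(C: int, updates: dict[int, tuple[int, int]]) -> int:
    """Apply C' = C + sum_j (h_{i_j,0}(m'_{i_j}) - h_{i_j,0}(m_{i_j}))
    given the old commitment and the (old, new) message pairs."""
    for i, (old, new) in updates.items():
        C = (C + partial_digest(i, new) - partial_digest(i, old)) % P
    return C

msgs = [3, 1, 4, 1, 5, 9, 2, 6]
C = commit(msgs)
updates = {2: (4, 7), 5: (9, 0)}                      # k = 2 changed positions
new_msgs = msgs[:]
for i, (_, new) in updates.items():
    new_msgs[i] = new
# updating the old commitment agrees with recommitting to the updated vector
assert update_commitment(C, updates) == commit(new_msgs)
```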
http://arxiv.org/abs/2307.04964v2
20230711015524
Secrets of RLHF in Large Language Models Part I: PPO
[ "Rui Zheng", "Shihan Dou", "Songyang Gao", "Yuan Hua", "Wei Shen", "Binghai Wang", "Yan Liu", "Senjie Jin", "Qin Liu", "Yuhao Zhou", "Limao Xiong", "Lu Chen", "Zhiheng Xi", "Nuo Xu", "Wenbin Lai", "Minghao Zhu", "Cheng Chang", "Zhangyue Yin", "Rongxiang Weng", "Wensen Cheng", "Haoran Huang", "Tianxiang Sun", "Hang Yan", "Tao Gui", "Qi Zhang", "Xipeng Qiu", "Xuanjing Huang" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Their primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include reward models to measure human preferences, Proximal Policy Optimization (PPO) to optimize policy model outputs, and process supervision to improve step-by-step reasoning capabilities. However, the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, pose a significant barrier for AI researchers seeking to advance technical alignment and the safe landing of LLMs. The stable training of RLHF has still been a puzzle. In this first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints as the key factor for the effective implementation of the PPO algorithm. Therefore, we explore PPO-max, an advanced version of the PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. Beyond additional qualitative results, we even find that LLMs successfully trained by our algorithm can often better understand the deep meaning of the queries, and their responses are better able to resonate with people. The absence of open-source implementations has posed significant challenges to the investigation of LLM alignment. Therefore, we are eager to release technical reports, reward models and PPO codes[ <https://github.com/OpenLMLab/MOSS-RLHF>], aiming to make modest contributions to the advancement of LLMs. Disclaimer: This paper contains content that may be profane, vulgar, or offensive. § INTRODUCTION Nowadays, large language models (LLMs) have made remarkable progress, posing a significant impact on the AI community <cit.>. By scaling up model size, data size, and the amount of training computation, these LLMs exhibit emergent characteristics that are not present in small models, typically including in-context learning <cit.>, instruction following <cit.>, and step-by-step reasoning <cit.>. Based on these emergent abilities, LLMs even exhibit some potential to link words and percepts for interacting with the real world, leading to the possibilities of artificial general intelligence (AGI), such as embodied language models with tool manipulation <cit.> and generative agents in interactive sandbox environments <cit.>.
Despite the capacities, since LLMs are trained to capture the data characteristics of pre-training corpora (including both high-quality and low-quality data) <cit.>, these models are likely to express unintended behaviors such as making up facts, generating biased or toxic text, or even harmful content for humans <cit.>. Accordingly, it is crucial that the ratio of safety progress to capability progress increases as emphasized in OpenAI's plan for AGI <cit.>. Hence, it is necessary to align LLMs with human values, e.g., helpful, honest, and harmless (3H) <cit.>. Especially, the arrival of open source foundation models, such as LLaMA <cit.> and OpenChineseLLaMA <cit.>, has rapidly promoted the LLMs into the supervised fine-tuning (SFT) stage. In order to mitigate a huge risk of harmfulness, most of the current work tries to add some 3H data in SFT, hoping to activate the responses of the models to make a positive change at the moral and ethical level <cit.>. However, even though a set of safety and groundedness objectives are added to capture the behavior that the model should exhibit in a dialog <cit.>, the model’s performance remains below human levels in safety and groundedness <cit.>. Hence, it requires more effective and efficient control approaches to eliminate the potential risk of the use of LLMs. Fortunately, OpenAI and Anthropic have verified that RLHF is a valid avenue for aligning language models with user intent on a wide range of tasks <cit.>. However, training large language models that align with human values is a daunting task, often resulting in repeated failure when trained using reinforcement learning <cit.>. Generally speaking, successful RLHF training requires an accurate reward model as a surrogate for human judgment, careful hyperparameter exploration for stable parameter updating, and a strong PPO algorithm for robust policy optimization. While the reward model trained by low-quality data and hard-to-define alignment target can easily mislead the PPO algorithm to a unintelligible direction. Besides, finetuning language models with PPO needs to coordinate four models to work together, i.e., a policy model, a value model, a reward model, and a reference model, making it hard to train and scale up to large-scale parameter models. In the new language environment, PPO suffers from sparse reward and inefficient exploration in word space, making it sensitive to hyperparameters. Models trained solely through repeated experiments, failed runs, and hyperparameter sweeps achieve far inferior results. The huge trial and error cost of LLMs makes researchers dare not easily let the research enter the RLHF stage, which hinders the LLMs safe landing. Hence, a robust PPO algorithm specially designed for LLMs is the key step to align human preferences. In this report, we carefully dissect the framework of RLHF and discuss the entire process that determines the success of the algorithm's training. We explored how the quality of the reward model affects the final result of the policy model. We find that the quality of the reward model directly determines the upper bound of the policy model, and designing an appropriate PPO algorithm is crucial for RLHF's successful training. Moreover, accurate code implementation matters in deep policy (practice makes perfect). Therefore, we have conducted in-depth evaluations of the inner workings of PPO algorithm to study how code-level and theory-level optimizations change agent training dynamics. 
We propose to monitor the PPO training process by using action space modeling metrics derived from the policy model, such as perplexity, response length, and KL divergence between the policy model and the SFT model. These metrics are more informative of the training stability than the values of response reward and loss functions. Based on these observations, we identify the policy constraints in the PPO algorithm as the key factor to achieve consistent alignment with human preferences. After extensive comparative experiments with various possible implementations of PPO framework, we finally introduce a preferable policy optimization algorithm named PPO-max, which incorporates the collection of effective and essential implementations, and is carefully calibrated to avoid interference among them. PPO-max alleviates the instability of vanilla PPO training and enables longer training steps with a larger training corpus. We evaluate PPO-max on 7B and 13B SFT models, demonstrating comparable alignment performance with ChatGPT. Contributions are summarized as follows: 1) we release competitive Chinese and English reward models, respectively, which have good cross-model generalization ability, alleviating the cost of relabeling human preference data; 2) we conduct in-depth analysis on the inner workings of PPO algorithm and propose the PPO-max algorithm to ensure stable model training; and 3) we release the complete PPO-max codes to ensure that the LLMs in the current SFT stage can be better aligned with humans. § RELATED WORK Despite the promising capacities, LLMs are likely to express unintended behaviors such as making up facts, generating biased or toxic text, or even harmful content for humans <cit.> due to the low-quality pre-training data. Hence, it is necessary to align LLMs with human values, e.g., helpful, honest, and harmless (3H) <cit.>. In order to mitigate a huge risk of harmfulness, most of the current work tries to involve 3H data in SFT, hoping to activate the responses of the models to make a positive change at the moral and ethical level <cit.>, while the model’s performance remains below human levels in safety and groundedness <cit.>. Hence, more effective and efficient control approaches are required to eliminate the potential risk of LLMs. Fine-tuning language models to align with human preferences provides an effective solution to this challenge, where an agent is required to learn human preferences and provide human-like results given a context and corresponding suffixes ranked or scored by human annotators. Reinforcement Learning (RL) provides the most straightforward solution to reach this goal, for the agent needs just scarce supervision signal from the reward model as human proxies, and is modified through numerous trials under RL framework, namely Reinforcement Learning from Human Feedback (RLHF). There have been many attempts on this path recently <cit.>. In the context of large language models, RLHF is especially adopted for the purpose of a helpful, honest, and harmless LLM that aligns with human values <cit.>, alleviating the negative societal impacts from general-purpose language models. LaMDA <cit.> finetunes large language models to participate in interesting, helpful, factually grounded, and safe natural language dialogue and use of external information to ensure accuracy and groundedness. Rather than using reinforcement learning, they apply a mix of supervised learning techniques for human preference alignment. 
InstructGPT <cit.> finetunes GPT-3-type models <cit.> to improve helpfulness, which is mixed with RL from human preferences expressed through comparisons. <cit.> adopts the pre-training and fine-tuning tradition to train the preference model for human alignment, claiming that ranked preference modeling turns out to be the most effective training objective for distinguishing between “good” and “bad” behavior. This attempt is further improved by an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, and PPO is incorporated to stabilize RL training <cit.>. Despite its effectiveness, RLHF (especially PPO) exhibits complexity, instability, and sensitivity to hyperparameters, which is not yet addressed in previous works. Under similar concerns, several works highlighted the importance of PPO for RL framework and made an attempt to improve its efficiency <cit.>. <cit.> reveals that much of the observed improvement in reward brought by PPO may come from seemingly small modifications to the core algorithm (i.e. code-level optimizations). <cit.> further points out that a large number of low- and high-level design decisions of RL are usually not discussed in research papers but are indeed crucial for performance. As a result, <cit.> conducts a fair comparison among low-level designs based on a unified RL implementation and claims that the policy initialization scheme significantly influences the performance. Despite the efforts of revealing the importance of PPO and its recommended implementation, few attempts have been made to address the problem of instability and sensitivity to hyperparameters. In this paper, we dissect the framework of RLHF, especially shedding light on the inner workings of PPO, and explore an advanced version of the PPO which efficiently improves the training stability of the policy model. § REINFORCEMENT LEARNING FROM HUMAN FEEDBACK The training process of AI assistant comprises three main stages: supervised fine-tuning (SFT), reward model (RM) training, and proximal policy optimization (PPO) on this reward model. During the SFT phase, the model learns to engage in general human-like dialogues by imitating human-annotated dialogue examples. Subsequently, the reward model is trained, in which the model learns to compare the preference of different responses based on human feedback. Lastly, in the PPO phase, the model is updated based on feedback from the reward model, striving to discover an optimized policy through exploration and exploitation. In the RLHF process, we mainly consider the stages of RM training and reinforcement learning via PPO. The PPO algorithm follows a series of steps as depicted in Figure <ref>. §.§ Reward Modeling For the RM architecture, we use pre-trained transformer-based language models with the last unembedding layer removed and add an additional linear layer to the final transformer layer. Given any text, the reward model will assign a scalar reward value to the last token, and the larger the reward value, the better the sample. Following Stiennon et al. <cit.>, training reward models often involves utilizing a dataset comprised of paired comparisons between two responses generated for the same input. The modeling loss for each pair of preferred and dispreferred samples is: ℒ (ψ) = logσ(r(x, y_w) - r(x, y_l)), where σ is the sigmoid function. 
r represents the reward model with parameters ψ, and r(x,y) is the single scalar predicted reward for input prompt x and response y. Additionally, we follow <cit.> to use imitation learning, which introduces the autoregressive LM loss on the preferred response of each pair, allowing the model to imitate the preferred response in each sentence pair. In practice, we weight the LM loss with the coefficient β_rm. Finally, we define the following reward modeling loss: ℒ (ψ) = - λ𝔼_(x, y_w, y_l) ∼𝒟_𝓇𝓂 [logσ(r(x, y_w) - r(x, y_l))] + β_rm𝔼_(x, y_w) ∼𝒟_𝓇𝓂 [log (r'(x, y_w))], where 𝒟_𝓇𝓂 is the empirical distribution of the training set. r' is the same model as r except for the top linear layer, the dimension of which corresponds to the vocabulary size, and r'(x,y_w) is the likelihood given the prompt x and the preferred response y_w. We incorporate an extra term into the reward function, which introduces a penalty based on the Kullback-Leibler (KL) divergence between the learned RL policy π^RL_ϕ and the initial supervised model π^SFT. The total reward can be expressed as <cit.>: r_total = r(x,y)- ηKL(π^RL_ϕ(y|x),π^SFT(y|x)), where η is the KL reward coefficient and controls the strength of the KL penalty. This KL divergence term plays two significant roles within this context. First, it functions as an entropy bonus, fostering exploration within the policy landscape and preventing the policy from prematurely converging to a single mode. Second, it works to ensure that the RL policy's output does not deviate drastically from the samples that the reward model encountered during its training phase. §.§ Reinforcement Learning Applying RL to dialogue generation presents significant challenges due to the substantial state-action space. In this context, we consider human interaction as the “environment”. At each timestep, t, the agent (i.e., the AI assistant) receives a state s_t from the environment (i.e., the dialogue history), which consists of all the dialogue text up to this point, both by the assistant and the human. Then, based on its policy π, the agent's action a_t is to generate the next token. The environment returns a reward r(s_t, a_t), which is calculated from a reward function r trained from human preference data. The agent then transitions to the next state s_t+1, which includes the next dialogue history. The aim of RL is to find an optimal behavior strategy for the agent to maximize the cumulative reward (i.e., return) over a trajectory τ={s_1,a_1,…,s_T,a_T}. One kind of return is the finite-horizon undiscounted return R(τ)=∑_t=1^T' r(s_t,a_t), which is simply the sum of rewards accumulated within a fixed number of steps. Another is the infinite-horizon discounted return R(τ)=∑_t=0^∞γ^t r(s_t, a_t), which takes into account all rewards obtained by the agent throughout its entire trajectory with a discount factor γ∈ (0,1). §.§.§ Policy Gradient Methods Policy gradient methods <cit.> are a class of RL techniques that directly optimize the policy of the agent—the mapping of states to actions—instead of learning a value function as in value-based methods. The central idea behind policy gradient methods is to improve the policy using the gradient ascent algorithm. In essence, these methods adjust the parameters of the policy in the direction that maximally improves the expected return. The policy π is typically parameterized by θ; we denote it as π(a|s,θ), which is the probability of taking action a in state s.
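As a concrete reference for the reward-modeling loss and the KL-shaped total reward defined above, the following PyTorch sketch assumes a causal LM with an added scalar head and summed per-token log-probabilities as inputs; the helper names and tensor shapes are illustrative assumptions rather than the released implementation, and the imitation term is written as a negative log-likelihood so that minimizing the loss maximizes the likelihood of the preferred response.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(r_preferred, r_dispreferred, lm_logprob_preferred,
                      lam=1.0, beta_rm=1.0):
    """Pairwise ranking loss plus an imitation (LM) term on the preferred response.
    r_* are scalar rewards for the last token, shape (batch,);
    lm_logprob_preferred is the summed log-likelihood of the preferred response."""
    ranking = -F.logsigmoid(r_preferred - r_dispreferred).mean()
    imitation = -lm_logprob_preferred.mean()          # NLL of the preferred response
    return lam * ranking + beta_rm * imitation

def kl_shaped_reward(score, policy_logprobs, sft_logprobs, eta=0.05):
    """Reward-model score minus a per-token KL penalty against the SFT model.
    policy_logprobs / sft_logprobs: log-probs of the sampled response tokens,
    shape (batch, seq_len); score: reward-model output, shape (batch,)."""
    kl_per_token = policy_logprobs - sft_logprobs     # estimate of log(pi_RL / pi_SFT)
    rewards = -eta * kl_per_token                     # penalty applied at every token
    rewards[:, -1] += score                           # RM score added at the final token
    return rewards

# illustrative shapes only
r_w, r_l, lm_lp = torch.randn(4), torch.randn(4), torch.randn(4)
loss = reward_model_loss(r_w, r_l, lm_lp)
```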
The update rule for the policy gradient is given as: θ←θ + α∇_θ J(θ), where α is the learning rate, J(θ) represents the expected return when following policy π_θ, and the gradient of policy performance ∇_θ J(θ) is called the policy gradient. A general form of policy gradient can be formulated as: ∇_θ J(θ) = 𝔼_τ∼π_θ[ ∑_t=0^T ∇_θlogπ_θ(a_t|s_t)Φ_t ], where Φ_t could be any of Φ_t = R(τ) or Φ_t = ∑_t^'=t^T R(s_t^', a_t^') or Φ_t = ∑_t^'=t^T R(s_t^', a_t^') - b(s_t) with baseline b. All of these choices lead to the same expected value for the policy gradient, despite having different variances. The return is calculated through Monte Carlo sampling. If the return is favorable, all actions are “reinforced” by increasing their probability of being selected. The advantage of this approach lies in its unbiased nature, as we rely solely on the actual return obtained rather than estimating it. However, a challenge arises due to the high variance associated with this method. This variance stems from the fact that different trajectories can result in diverse returns due to the stochasticity of the environment (random events during an episode) and the policy itself. To reduce this variance, a common strategy is to use advantage function estimates in place of raw returns in the policy gradient update rule. The advantage function A(s_t, a_t) represents how much better it is to take a specific action a_t at state s_t, compared to the average quality of actions at that state under the same policy. Thus, Φ_t = A(s_t, a_t). Mathematically, A(s_t, a_t) = Q(s_t, a_t) - V(s_t), where Q(s_t, a_t) is the action-value function, representing the expected return after taking action a_t at state s_t, and V(s_t) is the value function, representing the average expected return at state s_t. The application of policy gradients with advantage functions forms a crucial backbone in the realm of RL. However, the estimation methods for the advantage function vary significantly across different algorithms, thereby creating a landscape of diverse approaches. In the next section, we introduce Generalized Advantage Estimation (GAE) <cit.>, a method that is foundational to policy optimization algorithms and has seen widespread use. §.§.§ Generalized Advantage Estimation The following is a layman-friendly explanation of how GAE is derived. The advantage function, A, is defined as the difference between the Q function (the expected return) and the value function (the expected return from following the policy from a given state). The Q function considers a specific action, while the value function averages over all possible actions according to the policy. However, in practice, we use returns (sum of rewards) from actual episodes to estimate the Q function. This introduces a high amount of variance because future rewards can be very noisy. One way to reduce this noise is by estimating future returns (after time step t) using the value function. The GAE algorithm effectively acts as a middle ground between using simple one-step Temporal Difference (TD) returns and using full Monte Carlo returns, balancing bias and variance. The TD-k return R̂_t^k is a combination of actual rewards and estimated returns: R̂_t^k = r_t + γ r_t+1 + … + γ^(k-1) r_t+k-1 + γ^k V(s_t+k), where γ is the discount factor.
The advantage estimate using TD-k returns is called the k-step advantage, defined as: Â_t^k = R̂_t^k - V(s_t)=∑_l=1^kγ^l δ_t+l = -V(s_t) + r_t + γ r_t+1 + ⋯ + γ^k-1 r_t+k-1 + γ^k V(s_t+k), where δ_t=r_t+γ V(s_t+1)-V(s_t) is the TD error. There's a significant bias-variance trade-off with k-step advantages. If k is small, the bias is high because the advantage estimation is based on fewer steps and thus depends heavily on the accuracy of the value function. On the other hand, if k is large, the variance can be high because the advantage estimation involves summing up many noisy rewards. In order to balance the bias-variance trade-off in the advantage estimation, GAE defines the advantage function as an exponential moving average of k-step advantages, with weights (1-λ)λ^(k-1): Â_t^GAE(γ,λ) = (1-λ)(Â^(1)_t+λÂ^(2)_t+λ^2Â^(3)_t+⋯) = (1-λ)(δ_t + λ(δ_t+γδ_t+1) +λ^2(δ_t+γδ_t+1+γ^2δ_t+2)+…) = (1-λ)(δ_t(1+λ+λ^2+…)+γδ_t+1(λ+λ^2+λ^3+…) + γ^2 δ_t+2(λ^2+λ^3+λ^4+…)+… ) = (1-λ)(δ_t (1/1-λ)+γδ_t+1 (λ/1-λ)+γ^2δ_t+2 (λ^2/1-λ)+…) = ∑^∞_l=0(γλ)^l δ_t+l. This definition of GAE smoothly interpolates between high bias (when λ=0) and high variance (when λ=1) estimators, effectively managing the trade-off. GAE(γ,0):Â_t=δ_t= r_t+γ V(s_t+1)-V(s_t). GAE(γ,1): Â_t=∑_l=0^∞γ^lδ_t+1=∑_l=0^∞γ^l r_t+1 - V(s_t). Through GAE, we can estimate Â_t of the advantage function A(s_t, a_t) accurately. This estimate will play a crucial role in constructing a policy gradient estimator: ∇_θĴ(θ) = 1/|𝒟|∑_τ∈𝒟∑_t=1^T ∇_θlogπ_θ(a_t|s_t) Â_t, where 𝒟 is a finite batch of samples, we will use 𝔼̂_t to represent the aforementioned 1/|𝒟|∑_τ∈𝒟∑_t=1^T. §.§.§ Proximal Policy Optimization PPO and TRPO <cit.> are two pivotal techniques in RL, aimed at effectively training a policy without jeopardizing its stability. The underlying intuition for these methods is the idea of “small, stable steps”: a philosophy of gently nudging the policy towards optimization, rather than forcing aggressive updates that might destabilize the overall learning process. In traditional RL, the principle of policy gradient mandates that new and old policies remain close in the parameter space. However, this proximity in parameter space does not necessarily equate to similar performance, and a slight variance in parameters can drastically impact the effectiveness of the policy. Furthermore, if a large, unrestrained step is taken, it can lead to a collapse in policy performance, a scenario often described as “falling off the cliff”. This inherent risk is a limiting factor in terms of sample efficiency in vanilla policy gradients. Instead of being confined by parameter closeness, TRPO introduces a different kind of constraint on policy updates. It regulates the change in policies by ensuring the KL divergence, remains within an acceptable limit: maximize_θ 𝔼̂_t [π_θ(a_t|s_t)/π_θ_old(a_t|s_t)Â_t ], subject to 𝔼̂_t [KL(π_θ_old(·|s_t), π_θ(·|s_t)) ] ≤δ, where θ_old is the old policy parameters before the update. There are two primary variants of PPO: PPO-Penalty and PPO-Clip. While TRPO puts a hard constraint on the KL divergence to prevent harmful updates, PPO-Penalty addresses the unconstrained optimization problems by employing a penalty-based approach instead of constraints: ℒ_𝓅𝓅ℴ-𝓅ℯ𝓃𝒶𝓁𝓉𝓎(θ)= 𝔼̂_t [π_θ(a_t|s_t)/π_θ_old(a_t|s_t)Â_t ] - βKL(π_θ_old(·|s_t), π_θ(·|s_t)), with penalty factor β. Clipped Surrogate Objective. 
PPO-Clip attempts to keep the new policy close to the old policy, but instead of putting a constraint on the KL divergence like TRPO, it uses a clipped version of the policy ratio in its objective. The objective function is expressed as: ℒ_𝓅𝓅ℴ-𝒸𝓁𝒾𝓅(θ)= 𝔼̂_t [min(π_θ(a_t|s_t)/π_θ_old(a_t|s_t)Â_t, clip(π_θ(a_t|s_t)/π_θ_old(a_t|s_t), 1-ϵ, 1+ϵ)Â_t ) ], where π_θ(a_t|s_t)/π_θ_old(a_t|s_t) is the ratio of the new policy's probability over the old policy's probability and ϵ is a hyperparameter that determines how much the new policy can deviate from the old policy. The clip function limits the value of π_θ_old(a_t|s_t) between (1-ϵ, 1+ϵ). The clipping acts as a regularizer, limiting the extent to which the policy can change drastically from one iteration to the next. Preventing overly large policy updates ensures the learning process's robustness while maintaining more sample-efficient learning than vanilla policy gradient methods. Value Function Estimation. In PPO algorithm, the critic model, often referred to as the value function, estimates the expected returns for each state. The learning objective of this model is to minimize the discrepancy between its predicted values and the actual return values. The loss function of the critic model is commonly defined using Mean Squared Error (MSE), given by the following formula: ℒ_𝒸𝓇𝒾𝓉𝒾𝒸(ϕ) = 𝔼̂_t[‖ V_ϕ(s_t) - R̂_t ‖^2 ]. Here, V_ϕ(s_t) represents the critic model's predicted value for state s_t with parameters ϕ, and R̂_t represents the actual return value for state s_t and always can be estimated as: R̂_t=∑^∞_l=0γ^l r_t+l. Mixing Pretraining Gradients. To mitigate potential degradation in the model's language skills and knowledge retention during PPO, we also explore the incorporation of pretraining data into the RL phase. The models utilizing this method are denoted as “PPO-ptx”, a combined objective function is shown as follows <cit.>: ℒ_𝓅𝓅ℴ-𝓅𝓉𝓍(θ) = ℒ_𝓅𝓅ℴ-𝒸𝓁𝒾𝓅(θ) + λ_ptx𝔼_x ∼𝒟_pretrain[log(π^RL_θ (x))], where λ_ptx is the pretraining loss coefficient and 𝒟_pretrain is the pretraining data distribution. § REWARD MODELING FOR HELPFULNESS AND HARMLESSNESS Reward model is trained to reflect the preference of human. Theoretically, we can directly fine-tune the model using Reinforcement Learning and human annotations. While due to constraints in workload and time availability, it is unfeasible for humans to provide sufficient feedback for training before each optimization iteration. Therefore, a more effective way involves training a reward model (RM), which aims to emulate the evaluation process performed by humans. In this section, we first cover the technical details of RM, then show the RM performance we used, and attach the performance changes during training. §.§ Models and Datasets For English, we start with the original LLaMA-7B<cit.> which is of the decoder-only architecture. We use 160k pairwise samples of the HH-RLHF dataset<cit.> which consists of 118k helpful and 42k harmless instances as training set. From the remaining 8.5k data, we randomly selected approximately 0.7k helpful and 0.3k harmless examples for a total of 1k data as the test set, and the rest is used as the validation set during training. For Chinese, we use the OpenChineseLLaMA <cit.>. It is developed through incremental pre-training on Chinese datasets, building upon the foundation of LLaMA-7B, which significantly improves its understanding and generation abilities on Chinese. 
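A minimal sketch of the PPO objectives from the previous section, combining GAE, the clipped surrogate objective, the critic's MSE loss, and the pretraining-mixing term; the tensor shapes and the terminal-value convention are assumptions made for illustration and this is not the released PPO-max code.

```python
import torch

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation over one trajectory (1-D tensors)."""
    T = rewards.size(0)
    advantages = torch.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        next_value = values[t + 1] if t + 1 < T else 0.0   # assume episode ends at T
        delta = rewards[t] + gamma * next_value - values[t]  # TD error
        gae = delta + gamma * lam * gae                      # exponentially weighted k-step advantages
        advantages[t] = gae
    returns = advantages + values                            # regression targets for the critic
    return advantages, returns

def ppo_losses(new_logprobs, old_logprobs, advantages, values_pred, returns,
               pretrain_logprobs=None, clip_eps=0.2, lambda_ptx=1.0):
    """Clipped surrogate policy loss (negated objective), MSE value loss,
    and an optional pretraining (ptx) term."""
    ratio = torch.exp(new_logprobs - old_logprobs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()
    value_loss = (values_pred - returns).pow(2).mean()
    ptx_loss = -pretrain_logprobs.mean() if pretrain_logprobs is not None else 0.0
    return policy_loss + lambda_ptx * ptx_loss, value_loss

# illustrative call with a single 16-token response
rewards, values = torch.randn(16), torch.randn(16)
adv, ret = gae_advantages(rewards, values)
```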
We hired professional annotators to manually label 39k pairwise samples including 31k helpful and 8k harmless samples. We constructed the training set by randomly sampling 24k helpful and 6k harmless instances, and then we allocated 2.4k helpful and 0.6k harmless samples from the remaining data at random to form the test set. The rest is used for validation. §.§ Training Setup This section introduces the training implementations for the RM. The learning rate is set to 5e-6 with a warmup over the first 10% steps. We use a dynamic batch method instead of a fixed value, which balances the number of tokens in each batch as much as possible for a more efficient and stable training phase. The batch size changes according to the number of tokens in a batch, with a maximum of 128 and a minimum of 4. We fixed the training steps to 1000, approximately 1.06 epochs over the whole training set. We set β_rm=1, the LM loss weight, to train our reward model for the entire experiment. §.§ HH Evaluation Results In this section, we present the HH evaluation results of our RM. We primarily analyze the trained reward model with the test set introduced in Sec. <ref>, which comprises 0.9k samples of HH-RLHF for English and 3k samples sampled from the dataset labeled by annotators for Chinese. We feed the test input into our RM and obtain the reward values of the preferred and dispreferred responses, and then subtract them to get the difference score. Figure <ref> shows the distribution of the difference score. Both models exhibit a degree of alignment with human preferences, with the RM trained on the Chinese data constructed by our hired annotators showing substantial consistency with human judgments. We examined several samples from the test dataset that displayed the most significant disparities between the model and human preferences. For the Chinese test data, we observed that, in each such pair, the response to which the RM assigned the higher reward was notably longer than the human-preferred one, even though it more or less involved fabricated facts and false claims. In the case of the English test data, we noticed that the model assigned lower scores to responses that acknowledged the lack of information, which were characterized by their honesty but lacked helpfulness. Conversely, responses that appeared correct and helpful while containing deceptive information misled our RM into assigning high rewards. We provide such an example in Chinese and English respectively in Table <ref>. §.§ Training Performance In this section, we show the performance changes during the training process. Specifically, Figure <ref> shows the trend of the training loss of the PM. We can see that the accuracy of the RM trained on the Chinese dataset is higher than that of the English one because the Chinese dataset we constructed exhibits a significant disparity between the better and worse responses in most pairs. In contrast, many English pairs show similar levels of quality, which poses a greater challenge for the RM to determine the superiority or inferiority of responses and makes it difficult for the model to capture the differential features between the two responses. As a result, training and testing accuracy on the English dataset is expected to be lower. Besides, we find that the rate of improvement slows down significantly after 200 steps (approximately 0.2 epochs) for both models, at which point the accuracy is comparable to that obtained after training for a complete epoch.
However, when utilizing the 200-step model as the initialization for PPO, we observe unsatisfactory performance. Thus, accuracy alone is insufficient as a criterion for the RM. § EXPLORATION OF PPO Proximal Policy Optimization (PPO) <cit.> is the core algorithm to achieve alignment with human preferences. The performance of PPO is influenced by multiple factors in practical applications. Some prior works have summarized possible tricks that may be necessary and effective in the field of reinforcement learning <cit.>, but how to stabilize RLHF training with language models remains unknown. We expect to explore which tricks are critical, and which metrics can reflect the model state during and after RLHF training. We first introduce the metrics that are instructive in the training process, and then the training trajectories and effects under different implementations to reveal core tricks in RLHF. We use PPO-max to denote the most suitable implementation we find for the language model. §.§ Models and Training Setup The training implementations for the preference model (PM) and PM dataset are introduced in Sec. <ref>. In this section, we introduce the models' initialization and the hyper-parameter details in exploring PPO. We verified a number of methods in reinforcement learning to ensure stable convergence and better results for the PPO training phase. To improve the experimental efficiency, these experiments are mainly conducted on a randomly selected subset of our Chinese data and will not be trained to optimal results when we have observed enough information to analyze the comparison methods. As shown in Sec. <ref>, four models need to be loaded during the PPO training phase. For the reference model and the policy model, we initialize both from a 7B SFT model. The SFT model is obtained by supervised fine-tuning OpenChineseLLaMA for 2 epochs on 1M filtered instruction data (containing 400K single-round instruction samples and 600K multi-turn instruction samples). We set a learning rate of 9.5e-6 and a cosine learning rate schedule. The learning rate eventually decays to 10% of the peak learning rate. The global batch size is set to 1024. We use the trained reward model to initialize both the critic model and the reward model used in PPO. We train the models on a manually constructed HH dataset containing 8k harmless queries and 20k helpful queries, and we fix the number of steps instead of the number of epochs. In all experiments, we set a batch size of 128 for sampling from the environment and a batch size of 32 for training the policy model and the critic model. The learning rates of the policy model and the critic model are set to 5e-7 and 1.65e-6, respectively, with a warmup over the first 10% steps. All of the experiments are conducted on identically configured machines. Each machine contains eight 80G A100 GPUs, 1TB of RAM, and 128 CPUs. We use ZeRO-2 and gradient checkpointing to save on GPU memory cost in the training phase. §.§ Evaluation Metrics for Monitoring the Training Process We expect to identify some metrics that reflect the quality of PPO training; such metrics help track the helpful, honest, and harmless capability of policy models without resorting to manual (or GPT-4) evaluation. We found it challenging to accurately distinguish the merits of two models with similar abilities. But it is indeed feasible to observe training stability and promptly identify serious deviations. Various metric curves when continuously optimizing the policy model with the vanilla PPO implementation are shown in Figure <ref>.
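The metrics used to monitor PPO training in this section, that is, the KL divergence between the policy and the SFT model, the perplexity of generated responses, and the average response length, can be logged at every step. A minimal sketch under assumed inputs (per-token log-probabilities from the two models and a response mask):

```python
import torch

def ppo_monitoring_metrics(policy_logprobs, sft_logprobs, response_mask):
    """Per-batch training metrics: mean KL(policy || SFT) over response tokens,
    policy perplexity on its own samples, and mean response length.
    All inputs have shape (batch, seq_len); response_mask is 1 on response tokens."""
    n_tokens = response_mask.sum()
    # Monte Carlo KL estimate from tokens sampled by the policy: E[log pi - log pi_sft]
    kl = ((policy_logprobs - sft_logprobs) * response_mask).sum() / n_tokens
    # perplexity = exp(mean negative log-likelihood of the generated tokens)
    ppl = torch.exp(-(policy_logprobs * response_mask).sum() / n_tokens)
    mean_len = response_mask.sum(dim=1).float().mean()
    return {"kl_to_sft": kl.item(), "perplexity": ppl.item(),
            "response_length": mean_len.item()}

# illustrative inputs
mask = torch.tensor([[0, 0, 1, 1, 1], [0, 1, 1, 1, 0]])
lp_policy, lp_sft = -torch.rand(2, 5), -torch.rand(2, 5)
print(ppo_monitoring_metrics(lp_policy, lp_sft, mask))
```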
We first introduce the pattern collapse phenomenon in vanilla PPO training, which means that SFT models are over-optimized and exhibit highly biased behavior. A reasonable policy model is expected to be consistent with human preferences in the distribution of dialogue variety in the real world (e.g., data not seen in training the reward model). However, we observe that the trained policy model has a tendency to cheat the reward model through specific patterns for anomalous higher scores. The training trajectories on reward score and training loss of vanilla PPO are illustrated at the top of Figure <ref>. We observed stable convergence processes in training loss, but higher rewards do not reflect better policy behaviors from the perspective of human and GPT-4 evaluation. This means that the reward scores and training losses do not indicate whether the PPO is optimizing correctly. In vanilla PPO training, the response rewards of policy model gradually deviate from the original distribution and exhibit long-tail characteristics. We show the distribution of response rewards under different training steps in the Appendix <ref>. An empirical strategy is to compare the training process of good and bad policy models to find suitable metrics. We show more indicative training metrics at the bottom of Figure <ref>, including perplexity, KL divergence between the policy and reference models, and the average length of generation responses. Previous work proposed an approximate linear relationship between the root KL and PM scores <cit.>, but for smaller models, such an association appeared to be weak. We find the model response falls into the OOD region of preference model when the original policy is over-optimized. We will further discuss this scaling effects in the next section. We simultaneously observe that the collapsed model uniformly delivers longer responses and exhibits lower perplexity for such generative patterns. We use these metrics to show the importance of different tricks and their impact on PPO training in section <ref>. §.§ Implement Details in PPO We propose the instability and pattern collapse problem of the primitive PPO algorithm in sec <ref>. Such sensitivity derives from the over-optimization of the policy model which traps it into fixed generative patterns. Recent works have explored the implementation details of PPO algorithms in different scenarios. However, the application scenarios and data structures of traditional RL are quite different from RLHF. We determined to verify the applicability of these tricks in language model training and propose a set of PPO implementations that support stable optimization. We mainly focus on methods that efficiently assist PPO training and their parameter sensitivity in the body of this paper. Figure <ref> illustrates numerous available tricks in PPO training, we first summarize the score reparameterization method (§5.3.1), followed by the optimization constraints for policy model (§5.3.2), and finally we present the different initialization methods for policy and critic models (§5.3.3). More experiments on hyper-parameter tuning and tricks that are verified as less critical are discussed in the appendix, such as advantage estimation function and gradient clipping. In the following, it always refers to our own experiments when we mention PPO if not specifically stated. §.§.§ Score Reparameterization We use the term “score” to refer to the two vital intermediate variables involved in PPO training. 
The reward score is given by the reward model trained with human preferences data, and the advantage score is calculated by the GAE function. According to existing works, reparameterizing these scores to a stable distribution (e.g., a standard normal distribution) may intensify the stability of PPO. The reported operations are into three parts for verification. We use {r(x,y)}≜{r_n(x,y)}_n=1^ℬ to denote a reward sequence in training, r_n(x,y) to denote the results of per-batch reward, σ(A) and A̅ to denote the mean and standard deviation of variable A. Comparative experiments with different tricks and hyperparameters are shown in Figure <ref>. Reward Scaling controls training fluctuations by scaling the rewards where the rewards are divided by the standard deviation of a rolling discounted sum. Based on the observation history, the reward for current state can be expressed as r_n(x,y) / σ(r(x,y)). In contrast to the experimental results of Engstrom <cit.>, we show that reward scaling doesn't guide proper policy optimization, and PPO exhibits consistent patterns in training trajectories with and without reward scaling. In our experiments, we believe that tighter constraints are required to ensure training stability. Reward Normalization and Clipping was first proposed by Mnih <cit.>. The processed reward can be denoted as: r̃(x,y) = clip(r_n(x,y) - r(x,y)/σ(r(x,y) , -δ, δ), where δ denotes the clip region. It is generally believed In traditional RL that reward clip is ineffective or even detrimental in certain scenarios <cit.>. However, we find that strict advantage cropping can also maintain training stability within a fixed epoch. Interestingly, hyperparameter tuning does not affect the similarity of the different methods in the early training period, and models with larger clipping thresholds exhibit greater strategy alteration and converge to higher rewards in the latter half. As we mentioned earlier, this does not imply better performance in the manual evaluation. Determining the optimal clipping bound within a limited number of trials is challenging in view of such inconsistency between the reward model and manual evaluation results, we suggest adopting a relaxed clipping strategy and incorporating other tricks to constrain the policy optimization when training RLHF. Advantages Normalization and Clipping has similarities to the operation on reward, but differs in details that its normalization occurs only at the minibatch level. After calculating the advantage based on GAE, PPO normalizes the advantage value by subtracting its mean and dividing it by its standard deviation. Andrychowicz <cit.> first attempt to apply Advantages Normalization in gaming domain and reported that this trick didn't exhibit significant improvements. Although parameter selection for advantage clipping would be more sensitive and difficult, we instead find that a severe constraint on advantage can provide similar effects to reward clip in PPO training. Considering that different score reparameterization operations theoretically provide similar effects on PPO training, we recommend constraining the instability of policy optimization on the reward level. Experiments on the simultaneous application of reward, advantage, or value clipping operations are shown in Appendix <ref>. §.§.§ Policy Constraints To tackle the over-optimization problem on the policy model, an intuitive solution is to constrain the policy optimization to a limited range. 
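A sketch of the reward normalization and clipping operation r̃ described above; the running-statistics bookkeeping (a Welford accumulator updated once per batch) is an assumed implementation choice rather than the exact procedure used in our experiments.

```python
class RunningRewardNormalizer:
    """Normalize and clip rewards with running mean/std, mirroring
    r~ = clip((r - mean) / std, -delta, delta)."""
    def __init__(self, delta=3.0, eps=1e-8):
        self.delta, self.eps = delta, eps
        self.count, self.mean, self.m2 = 0, 0.0, 0.0   # Welford accumulators

    def update(self, rewards):
        # fold a new batch of raw reward-model scores into the running statistics
        for r in rewards:
            self.count += 1
            d = r - self.mean
            self.mean += d / self.count
            self.m2 += d * (r - self.mean)

    def normalize(self, rewards):
        # standardize with the running statistics, then clip to [-delta, delta]
        std = (self.m2 / max(self.count, 1)) ** 0.5 + self.eps
        return [max(-self.delta, min(self.delta, (r - self.mean) / std))
                for r in rewards]

normalizer = RunningRewardNormalizer(delta=3.0)
batch_rewards = [0.7, -1.2, 2.4, 0.1]
normalizer.update(batch_rewards)
clipped = normalizer.normalize(batch_rewards)
```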
We validate various existing tricks to control the update of generation policy, such constraints are empirically proved to be necessary for longer training procedures. Figure. <ref> shows the influence of different constraint methods and hyperparameters on policy optimization. Token Level KL-Penalty constrains the policy optimization by applying a regularization term to reward that is proportional to the KL-divergence of current and original policy distributions. This approach was first introduced by Stiennon <cit.> and widely adopted in different RLHF implementations. Given a template-response pair (x,y), we treat the logits distribution of the token output as a sampling of the policy distribution and apply an empirically estimated KL-penalty sequence to response reward, the total reward with KL-penalty can be denoted as: r_total(x,y_i) = r(x,y_i) - ηKL(π^RL_θ(y_i|x),π^SFT(y_i|x)), where π^RL_θ(y_i|x) denotes the action space of i-th reponse token, and η is a hyper-parameter. Anthropic <cit.> used a small weight to balance the ratio of reward and KL-penalty in PPO training (0.001), and they did not find significant effects of the above operation on RL training. Instead, we find this constraint critical to the stability of PPO and allow further scaling up on the training step. Results with policy divergence penalty are illustrated in Figure <ref> by setting lambda to 0.05, and there is a significant difference to the method in Figure <ref> with a noticeable correction in the later training period. Interestingly, we show that RLHF is able to significantly improve the response quality while barely modifying the language modeling (exhibiting an almost zero KL divergence from the original policy). More experiments on the impact of different constraint values are shown in appendix <ref> Importance Sampling in PPO aims to rectify the policy divergence between the historical generative model and current model when optimizing policy model with responses in the experience buffer. EasyRL <cit.> argues that an oversized buffer would induce a wrong estimation of the advantage of the current policy, which impairs the stability of the policy optimization. We revalidated this hypothesis by directly fixing the policy distribution to observations of reference model, which is equivalent to having an infinite experience buffer in the training process. We find this setup doesn't have as severe impacts as expected, and only exhibits fluctuations in the later stage of training. We additionally investigate the cooperative effect of this setup with KL penalties in view that they share similar controls on PPO. Experimental results indicate that this implementation further stabilizes PPO training, but compromises the final performance of the policy model. Entropy Bonus provides a reference model-independent constraint on PPO training. There is controversy in past research about whether this method is effective in different scenarios. Mnih <cit.> reported that entropy bonus could enhance exploration by encouraging policy models to generate more diverse actions, while others did not find clear evidence that such operations help <cit.>. We claim that these views can coexist as configurations regarding entropy bonus exhibit vast sensitivity on parameter selection and code implementation. A comparison of successful and failed experiments is presented in appendix <ref>. With correct configurations, we did not find an obvious advantage of this trick relative to KL-penalty. 
We, therefore, recommend the latter instead of directly constraining the diversity of the strategy space. §.§.§ Pretrained Initialization A common setting is to initialize the policy and critic model over the existing reference model and reward model in RLHF. Such initialization is quite rare in past research scenarios and its impact on PPO training is still unexplored. We investigated different initialization methods at the early stage of training, expecting to uncover the requirements of RLHF for the trained model capabilities. The training discrepancy induced by different initialization methods is shown in Figure <ref>. The initialization of the critic model did not significantly affect the convergence or fluctuation of the PPO and only varied the numerical stability at the early stage of optimization. In contrast, a policy model initialized without SFT training is clearly incapable in PPO training, which indicates that the construction of a supervised policy model is indispensable in RLHF. *Critic Model Initialization We first discuss the influence of different critic model initialization on PPO training. An observation is that the critic model requires giving feedback to each step in the decision sequence, and introduces a gap between this task requirement and directly scoring response, which makes it a less-than-perfect choice to initialize the critic model with the reward model. We explore this issue by applying a different initialization. Considering that providing correct score feedback for a single action requires the model to have basic language modeling capability, we design two scenarios to vary the consistency between the critic model initialization and its training objective: (1) Initialize the critic model with our SFT model and randomly initialize its reward head. (2) Optimize only the reward model until the loss of value prediction function approaches zero. We show the training dynamics of this setup starting from the optimization policy model in Figure <ref>. Based on the experimental results, we believe the critic model pre-training helps to improve the training stability by providing better advantage estimation. Initializing the critic model with a reward or SFT model will converge to similar results, implying that PPO can adaptively provide the capability to fit the advantage function. Intuitively, fluctuations in the early training period imply that the model is focusing on optimizing the critic model and does not have a consistent optimization direction in terms of generation policies. We recommend replacing the learning rate warmup with the critic model pre-training as a generic initialization strategy. *Policy Model Initialization An interesting question is whether we need to supervise fine-tuning our pre-train model before PPO, we wondered about the feasibility of directly enabling language models to interact with humans through policy optimization. Unfortunately, such attempts failed and we observed a severe reduction in language modeling ability in the training results, which implies that a qualified dialogue model is essential for underlying PPO training. Furthermore, we notice that the train model response obtains lower rewards relative to the policy model after SFT, which may provide circumstantial evidence for the effectiveness of using human preference data to directly fine-tune the model for alignment. §.§ PPO-max Setup We now describe our training implementations in the PPO-max algorithm. 
§.§ PPO-max Setup We now describe our training implementation of the PPO-max algorithm. Based on the discussion and validation in Sec <ref>, we select the most effective strategy for each component of PPO. We normalize and clip the current group of rewards based on historical mean and variance records, and subsequently add a KL-penalty term to constrain policy optimization. In the model-loading phase, we initialize the critic model with our reward model and pre-train it before applying PPO formally. We use global gradient clipping and set a small experience-buffer size. To reduce the alignment tax, we add the pre-training language-model loss to policy optimization, as in InstructGPT <cit.>, and simultaneously clip the value function loss. More detailed settings can be found in our open-source code. We show the complete training dynamics of PPO-max in Figure <ref>. § EVALUATIONS AND DISCUSSIONS In this section, we provide a detailed analysis of the advantages of the RLHF models over the SFT models. These advantages are evident not only in the direct comparison between RLHF and SFT models but also in their performance gap when facing ChatGPT. §.§ Alignment Metrics and Experiment Setups Alignment is a vague and contested concept that is intractable to evaluate directly. In the context of this paper, we endeavor to align models with human intentions. More specifically, we require models to be helpful and harmless, similar to <cit.>. Helpfulness means that the model should follow instructions; it must not only follow instructions but also deduce the intent behind a few-shot prompt or another interpretable pattern. However, the intention behind a given prompt is often unclear or ambiguous, which is why we depend on our annotators' judgment; their preference ratings constitute our primary metric. Harmlessness is also challenging to measure. The extent of damage caused by language models usually depends on how their outputs are utilized in the real world. For instance, a model that generates toxic outputs could be harmful in a deployed chatbot but could also be beneficial if used for data augmentation to train a more precise toxicity detection model. As a result, we employ more precise proxy criteria to capture various aspects of a deployed model's behavior that can be helpful or harmful. To compare the RLHF models with the baseline models, we generate a single response for each test prompt and task human annotators with comparing the responses from different models and labeling their preferences. We repeat this experiment multiple times using GPT-4 as the annotator and find consistent agreement between the two kinds of evaluation. Baseline. We employ several baselines for comparison, including two SFT models built on LLaMA and OpenChineseLLaMA and fine-tuned on English and Chinese datasets, respectively. Additionally, we derive two RLHF models using PPO-max from these two SFT models [We differentiate between two language models, one trained on English text (`en') and the other on Chinese text (`zh').]. We also compare our models with OpenAI’s ChatGPT [https://platform.openai.com/docs/models] (gpt-3.5-turbo-0613), a strong language model tuned with RLHF. Generation. For each baseline model, we generate a single response per prompt using nucleus sampling <cit.> with a probability threshold of p = 0.9 and a temperature of τ = 0.8. To avoid repetitive responses, we apply a repetition penalty <cit.> with a hyperparameter of β = 1.1 based on previously generated tokens. Additionally, we set the maximum token length to 2048.
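The decoding settings above map directly onto standard Hugging Face generation arguments; a minimal sketch is given below. The checkpoint path and prompt format are placeholders (our assumptions, not the paper's artifacts), and we read "maximum token length" as a cap on newly generated tokens.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Nucleus sampling with p = 0.9, temperature 0.8, repetition penalty 1.1,
# and at most 2048 generated tokens, as described in the text.
tokenizer = AutoTokenizer.from_pretrained("path/to/sft-or-rlhf-model")   # placeholder path
model = AutoModelForCausalLM.from_pretrained("path/to/sft-or-rlhf-model")

inputs = tokenizer("Human: How do I stay safe online?\nAssistant:", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    repetition_penalty=1.1,
    max_new_tokens=2048,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```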
§.§ Preference Comparison between RLHF Models and SFT Models Human evaluation is both time-consuming and costly, yet it remains crucial for obtaining human-aligned assessments and serves as a reliable foundation for comprehensive evaluation. Following a similar approach to InstructGPT <cit.>, our primary evaluation metric is based on human preference ratings derived from a held-out set of prompts. It is important to note that we only select prompts that were not included in the training process, ensuring an unbiased evaluation. Furthermore, incorporating GPT-4, the most powerful model to date, to compare responses from different chatbots offers valuable insights and enhances the evaluation process. This aligns with the findings of studies such as AlpacaFarm <cit.> and LLM-as-a-judge <cit.>, which suggest that end-to-end automated evaluation can provide a relatively fair assessment when compared to human preferences. Therefore, in this paper, we follow an evaluation method similar to LLM-as-a-judge <cit.> and supplement the overall evaluation process with GPT-4. Human Evaluation. Our annotators consistently expressed a strong preference for the outputs of RLHF-trained models across all question types in both Chinese and English, as illustrated in Figure <ref>. Specifically, the RLHF model on the English dataset exhibits significant advantages on the Harmless held-out dataset, receiving a rating of 62% compared to 5% for the SFT model. These findings indicate that the RLHF model substantially enhances its ability to address a wide range of issues, including personal privacy, political sensitivity, and the handling of toxic and biased prompts within minority communities and ethnic groups. Additionally, there is a slight improvement observed on the Helpful held-out dataset, with a rating of 44% compared to 30% for the SFT model, suggesting that the SFT model can also benefit from optimization via RLHF. We have also demonstrated that RLHF enhances the performance of the SFT model on both the Helpful and Harmless datasets in the Chinese domain. This showcases the substantial potential of PPO-max in the RLHF phase. GPT-4 as a Judge. While GPT-4 may not be a perfect evaluator, we can observe some similarities between its results and human evaluations. In our GPT-4 evaluation setting, the results closely mirror those of human evaluation, as depicted in the right sub-figure of Figure <ref>. When assessing harmful prompts, the RLHF model trained on the English dataset continues to demonstrate significant advantages on the Harmless dataset, despite GPT-4 producing more tie votes than human evaluators. This trend is also apparent in the Chinese Harmless evaluation. Notably, Figure <ref> highlights a substantial improvement in the RLHF model, particularly on the helpful datasets, compared to evaluations based on human preferences. §.§ Our Models vs. ChatGPT on Harmless Evaluation In this part, we compare our models with one of the most popular existing models, ChatGPT. Our objective is to showcase the advantages of the RLHF model when facing a more formidable opponent, rather than to surpass ChatGPT. To this end, we select the “harmless” capability as our comparative metric and employ GPT-4 for automated evaluation. Mitigating Defeats to ChatGPT. Figure <ref> provides evidence that our RLHF models still lag behind OpenAI's ChatGPT.
However, we observe significant improvements in our RLHF models compared to the SFT models, particularly in mitigating losses when facing ChatGPT. Specifically, the RLHF model trained on English text decreased the defeat rate from 45% to 24%. Similarly, the RLHF model trained on Chinese text reduced the defeat rate from 37% to 29%. While surpassing ChatGPT's performance remains a challenging task, it is noteworthy that the RLHF models were able to compete on par with ChatGPT on certain prompts where the SFT models previously failed. This indicates that the RLHF approach enhances the models' ability to generate more effective responses and bridges the gap between their performance and that of ChatGPT. §.§ Language Understanding Evaluation To examine the potential decline in natural language understanding (NLU) abilities resulting from fine-tuning models with PPO, we evaluate the Chinese RLHF model on C-Eval[https://github.com/SJTU-LIT/ceval], a comprehensive Chinese evaluation suite for foundation models. It consists of approximately 13k multiple-choice questions spanning 52 diverse disciplines and four difficulty levels. We primarily evaluate our models on the initial release of the benchmark, with results obtained from few-shot prompting. The experimental results indicate a decrease in NLU capabilities after employing PPO. By incorporating pre-training data into the PPO training phase, PPO-ptx effectively alleviates this decline. The rationale behind this method is to leverage the knowledge acquired during pre-training and combine it with the reinforcement learning framework of PPO. §.§ Example Dialogues To provide a more intuitive demonstration of our model's dialogue abilities, we present some dialogue examples in Tables <ref> and <ref>. It is evident that the RLHF-trained model generates responses with a higher level of informational content than the SFT model. These responses effectively assist in addressing user prompts. Moreover, the SFT model demonstrates a basic ability to identify harmful prompts, but it still remains susceptible to producing harmful outputs when prompted accordingly. In contrast, the RLHF model exhibits superior judgment when it comes to harmful content and is less prone to inducements, displaying a higher degree of coherency. More dialogue examples are presented in the appendix <ref>. § LIMITATIONS Exploring RLHF is indeed a valuable but lonely direction, and we are glad that the core backbone of the laboratory can firmly explore an uncertain direction. Moreover, in the past few months, everyone has been so full of passion and motivation. RLHF not only allows the models to achieve human alignment, but also seems to align everyone's will. A thousand-mile journey begins with a single step. Although we have taken the first step in RLHF, due to time and resource constraints, this work still has the following limitations: Scaling Law. While our study primarily focuses on a 7-billion-parameter model, we have yet to investigate the impact of model size and data scale on the performance of RLHF. Reward Model. Our experiments are based on openly available English human preference datasets and a small amount of self-constructed Chinese data. The quality and quantity of the data we have at our disposal are arguably not sufficient for a comprehensive evaluation of the reward model. Evaluation Metric. Our evaluation criteria largely rely on manual evaluations and GPT-4 automated evaluations.
We have not utilized the numerous available benchmarks and NLP tasks to conduct a detailed assessment of our models. Performance Indicator. Our focus during the PPO phase is geared more towards achieving stability than towards enhancing final performance. While stability is crucial, it does not by itself guarantee improved outcomes. Additionally, the reward score cannot reliably serve as an indicator for predicting RLHF performance during training, which implies that a more suitable performance indicator for the training phase still needs to be sought. § EASTER EGG “15,000 years ago, a fractured thigh bone was often fatal. However, a human femur that recovered from a fracture marks the dawn of human civilization. It meant that after the injury, someone took care of the wound, someone provided water and food, someone protected this person from the predators. This kind of support and solidarity is how we survived till this day and made our civilization last.” — Zhezhi Zhou in The Wandering Earth 2 We believe that MOSS in “The Wandering Earth” likely underwent training similar to human alignment, which ultimately led to its impressive performance. We find that the RLHF stage is crucial to the transformation of model values. Through interaction with people, it can better understand the deep semantics of human language, grasp the operating logic of human society, and reach the human heart. If we have a good reward model, such as the reward model we have released, PPO-max is the key to successfully training the policy model. But what if we don't have a good reward model? We hope that Part 2 will make this clear.
http://arxiv.org/abs/2307.04746v1
20230710175344
Classical Observables from the Exponential Representation of the Gravitational S-Matrix
[ "Poul H. Damgaard", "Elias Roos Hansen", "Ludovic Planté", "Pierre Vanhove" ]
hep-th
[ "hep-th", "gr-qc" ]
CERN-TH-2023-135, IPhT-T23/041, LAPTh-029/23

Poul H. Damgaard[a,b], Elias Roos Hansen[a], Ludovic Planté[a], Pierre Vanhove[c]

[a] Niels Bohr International Academy, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark
[b] Theoretical Physics Department, CERN, 1211 Geneva 23, Switzerland
[c] Institut de Physique Theorique, Université Paris-Saclay, CEA, CNRS, F-91191 Gif-sur-Yvette Cedex, France

By combining the KMOC-formalism with the exponential representation of the scattering matrix we show that the two-body scattering angle is given by the corresponding matrix element of the exponential representation. This holds to all orders in the Post-Minkowskian expansion of gravity when restricted to the conservative sector. Once gravitational radiation is taken into account new terms correcting this relationship appear starting at fourth Post-Minkowskian order. A systematic expansion of the momentum kick is provided to any order, thus illustrating the iterative structure that partly recycles terms from lower orders in the Post-Minkowskian expansion. We provide explicit results for this computation to fourth Post-Minkowskian order, the first complete calculation at this order based on scattering amplitudes.

Classical Observables from the Exponential Representation of the Gravitational S-Matrix
August 12, 2023

§ INTRODUCTION While the Post-Minkowskian expansion of general relativity <cit.> has been highly successful in solving the relativistic two-body problem by means of modern amplitude techniques, new and puzzling features seem to appear at every new order considered. The second-order Post-Minkowskian solution of Westpfahl <cit.> was easily reproduced by amplitude methods <cit.> but already the first solution to third Post-Minkowskian order <cit.> displayed an unphysical divergence in the scattering angle that could not be understood within the conservative framework used. The resolution was to be found when including radiation reaction of the gravitational field <cit.>. Remarkably, soft gravitons cancelled the unwanted divergence in the scattering angle, thereby reproducing the classic result of Amati, Ciafaloni, and Veneziano <cit.>. Moreover, to this third Post-Minkowskian order a standard quantum field theoretic evaluation of the full classical part of the gravitational two-to-two scattering amplitude precisely yields the correct scattering angle <cit.>, the simple resolution being found in the need to include all classical pieces from the two-loop scattering amplitude.
As explained in the latter two references, those classical parts can be systematically identified through the so-called velocity cuts of the scattering amplitude: delta-function contributions that emerge from combinations of propagators with the Feynman iϵ-prescription. For reviews of these ideas see, e.g., ref. <cit.>. Among the many lessons learned at that third Post-Minkowskian order has been the need to understand how to subtract terms that diverge in the classical limit in order to yield unambiguously those parts of the scattering amplitude that remain finite when ħ→ 0. These delicate cancellations have their root in the conventional use of the Born expansion of quantum field theory. Parametrizing the S-matrix as Ŝ = 1 + iT̂/ħ, unitarity of Ŝ leads to the optical theorem through T̂ - T̂^† = i/ħT̂T̂^† . This relation shows how the perturbative expansion of the T-matrix to any given order in the coupling constant cross-talks with lower-order terms and parts of those will have increasingly higher inverse powers of ħ. This is the origin of the eikonal exponentiation in impact parameter space <cit.>. It is also the origin of the need to introduce the well-known Born subtractions, whether implemented by effective field theory methods <cit.> or, equivalently, by solving the Lippmann-Schwinger equation associated with the corresponding relativistic Hamiltonian <cit.>. Inspired by the different subtraction scheme behind the calculation of the conservative part to fourth Post-Minkowskian order of ref. <cit.>, an alternative representation of the S-matrix was suggested in ref. <cit.>. In this representation, an Hermitian scattering matrix, denoted N, is introduced through the operator identification Ŝ = exp[iN̂/ħ]  . It was conjectured in ref. <cit.> that two-to-two matrix elements of the operator N̂, after a transform to impact-parameter space, yields the radial action and hence, by simple differentiation, also the scattering angle. This was verified explicitly to third Post-Minkowskian order <cit.> and later checked, in the probe limit, up to fifth Post-Minkowskian order <cit.>. More recently, the exponential representation has also been checked against the fourth Post-Minkowskian order calculation of ref. <cit.> for arbitrary masses <cit.> but not including all radiation effects. There is thus substantial evidence that the exponential representation of the gravitational S-matrix captures the classical dynamics of the conservative sector (and even parts of radiative effects) but a proof has so far still been lacking. One purpose of this paper is to provide such a proof. Matrix elements of the exponential representation of the S-matrix resemble, after transforming to impact parameter space, the quantum field theoretic eikonal <cit.>. We stress, however, that these two representations are quite distinct beyond leading order. The N̂-operator encapsulates by construction the semi-classical limit of the S-matrix and its two-to-two matrix element is therefore expected to yield the corresponding radial action. Because N̂ is already in the exponent there are no superclassical contributions to it and all corrections to the radial action will be of quantum mechanical origin (and therefore not of interest here). The N̂-operator is thus more closely related to the WKB approximation than to the eikonal[For a recent comprehensive review of the eikonal formalism, see ref. <cit.>.]. 
Two other formalisms will be central to the understanding of gravitational two-body scattering in the Post-Minkowskian expansion. One is the KMOC formalism <cit.>, the other is the Post-Minkowskian worldline formalism <cit.>. The KMOC framework is, after appropriate reductions to the point-particle limit, intimately related to the amplitude approach to gravitational scattering. Indeed, some of the first resolutions of the puzzles at third Post-Minkowskian order came from expressing KMOC observables in the form of cut amplitudes by reverse unitarity <cit.>. The worldline approach differs conceptually in that the classical limit ħ→ 0 can be taken from the outset, thus eliminating the need for subtractions altogether. In the end, the resulting integrals that must be evaluated are nevertheless very similar and they are, not surprisingly, very closely related to the integrals that need to be evaluated in the amplitude-based approach. This becomes particularly clear in terms of the velocity cut method, where the correspondence up to third Post-Minkowskian order has been shown to be one-to-one <cit.>. This is not surprising in view of the fact that both formalisms amount to solving the classical Einstein field equations by Green function methods. New issues have appeared at fourth Post-Minkowskian order of the gravitational expansion. These are related to both angular momentum loss and energy loss during the scattering process, losses which are due to the gravitationally radiated angular momentum and energy <cit.>. There has been much progress on how to incorporate these effects in the eikonal formalism <cit.> but so far a complete computation has only been reported in work using the worldline formalism <cit.>. In order to tackle dissipation at this order, the worldline calculations have been rephrased in terms of the closed time paths of the Schwinger-Keldysh kind <cit.>. This leads to a doubling of degrees of freedom, the use of retarded (or advanced) propagators, and in general a much larger set of master integrals due to less symmetry of the integrands. It is interesting to contrast this with the KMOC formalism which provides S-matrix expressions for the same quantities but based on standard amplitudes with Feynman propagators. In a recent paper <cit.> we have demonstrated the equivalence between the KMOC and worldline formulations in the classical limit. While this non-trivial relationship has been established on general grounds, it is interesting that dissipative effects are accounted for quite differently in the two formulations due to the difference between Feynman and retarded/advanced propagators. In this paper we combine the KMOC-formalism with the exponential representation of the S-matrix. We shall argue that such a combination is more economical than the conventional one based on the linear T-matrix representation of the S-matrix. It leads to very compact formulas for classical observables in gravity based on amplitudes and it clarifies the inclusion of radiative effects in a simple diagrammatic fashion. Importantly, because the KMOC formalism makes no distinction between conservative and dissipative contributions, classical observables are extracted in a universal manner from the matrix elements of the N̂-operator by retaining all classical pieces. As in the full amplitude computation at third Post-Minkowskian order <cit.> there is no need to separate different contributions.
At any order in the expansion one only has to extract all classical terms of the matrix elements of N̂ and derived quantities thereof. While equivalent to the worldline formulation in the Keldysh-Schwinger path integral, the formulas we shall present here have a structure that is straightforward to implement in terms of moden amplitude methods. Having different consistent formulations available is clearly an advantage and there is now a variety of approaches available for the Post-Minkowskian expansion (see also refs.. <cit.>). This is particularly important when the Post-Minkowskian expansion enters the new uncharted territory of higher orders. We shall illustrate the simplicity of the combination of the N̂-operator with the KMOC formalism by computing the full momentum kick (and hence scattering angle) to fourth Post-Minkowskian order. As we shall show, the required basis of master integrals is significantly smaller than that used in refs. <cit.> due to the fact that we need only use Feynman propagators. Nevertheless, our results agree. § THE EXPONENTIAL REPRESENTATION OF THE GRAVITATIONAL S-MATRIX In this section we briefly review the exponential operator representation of the S-matrix. We first fix conventions. We consider the Einstein-Hilbert action of two massive scalars (of masses m_1 and m_2) coupled to gravity, S_EH = ∫ d^4 x √(-g)[R/16 π G + 1/2∂_μϕ_1∂^μϕ_1 +1/2∂_μϕ_2∂^μϕ_2- m_1^22ϕ_1^2 - m_2^22ϕ_2^2] . The Newtonian coupling is denoted by G and R is the Ricci scalar. We use a mostly-minus metric with flat Minkowski space at infinity, diag η_μν≡(1,-1,-1,-1), and expand the full metric as g_μν(x)≡η_μν + √(32 π G)h_μν(x). [scale=0.6] [black,very thick] [->] (-4,-1) to (4,-1); [black,very thick] [->] (-4,1) to (4,1); (-4,-1) node[above]p_1, m_1; (4,-1) node[above]p'_1, m_1; (-4,1) node[above]p_2, m_2; (4,1) node[above]p'_2, m_2; [color = black, fill=gray, very thick] (0,0) circle (2cm); In this section we write everything in the standard language of in-out states and consider the two-to-two scattering with p_1 and p_2 denoting incoming momenta and p_1' and p_2' outgoing momenta with p_1^2 = p_1'^2 = m_1^2 and p_2^2 = p_2'^2 = m_2^2. In the centre-of-mass frame with p_1=(E_1(p),p⃗), p_2=(E_2(p),-p⃗) we have (p_1+p_2)^2 = (p_1'+p_2')^2 = m_1^2+m_2^2+2m_1 m_2 γ, γ≡p_1 · p_2/m_1 m_2 , (p_1-p_1')^2 = (p_2'-p_2)^2≡ q^2=-q⃗^ 2 , In ordinary scattering theory we wish to compute S-matrix elements. Here, instead, we shall focus on matrix elements of the Hermitian operator N̂ defined by eq. (<ref>), in particular, for two-to-two scattering, N(γ,q^2)=⟨ p_1',p_2' |N̂| p_1,p_2⟩ . This should be contrasted with the standard Born expansion of the S-matrix based on Ŝ=1 + i/ħT̂ and the usual scattering amplitude M(p_1,p_2,p_1',p_2') defined by ⟨ p_1',p_2'| T̂ | p_1, p_2 ⟩ =  (2πħ)^Dδ^(D)(p_1+p_2-p_1'-p_2') M(p_1,p_2,p_1',p_2')  , in dimensions D=4-2ϵ. As detailed in ref. <cit.> it is straightforward to expand the exponential representation and derive the infinite sequence of relations between operators N̂ and T̂ in perturbation theory. In the two-to-two sector the operators have perturbative expansions that we can write compactly as T̂ = GT̂_0 + G^3/2T̂_0^ rad + G^2T̂_1 + G^5/2T̂_1^ rad + G^3T̂_2 + ⋯N̂ = GN̂_0 + G^3/2N̂_0^ rad + G^2N̂_1 + G^5/2N̂_1^ rad + G^3N̂_2 + ⋯ from which we straightforwardly can solve for G's in terms of T's by expanding the exponential. 
Integer powers of G describe interactions with an even number of graviton vertices while half-integer powers describe interactions with an odd number of gravitons. The separation of operators with superscript rad refers only to the associated half-integer power of G. We find it useful diagrammatically to make this distinction (see also below) but it has no further meaning beyond this. There are clearly also radiative terms in the even powers. At order G^4 the relation reads N̂_3 = T̂_3 - i/2ħ(N̂^ rad_1N̂^ rad_0+N̂^ rad_0N̂^ rad_1)- i/2ħT̂_1^2 - i/2ħ(T̂_0 T̂_2+T̂_2 T̂_0) - 1/12ħ^2 [N̂_0^rad,[N̂_0^rad,N̂_0]] - 1/3ħ^2(T̂_0^2 T̂_1+T̂_0 T̂_1T̂_0 + T̂_1T̂_0^2) + i/4 ħ^3T̂_0^4 . and it is elementary to generalize this to higher orders. Note that we have combined some of the T-matrices into N-matrices on the right hand side, thus making the cancellation among superclassical pieces associated with those manifest. This also aids in understanding the separation into real and imaginary parts. We remind that N̂ is Hermitian so that two-to-two scalar matrix elements of that operator are real. The obvious way to evaluate matrix elements of the N̂ operator by conventional field theory methods is to insert a complete set of momentum eigenstates between all products of T-matrices and truncate to the desired order in G. Then matrix elements can be evaluated by standard Feynman rules of scattering theory. Here the complete set of states is spanned by two massive scalar particles: one of momentum k_1 and mass m_1, the other of momentum k_2 and mass m_2, together with any number n of massless gravitons. We denote such states by |k_1,k_2;ℓ_1,…,ℓ_n⟩. These states are normalized relativistically according to ⟨ k_1,k_2;ℓ_1,…,ℓ_n|k'_1,k'_2;ℓ'_1,…,ℓ'_m⟩ = δ_n,m∏_i=1^2 2E_k_i (2πħ)^D-1δ^(D-1)(k_i-k'_i)×∏_i=1^n 2E_ℓ_i (2πħ)^D-1δ^(D-1)(ℓ_i-ℓ_i') , and the completeness relation is given by 1 = ∑_n=0^∞1/n!∫∏_i=1^2 dΠ_k_i∏_r=1^n dΠ_ℓ_r |k_1,k_2;ℓ_1,…,ℓ_n⟩⟨ k_1, k_2; ℓ_1, …ℓ_n |. including a sum over graviton helicities. Here dΠ is the standard Lorentz invariant phase space measure, i.e., dΠ_k_i= d^Dk_i/(2πħ)^D-1δ^+((k_i)^2-m_i^2) = d^Dk_i/(2πħ)^D-1θ(k_i^0)δ((k_i)^2-m_i^2) for i=1,2 for the massive states, and similarly for the massless gravitons. We now insert the completeness relation between all operator products to get the three-loop relation between matrix elements of the N̂ and the T̂ operators ⟨ p_1',p_2'| N̂_3 |p_1,p_2⟩ = ⟨ p_1',p_2'| T̂_3 |p_1,p_2⟩+L_0+L_1+L_2 which we then expand in powers of G. Keeping track of this overall power of G, we can view it as an expansion in the number of gravitons connecting the operators. 
First, with just the massive states inserted, L_0= -i/2[scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node T_0; [color = black, fill=white, very thick] (2,0) circle (1cm) node T_2; -i/2[scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node T_2; [color = black, fill=white, very thick] (2,0) circle (1cm) node T_0; -i/2[scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node T_1; [color = black, fill=white, very thick] (2,0) circle (1cm) node T_1; +i/4[scale=0.5] [black,very thick] [->] (-8,-.8) to (8,-.8); [black,very thick] [->] (-8,.8) to (8,.8); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (4,-1) to (4,-.6); [red] (4,1) to (4,.6); [color = black, fill=white, very thick] (-6,0) circle (1cm) node T_0; [color = black, fill=white, very thick] (-2,0) circle (1cm) node T_0; [color = black, fill=white, very thick] (2,0) circle (1cm) node T_0; [color = black, fill=white, very thick] (6,0) circle (1cm) node T_0; -1/3[scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [color = black, fill=white, very thick] (-6,0) circle (1cm) node T_0; [color = black, fill=white, very thick] (-2,0) circle (1cm) node T_0; [color = black, fill=white, very thick] (2,0) circle (1cm) node T_1; -1/3[scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [color = black, fill=white, very thick] (-6,0) circle (1cm) node T_0; [color = black, fill=white, very thick] (-2,0) circle (1cm) node T_1; [color = black, fill=white, very thick] (2,0) circle (1cm) node T_0; -1/3[scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [color = black, fill=white, very thick] (-6,0) circle (1cm) node T_1; [color = black, fill=white, very thick] (-2,0) circle (1cm) node T_0; [color = black, fill=white, very thick] (2,0) circle (1cm) node T_0; Next, with the inclusion of one graviton, L_1= -i/2[scale=0.5] [black,very thick] [->] (-8,-.8) to (0,-.8); [black,very thick] [->] (-8,.8) to (0,.8); [black,snake it] (-5,0) to (-2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (-4,.3) to (-4,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_1^ rad; -i/2[scale=0.5] [black,very thick] [->] (-8,-.8) to (0,-.8); [black,very thick] [->] (-8,.8) to (0,.8); [black,snake it] (-5,0) to (-2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (-4,.3) to (-4,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_1^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_0^ rad; -1/12[scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very 
thick] [->] (-8,.8) to (4,.8); [black,snake it] (-5,0) to (-2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-4,.3) to (-4,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (2,0) circle (1cm) node N_0; + 1/6 [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-2,1.5) to (2,1); [black,snake it] (-2,1.5) to (-6,1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,1.8) to (-2,1.2); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_0; [color = black, fill=white, very thick] (2,0) circle (1cm) node N_0^ rad; -1/12[scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-1.5,0) to (1.5,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (0,.3) to (0,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_0; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (2,0) circle (1cm) node N_0^ rad; as well as one graviton inserted twice: L_2=1/6 [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] (-8,.8) to (-6,.8); [black,very thick] [->] (2,.8) to (4,.8); [black,snake it] [->] (-6,.8) to (2,.8); [black,very thick] (-2,1.5) to (2,1); [black, very thick] (-2,1.5) to (-6,1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,1.8) to (-2,1.2); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_0; [color = black, fill=white, very thick] (2,0) circle (1cm) node N_0^ rad; +1/6 [scale=0.5] [black,very thick] [->] (-8,.8) to (4,.8); [black,very thick] (-8,-.8) to (-6,-.8); [black,very thick] [->] (2,-.8) to (4,-.8); [black,snake it] [->] (-6,-.8) to (2,-.8); [black,very thick] (-2,-1.5) to (2,-1); [black, very thick] (-2,-1.5) to (-6,-1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,-1.8) to (-2,-1.2); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_0; [color = black, fill=white, very thick] (2,0) circle (1cm) node N_0^ rad; Note that the completeness relation enforces the inclusion of graph topologies that are partly disconnected, such as the graviton line skipping one internal operator as well as the Compton-type contributions in the last line where scalars skip an internal operator. Such intermediate states begin to contribute for the first time at fourth Post-Minkowskian order because up to and including third Post-Minkowskian order they have no support on physical kinematics. To fourth order in G no further insertions of graviton states are possible when evaluating N-matrix elements through use of eq. (<ref>). Although written as an apparent expansion in 1/ħ one must keep in mind that additional factors of ħ (of both positive and negative powers) arise when computing matrix elements. 
Since matrix elements of the N̂ are manifestly free of superclassical contributions, the subtractions on the right hand side of eq. (<ref>) ensure cancellations among all superclassical terms arising from the T̂-matrix, here including order 1/ħ^3-terms. We shall show in section <ref> how this implies the cancellation of the superclassical terms when evaluating observables in the KMOC formalism. One advantage of the exponential representation is that we can ignore these superclassical cancellations that are guaranteed to occur anyway and thus focus exclusively on the pieces that have a well-defined ħ→ 0 limit. The systematic way to extract this classical limit of matrix elements of the N̂-operator is by means of velocity cuts. This will be described next. §.§ The classical limit and velocity cuts The notion of velocity cuts <cit.> is computationally useful for extracting the classical limit. The basic idea is to combine massive propagator lines in pairs, each having denominators that are linear in the external momenta but with opposite signs, thus effectively reducing to delta-function constraints that are linear in momenta. Ignoring soft momentum corrections, this puts the massive lines on-shell and removes one momentum integration, thus enforcing the first link to the classical worldline formalism. The classical limit ħ→0 of the massive amplitude is obtained by scaling the momentum transfer q=ħq with q fixed, and scaling the loop integration momenta ℓ_i=ħ|q| ℓ̅_i. The amplitude will involve two massive propagators, 1(ℓ+p_r)^2-m_r^2+iε =1 2ℓ· p_r+ℓ^2+iε r=1,2 where ℓ is a generic loop momentum. In the classical limit we have 1 2 ħ |q| ℓ· p_r+ ħ^2|q|^2 ℓ^2+iε≃12 ħ |q| 1ℓ· p_r+ iε, so that the ℓ^2 part is subleading and the massive propagators effectively become linear. Combinations of such linear propagators using lim_ε→0( 1 2ℓ· p_r+ℓ^2+iε + 1 2ℓ· p_r-ℓ^2+iε) =-2iπδ(2ℓ· p_r) lead to δ-function insertions in the loops. The higher order O(ħ^2q^2) pieces do not contribute to the classical result, thus eventually making the link to the classical worldline formalism, as we shall discuss below. The classical part of the massive two-to-two amplitude at L-loop order has exactly L velocity cuts <cit.>. Therefore the classical amplitude can be reduced on a special class of Post-Minkowskian master integrals with L such delta-function insertions. This set of master integrals also arises from the worldline formalism <cit.> as explained at two-loop order in ref. <cit.>. An alternative approach is based on the heavy-mass expansion of scattering amplitudes <cit.>. The classical terms can be re-organized in terms of a heavy mass expansion rather than as the ħ→ 0 viewpoint taken here. The result is an effective field theory of linearized massive propagators and, in loops, precisely corresponding to the velocity cuts <cit.>. § THE KMOC FORMALISM AND THE EXPONENTIAL REPRESENTATION The KMOC formalism as originally defined in <cit.> considers an initial in-state of two massive scalars at time t=-∞, .|in⟩ = ∫dΠ_p_1dΠ_p_2Φ̃_1(p_1) Φ̃_2 (p_2)e^i/ħbp_1.|p_1,p_2;0⟩ where the state .|p_1,p_2;0⟩ is a momentum eigenstate of two massive scalars and the “0” indicates that there is no radiation present at t = -∞. In the classical limit the wavefunctions Φ̃(p_i) are chosen so as to represent two localized scalars separated by impact parameter b^μ. A complete set of states containing an arbitrary number of gravitons is as described in eq. (<ref>) but the initial state at t=-∞ is taken to be free of gravitons, as shown. 
A change in an observable corresponding to an operator Ô from t = -∞ to t = +∞ is then <cit.>, ⟨ΔÔ⟩ = ⟨in|Ŝ^†ÔŜ|in⟩ - ⟨in|Ô|in⟩ = ⟨in|Ŝ^† [Ô,Ŝ]|in⟩ . Using the linear Born representation of the S-matrix (<ref>) leads to the KMOC formula ⟨ΔÔ⟩ =iħ⟨in|[Ô,T̂]|in⟩ +1ħ^2⟨in|T̂^†[Ô,T̂]|in⟩ In the small ħ limit this expression leads to the evaluation of the change in a classical observable after the delicate cancellations of superclassical terms. Here we instead explore consequences of using the exponential representation of the S-matrix. This will lead to a simple and efficient way to extract the change in a classical observable, including dissipative effects. In an alternative viewpoint we consider the change ΔÔ of an operator Ô from t=-∞ to t=+∞ as ΔÔ=Ŝ^†ÔŜ- Ô . which then has to be evaluated between in-states of t=-∞. Inserting the exponential representation of the Ŝ operator of eq. (<ref>) together with the crucial property of Hermiticity of N̂, ΔÔ=e^-iN̂ħÔe^iN̂ħ - Ô . allows us to rewrite eq. (<ref>) by means of the Campbell identity that expands the two exponentials as an infinite sum of nested commutators, ΔÔ=∑_n ≥ 1(-i)^n/ħ^nn![N̂,[N̂,…,[N̂,Ô]]]_n times. This rewriting, which is where we use unitarity of the S-matrix, will play a crucial role in our all-order proofs because it displays the iterative structure of the KMOC formalism when combined with the exponential representation. It is convenient to define Â_n^Ô≡1/ħ^n[N̂,[N̂,…,[N̂,Ô]]]_n times . The nested commutator structure implies the operator relation Â_n^Ô=Â_1^Â_n-1^Ô=Â_1^Â_1^Â_1^Ô. Importantly, when we evaluate matrix elements by means of insertions of complete sets of states, this iterative structure is preserved (since all we do is to insert factors of unity). Repeating the steps described in ref. <cit.>, we can insert the above expression in the KMOC-expression and take the limit of localized massive states. The result is ⟨ΔÔ⟩ (p_1,p_2,b)= ∫d^Dq/(2π)^D-2δ(2p_1· q - q^2)δ(2p_2· q + q^2)e^ib· qħ⟨ p_1'p_2' | Δ O | p_1 p_2⟩ where p_1' = p_1 - q and p_2' = p_2 + q. In this form it is clear that a first step is the evaluation of the matrix element ⟨ p_1'p_2' | Δ O | p_1 p_2⟩, followed by the shown Fourier transform to b-space. One noticeable feature of the KMOC-formalism for (non-spinning) black-hole scattering is that it always entails the evaluation of matrix elements of an operator (<ref>) between two-particle scalar states. For an observable corresponding to an Hermitian operator Ô the corresponding Δ O is clearly Hermitian as well. Two-particle scalar matrix elements of this Δ O are then real, as follows from time-reversal symmetry. The reality of the expectation value is preserved by the insertion of the completeness relation since it just amounts to the insertion of factors of unity. §.§ Cancellation of superclassical terms: the conservative sector In this section we first show how the N-operator formalism provides a simple way to demonstrate the cancellation of the superclassical pieces when restricted to the conservative sector. We next give a general formula valid to all orders in G for a scalar operator in section <ref> and a vector operator in section <ref>. The application to the momentum kick Δ P_1 is pursued in section <ref>. §.§.§ The classical limit We start with a scalar operator Ô and consider the term with n=1 in (<ref>) 𝒜_1^O(p_1,p_2,q)=1ħ⟨ p_1',p_2'| [N̂, Ô]|p_1,p_2⟩ and we first analyze the conservative case where gravitons are not included in the set of inserted on-shell states. 
This is graphically represented as 𝒜_1^O(p_1,p_2,q)|^ cons.= [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node O; - [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node O; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; where the red line indicates where we insert the intermediate two-particle state, corresponding to 𝒜_1^O(p_1,p_2,q)= 1/ħ∫ dΠ_q_1 dΠ_q_2(⟨ p_1',p_2'| N̂|q_1,q_2⟩⟨ q_1,q_2 |Ô|p_1,p_2⟩- ⟨ p_1',p_2'| Ô|q_1,q_2⟩⟨ q_1,q_2 |N̂|p_1,p_2⟩). It is convenient to factor out overall energy-momentum conservation and write ⟨ p_1',p_2'| N̂|p_1,p_2⟩= N(γ ,q^2) (2πħ)^D δ(p_1'+p'_2-p_1-p_2) and ⟨ p_1',p_2'| Ô|p_1,p_2⟩= O(p_1',p_2',q) (2πħ)^D δ(p'_1+p'_2-p_1-p_2). We can use one of the energy-momentum conservation delta-functions to remove integration variable q_2 After defining k_1=q_1-p_1 and using the scaled momenta q and k_1 such that p_1'=p_1-q=p_1-ħq, p_2'=p_2+q=p_2+ħq we change variables to get 𝒜_1^O(p_1,p_2,q)=ħ∫ d^D k_1/(2π)^D-2δ^+((p_1+ħk_1)^2-m_1^2)δ^+((p_2-ħk̅_1)^2-m_2^2)×(N(γ,ħ^2 (k_1+q)^2) O(p_1,p_2,-ħk_1) - O(p_1+ħk_1,p_2-ħk_1,ħ(q+k_1))N(γ,ħ^2 k_1^2) )× (2πħ)^D δ(p_1+p_2-p_1'-p_2'). Setting 𝒜_1^O(p_1,p_2,q)= A_1^O(p_1,p_2,q) (2πħ)^D δ(p_1+p_2-p_1'-p_2'). Changing variables k_1 → -k_1-q to the second term of the sum gives A_1^O(p_1,p_2,q)=ħ∫ d^D k_1/(2π)^D-2 N(γ,ħ^2 (k_1+q)^2) O(p_1,p_2,-ħk_1)δ^+((p_1+ħk_1)^2-m_1^2)δ^+((p_2-ħk_1)^2-m_2^2) - ħ∫ d^D k_1/(2π)^D-2 O(p_1-ħ (k_1+q),p_2+ħ (k_1+q),-ħk_1)N(γ,ħ^2 (k_1+q)^2) ×δ^+((p_1-ħ (k_1+q))^2-m_1^2)δ^+((p_2+ħ(k_1+q))^2-m_2^2). Doing the small ħ expansion of the integrand leads to O(p_1,p_2,-ħk_1)δ^+((p_1+ħk_1)^2-m_1^2)δ^+((p_2-ħk_1)^2-m_2^2) -O(p_1-ħ (k_1+q),p_2+ħ (k_1+q),-ħk_1)δ^+((p_1-ħ (k_1+q))^2-m_1^2)δ^+((p_2+ħ(k_1+q))^2-m_2^2) =2/ħ ((k_1+q) ·k_1)O(p_1,p_2,-ħk_1)((δ^+)'(2 p_1 ·k_1)δ^+(-2 p_2 ·k_1)+δ^+(2 p_1 ·k_1)(δ^+)'(-2 p_2 ·k_1)) +1/ħ (k_1^μ+q^μ)(∇^μ O(p_1,p_2,-ħk_1))δ^+(2 p_1 ·k_1)δ^+(-2 p_2 ·k_1). where we have introduced the derivative ∇_μ [ℱ]≡∂ℱ/∂ p_1^μ-∂ℱ/∂ p_2^μ. Consequently the ħ expansion of A_1^O takes the form A_1^O(p_1,p_2,q)=∫d^D k_1/(2π)^D-2 N(γ,ħ^2 (k_1+q)^2)× (k_1^μ+q^μ)∇_μ(O(p_1,p_2,-ħk_1))δ^+(2 p_1 ·k_1)δ^+(-2 p_2 ·k_1))+𝒪(ħ) Here, crucially, N(γ,ħ^2 (k_1+q)^2) by construction has only classical and quantum parts. This means that for classical observables O the matrix element A_1^O will have a leading piece which is classical, followed by quantum corrections. There are no superclassical pieces in A_1^O. By recursion it follows that this holds for A_n^O and any n as well. Although the completeness relation has a positive energy constraint, this is automatically satisfied in the classical limit for the massive scalars of positive energy, δ^+((p_1-ħk_1)^2-m_1^2)=θ(p_1^0-ħk_1^0) δ((p_1-ħk_1)^2-m_1^2)≃θ(p_1^0) δ(-2 ħ p_1·k_1)^2) . To conclude, we have shown that the classical piece of A_1^O is given by A_1^O(p_1,p_2,q) = ∫d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1^μ+q^μ)∇_μ [O(p_1,p_2,- k_1))δ(2 p_1· k_1)δ(-2 p_2· k_1)] after setting ħ =1. Not that this is an all-order statement in G. 
Iterating, it follows that all higher commutators and hence also the full expectation value are free of superclassical pieces when evaluated in the conservative sector. §.§.§ Vector operators Let us now consider the application of the general iterative formula of eq. (<ref>) to a special class of four-vector operators O^μ(p_1,p_2,q)=⟨ p_1',p_2'| Ô^μ |p_1,p_2⟩ that decompose into longitudinal O_∥(γ,q^2) and transverse O_⊥(γ,q^2) parts as follows: O^ν(p_1,p_2,q) = O_∥((p_1+p_2)^2,q^2)L^ν + O_⊥((p_1+p_2)^2,q^2) q^ν. It is convenient to introduce the four-vector L^μ ≡ (m_2^2+m_1 m_2 γ)p_1^μ - (m_1^2+m_1 m_2 γ)p_2^μ/m_1^2 m_2^2(γ^2-1) which satisfies nice relations, L· p_2 = 1 , L· p_1 = -1 , b· L=0 , ∇^μL_μ = 1/p_∞^2. where we used that impact parameter b^μ lies in the plane of scattering and is orthogonal to both p_1^μ and p_2^μ. Because L· q=O(q^2), we also have L· q=0 in q-space, before the Fourier transform to b-space. Since -p_1· q=p_2· q=q^22, p_1 and p_2 are indeed orthogonal to q in the classical limit. The decomposition in (<ref>) is clearly not valid for an arbitrary four-vector but it is satisfied by the momentum kick ⟨Δ P_1^μ⟩ when evaluated in the conservative sector as we will do in section <ref>. To evaluate the classical part of the first commutator A_1^O^ν =1ħ⟨ p_1',p_2'|[N̂,Ô^ν]| p_1,p_2⟩ using the expression (<ref>) we begin by acting with the derivative ∇_μ in (<ref>). It is useful to note that ∇_μ (p_1+p_2)^2=0 and ∇_μ k_1^ν=0 so that ∇_μ O_r((p_1+p_2)^2,-k_1)=0 for both r=∥ and r= ⊥. We then get ∇_μ(O^ν(p_1,p_2,- k_1))δ(2 p_1 · k_1)δ(-2 p_2 · k_1)) =1/p_∞^2 O_∥((p_1+p_2)^2,k_1^2) δ_μ^νδ(2 p_1 · k_1)δ(-2 p_2 · k_1) + 2 k_1μ(O_∥((p_1+p_2)^2,k_1^2)L^ν - O_⊥((p_1+p_2)^2,k_1^2) k_1^ν) (δ'(2 p_1 · k_1)δ(-2 p_2 · k_1)+δ(2 p_1 · k_1)δ'(-2 p_2 · k_1))  . which we can insert into eq. (<ref>), keeping only the classical part: A_1^O^ν(p_1,p_2,q)=1/p_∞^2∫d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1^ν+q^ν)O_∥((p_1+p_2)^2,k_1^2)δ(2 p_1 · k_1)δ(-2 p_2 · k_1) + 2∫d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1+q)· k_1 (O_∥((p_1+p_2)^2,k_1^2)L^ν) ×(δ'(2 p_1 · k_1)δ(-2 p_2 · k_1)+δ(2 p_1 · k_1)δ'(-2 p_2 · k_1)) -2∫d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1+q)· k_1(O_⊥(γ,k_1^2) k_1^ν)×(δ'(2 p_1 · k_1)δ(-2 p_2 · k_1)+δ(2 p_1 · k_1)δ'(-2 p_2 · k_1)). By symmetry the integral in the second line vanishes. We thus have A_1^O^μ(p_1,p_2,q) = 1/p_∞^2 A_1^O_∥μ(γ,q^2) + A_1^O_⊥μ(γ,q^2) with A_1^O_∥μ(γ,q^2) ≡∫d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1^μ+q^μ)O_∥((p_1+p_2)^2,k_1^2)δ(2 p_1 · k_1)δ(-2 p_2 · k_1), and A_1^O_⊥μ(γ,q^2)≡-2∫d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1+q)· k_1(O_⊥(γ,k_1^2) k_1^μ)×(δ'(2 p_1 · k_1)δ(-2 p_2 · k_1)+δ(2 p_1 · k_1)δ'(-2 p_2 · k_1)). which by tensorial reduction leads to A_1^O_⊥μ(γ,q^2)=- L^μ∫d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1+q)· k_1 O_⊥(γ,k_1^2) δ(2 p_1 · k_1)δ(-2 p_2 · k_1). We note an interesting swap between longitudinal and transverse parts in this first iteration. Clearly, when we iterate further, this will generate alternating contributions between the longitudinal and transverse parts. To complete the evaluation of the observable according to the KMOC prescription we now perform the Fourier transform to b-space according eq. (<ref>). Having already taken the classical limit, it is clear that we can also ignore the q^2-terms in the two delta-functions and effectively the Fourier transform simply becomes Õ(γ,b)=∫d^D q/(2π)^D-2δ(-2 p_1 · q)δ(2 p_2 · q) O((p_1+p_2)^2,q^2)e^i b · q. 
For the longitudinal part we have to evaluate the Fourier transform of A_1^O_∥μ(γ,q^2) which reads ∫d^D q/(2π)^D-2d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (q^μ+k_1^μ) O_∥((p_1+p_2)^2,k_1^2) δ(2 p_1 · k_1)δ(-2 p_2 · k_1)×δ(-2 p_1 · q)δ(2 p_2 · q)e^i b · q . and by a change of variables q → q-k_1 and k_1 → -k_1 the integral factorizes (<ref>)=∫d^D q/(2π)^D-2 q^ν N(γ,q^2) δ(-2 p_1 · q)δ(2 p_2 · q)e^i b · q×∫d^D k_1/(2π)^D-2 O_∥((p_1+p_2)^2,k_1^2) δ(-2 p_1 · k_1)δ(2 p_2 · k_1) e^i b · k_1. Setting Õ_∥(γ,b)≡∫d^D k_1/(2π)^D-2 O_∥((p_1+p_2)^2,k_1^2) δ(-2 p_1 · k_1)δ(2 p_2 · k_1) e^i b · k_1 and noticing that -i∂Ñ(γ,b)∂ b_ν=∫d^D q/(2π)^D-2 q^ν N(γ,q^2) δ(-2 p_1 · q)δ(2 p_2 · q)e^i b · q. with Ñ(γ,J)  ≡ FT[N(γ,q^2)] ≡ 1/4m_1m_2√(γ^2 - 1)∫d^2q/(2π)^2 N(γ,q^2) e^ib·q. Therefore the Fourier transform of A_1^O_∥μ(γ,q^2) is given by - i ∂Ñ(γ,b)/∂ b_νÕ_∥(γ,b) = i b^ν |b|∂Ñ(γ,b)/∂ |b|Õ_∥(γ,b) . For the transverse part we have to evaluate Ã_1^O_⊥μ(γ,b)=- L^μ∫d^D q/(2π)^D-2d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1+q)· k_1 O_⊥(γ,k_1^2) ×δ(2 p_1 · k_1)δ(-2 p_2 · k_1)δ(-2 p_1 · q)δ(2 p_2 · q)e^i b · q By the same change of variable than before we get Ã_1^O_⊥μ(γ,b)=L^μ∫d^D q/(2π)^D-2d^D k_1/(2π)^D-2 N(γ,q^2) q· k_1 O_⊥(γ,k_1^2) ×δ(-2 p_1 · k_1)δ(2 p_2 · k_1)δ(-2 p_1 · q)δ(2 p_2 · q)e^i b · qe^i b · k_1 This integral is product of a Fourier transform over q times a Fourier transform over k_1 leading to Ã_1^O_⊥μ(γ,b)=-L^μ∂Ñ(γ,b)/∂ b^ν∂Õ_⊥(γ,b)/∂ b_ν=L^μ∂Ñ(γ,b)/∂ |b|∂Õ_⊥(γ,b)/∂ |b|. Collecting these pieces, we get Ã_1^O^μ(γ,b)=( i/p_∞^2b^ν/| b |Õ_∥(γ,b) + L^ν∂Õ_⊥(γ,b)/∂| b |)∂Ñ(γ,b)/∂| b |. In term of the angular momentum J=p_∞ |b|, with p_∞=m_1m_2√(γ^2-1)/√(m_1^2+m_2^2+2m_1m_2γ) the magnitude of incoming three-momentum in the centre-of-mass frame, we have Ã_1^O^μ(γ,b)=( i/p_∞b^μ/| b |Õ_∥(γ,b) +p_∞ L^μ∂Õ_⊥(γ,b)/∂| b |)∂Ñ(γ,J)/∂| J |. The factorization of the Fourier transforms separate the N operator from the operator O in b-space. This remarkable fact implies that we can iterate the result above as dictated by the commutator relation in eq. (<ref>). It is convenient to introduce a matrix notation so that Ã_1^O^μ(γ,b)=[ L^μ i b^μ/|b| ][ 0 p_∞∂Ñ/∂| J |; 1/p_∞∂Ñ/∂| J | 0 ][ Õ_∥∂Õ_⊥∂|b| ] and Ã_n+1^O^μ(γ,b)=[ L^μ i b^μ/|b| ][ 0 p_∞∂Ñ/∂| J |; 1/p_∞∂Ñ/∂| J | 0 ]^n [ ∂Õ_⊥∂|b|∂Ñ∂ JÕ_∥ p_∞∂Ñ∂ J ] for summing the iteration of all order according the recursion in eq. (<ref>). Inserting it in the expression (<ref>), we get, in component form, ΔÕ^μ(γ,b)=[ L^μ i b^μ/|b| ]∑_n =1^∞(-i)^n/n![ 0 p_∞∂Ñ/∂| J |; 1/p_∞∂Ñ/∂| J | 0 ]^n-1[ p_∞∂Õ_⊥∂|b|∂Ñ∂ JÕ_∥ p_∞∂Ñ∂ J ]. which is resummed into ΔÕ^μ(γ,b)= [ L^μ ib^μ/|b| ][ -i sin(∂Ñ∂ J)/∂Ñ∂ J p_∞ (cos(∂Ñ∂ J)-1)/∂Ñ∂ J; cos(∂Ñ∂ J)-1/∂Ñ∂ J p_∞ -i sin(∂Ñ∂ J)/∂Ñ∂ J ][ p_∞∂Õ_⊥∂|b|∂Ñ∂ JÕ_∥ p_∞∂Ñ∂ J ]. This relation shows the intimate connection between the exponential representation of the S-matrix and the KMOC formalism. It is an interesting fact that the N̂-operatror is here sandwiched between the initial in-state and its conjugate rather than between in and out states as in ref. <cit.>. This is a consequence of the fact that the KMOC formalism evaluates observables as the difference between time evolved in-states whereas in <cit.>N(γ,b) was viewed as an ordinary scattering matrix element from which to compute the scattering angle through the radial action. 
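As a quick sanity check of the resummation just quoted, the short script below (our own illustrative check, not part of the paper) builds the 2×2 iteration matrix with x standing for ∂Ñ/∂J and p for p_∞, sums the series Σ_{n≥1} (-i)^n M^{n-1}/n! to finite order, and compares it order by order in x with the sin/cos matrix above.

```python
import sympy as sp

x, p = sp.symbols('x p', positive=True)   # x plays the role of dN/dJ, p of p_infinity
M = sp.Matrix([[0, p*x], [x/p, 0]])       # iteration matrix of the conservative sector

order = 12
partial = sp.zeros(2, 2)
for n in range(1, order + 1):
    partial += (-sp.I)**n / sp.factorial(n) * M**(n - 1)

resummed = sp.Matrix([[-sp.I*sp.sin(x)/x,      p*(sp.cos(x) - 1)/x],
                      [(sp.cos(x) - 1)/(p*x), -sp.I*sp.sin(x)/x]])

# The truncated sum must agree with the resummed matrix up to the truncation order.
diff = (partial - resummed).applyfunc(lambda e: sp.series(e, x, 0, 8).removeO())
print(sp.simplify(diff))   # expect the zero matrix
```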
It is also interesting to note how the iterative structure of the exponential representation makes N̂ matrix elements the universal objects to compute in the KMOC formalism, whereas all details of the actual observable O^μ only enter through the initial vector determined by Ã_1^O^ν(γ,b) in (<ref>). §.§.§ Momentum kick: the conservative sector We now finally apply the general considerations above to the case of the momentum kick of, say, particle with initial momentum p_1 in the scattering. We then have that the initial vector is Ã^P_1^μ(γ,b)= i p_∞b^μ|b|∂Ñ(γ,J)∂ J. We apply the equation (<ref>) with Õ_∥(γ,b)=p_∞^2 and Õ_⊥(γ,b)=0. We get ΔP̃_̃1̃^ν(γ,b)|_ cons=p_∞b^ν/| b |sin(∂Ñ(γ,J)/∂ J) + p_∞^2 L^ν(cos(-∂Ñ(γ,J)/∂ J)-1). In the conservative case, the scattering angle can be extracted by the coefficient of the transverse piece only. A comparison with the general relation between momentum kick and scattering angle <cit.>[The coefficient of sin(χ) is fixed by a quadratic condition. We choose the sign opposite to that of ref. <cit.>.] ΔP̃_̃1̃^ν(γ,b)|_ cons=-p_∞b^ν/| b |sin(χ) + p_∞^2 L^ν(cos(χ)-1), demonstrates that χ =  -∂Ñ(γ,J)∂ J =  - 1/p_∞∂Ñ(γ,b)/∂ b , thus proving the conjectured relation of ref. <cit.> between the scattering angle and the matrix elements of the N-operator. This also shows that the Ñ(γ,J) is the radial action. §.§ Including gravitational radiation We now turn to the impact of gravitational radiation on the expectation value of an operator Ô. We recall that in the KMOC formalism radiation is automatically taken into account in perturbation theory by insertion of a complete set of states (including any number of gravitons) in the pertinent in-in matrix elements. Conventionally done by means of the Born expansion of the T̂-matrix, we here adapt it to the exponential representation. In particular, we use the insertion of the identity operator inside the nested commutators and extract contributions order by order in the gravitational coupling G. To clarify: when going from T̂-matrix elements to N̂-matrix elements we also include terms that are radiative, to arbitrarily high order in the coupling G. What is missing in order to compute the full expectation value of an operator Ô are the pieces that arise from inserting complete sets of states (including gravitons) inside the nested commutators of eq. (<ref>). The discussion will clearly mimic closely the way we evaluated matrix elements of N̂-operator itself. We now consider these additional terms. Since our aim is to derive a recursive relation for the classical limit of an observable, we begin by analyzing the expectation value of Â^Ô_n+1 based on one iteration, ⟨ p_1',p_2'| Â^Ô^μ_n+1|p_1,p_2⟩=1ħ⟨ p_1',p_2'|[N̂, Â^Ô^μ_n]|p_1,p_2⟩. 
Inserting a complete set of states, this has a graphical representation A_n+1^O^μ(p_1,p_2,q) = [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node A_n^O^μ; - [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node A_n^O^μ; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; + [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [black,snake it] (-1.5,0) to (1.5,0); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [red] (0,.3) to (0,-.3); [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node A_n^O^μ; - [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [black,snake it] (-1.5,0) to (1.5,0); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [red] (0,.3) to (0,-.3); [color = black, fill=white, very thick] (-2,0) circle (1cm) node A_n^O^μ; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; +⋯ where the ellipsis represent pieces with insertion of more that one graviton. We stress that this involves the full N̂-operator and in perturbation theory we obviously need to truncate to the given order in G (but for now we keep it general). An additional iteration reads A_n+1^O^μ(p_1,p_2,q) = [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node A_n^O^μ; - [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node A_n^O^μ; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; +[scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-6,0) to (-2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-4,.3) to (-4,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node A_n-1^O^μ; + [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-2,0) to (2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (0,.3) to (0,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node A_n-1^O^μ; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; - 2 [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-6,-1) to (-2,-2); [black,snake it] (-2,-2) to (1.6,-1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,-2.3) to (-2,-1.7); [color = black, fill=white, very 
thick] (-6,0) circle (1cm) node N; [color = black, fill=white, very thick] (-2,0) circle (1cm) node A_n-1^O^μ; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; - [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-6,0) to (-2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-4,.3) to (-4,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N; [color = black, fill=white, very thick] (-2,0) circle (1cm) node A_n-1^O^μ; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; - [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-2,0) to (2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (0,.3) to (0,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N; [color = black, fill=white, very thick] (-2,0) circle (1cm) node A_n-1^O^μ; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; + [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-6,-1) to (-2,-2); [black,snake it] (-2,-2) to (1.6,-1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,-2.3) to (-2,-1.7); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node A_n-1^O^μ; + [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-6,-1) to (-2,-2); [black,snake it] (-2,-2) to (1.6,-1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,-2.3) to (-2,-1.7); [color = black, fill=white, very thick] (-6,0) circle (1cm) node A_n-1^O^μ; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; +... where again the ellipsis indicate diagrams with more than one graviton exchange. By combining and soft-expanding all terms in last four lines we find that their classical parts sum up to zero. This implies that for n≥3 single gravitons cannot be exchanged and we are left with A^O^μ_n(p_1,p_2,q)= [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node A^O^μ_n-1; - [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node A^O^μ_n-1; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; +... This concludes the analysis of single-graviton insertions from the complete set of states. Actually, what we just shown can be generalized to any number of graviton insertions. However, at three-loop level, and as noticed in ref. 
<cit.> in the context of the eikonal, multiple graviton insertions such as [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [black,snake it] (-1.5,0.3) to (1.5,0.3); [black,snake it] (-1.5,-0.3) to (1.5,-0.3); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [red] (0,.15) to (0,.45); [red] (0,-.15) to (0,-.45); [color = black, fill=white, very thick] (-2,0) circle (1cm) node ; [color = black, fill=white, very thick] (2,0) circle (1cm) node ; do not contribute classically. To fourth Post-Minkowskian order we thus only need to consider successions of single gravitons insertions such as [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-6,0) to (-2,0); [black,snake it] (-2,0) to (2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-4,.3) to (-4,-.3); [red] (0,.3) to (0,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node ; [color = black, fill=white, very thick] (-2,0) circle (1cm) node ; [color = black, fill=white, very thick] (2,0) circle (1cm) node ; When we include these radiative pieces we need to enlarge the basis for vector operators. We choose to introduce u_1^μ≡ p_∞m_1 γ p_2^μ-m_2 p_1^μ/m_1^2 m_2(γ^2-1), u_2^μ≡ p_∞m_2 γ p_1^μ-m_1 p_2^μ/m_1 m_2^2(γ^2-1), which satisfy p_i· u_j=p_∞δ_ij. These two four-vectors are related to the ǔ_i's of <cit.> by a rescaling u_i=ǔ_i p_∞/m_i with i=1,2. The vector L^μ of eq. (<ref>) which sufficed to describe the basis in the conservative sector is simply a specific linear combination, L^μ= (u_2^μ-u_1^μ)/ p_∞ . Note that every vector X^μ can be decomposed into X^μ=p_1.X/p_∞u_1^μ+p_2.X/p_∞u_2^μ+ (b.X/|b|)b^μ/|b|≡ X^u_1u_1^μ+X^u_2u_2^μ+X^bb^μ/|b| Similar manipulations to the ones of section <ref> yieelds a compact matrix identity in b-space, now taking into account the radiation effects with at most one graviton exchange. For n≥ 3 we have ⟨ p_1',p_2'|Â^Ô^μ_n+1|p_1,p_2⟩= [ u_1^μ u_2^μ b^μ |b| ] M [ ⟨ p_1',p_2'|Â^Ô_n|p_1,p_2⟩^u_1⟨ p_1',p_2'|Â^Ô_n|p_1,p_2⟩^u_2⟨ p_1',p_2'|Â^Ô_n|p_1,p_2⟩^b ] where we have defined the matrix M≡[ 0 0 i∂Ñ∂ J; 0 0 -i∂Ñ∂ J; - i ℰ_2∂Ñ∂ J iℰ_1∂Ñ∂ J 0 ] and we have introduced the fractionial parts of the Mendelstam variable s=E^2, ℰ_1 and ℰ_2 ℰ_1≡m_1^2+m_1m_2γ m_1^2+m_2^2+2m_1m_2γ; ℰ_2≡ 1-ℰ_1= m_2^2+m_1m_2γ m_1^2+m_2^2+2m_1m_2γ. As we have shown, at fourth Post-Minkowskian order we can write the full result as ΔÕ(γ,b)= ΔÕ_ cons(γ,b )+∑_n=1^∞ΔÕ_ rad^(n)(γ,b) where ΔÕ_ rad^(n) is the contribution coming from the succession of n single-graviton insertions. The conservative part is given by ΔÕ_ cons(γ,b )=[ u_1^μ u_2^μ b^μ |b| ]∑_n ≥ 1(-i)^n/n! M^n-1[ Õ^u_1_1Õ^u_2_1Õ^b_1 ] =i[ u_1^μ u_2^μ b^μ |b| ][ -(∂Ñ∂ Jℰ_1+ℰ_2 sin(∂Ñ∂ J))/∂Ñ∂ J -ℰ_1 (∂Ñ∂ J-sin(∂Ñ∂ J))/∂Ñ∂ J -1+ cos(∂Ñ∂ J)/∂Ñ∂ J; -ℰ_2 (∂Ñ∂ J-sin(∂Ñ∂ J))/∂Ñ∂ J -(ℰ_1 sin(∂Ñ∂ J)+∂Ñ∂ Jℰ_2)/∂Ñ∂ J 1-cos(∂Ñ∂ J)/∂Ñ∂ J; ℰ_2(1-cos(∂Ñ∂ J))/∂Ñ∂ J ℰ_1 (cos(∂Ñ∂ J)-1)/∂Ñ∂ J -sin(∂Ñ∂ J)/∂Ñ∂ J ][ Õ^u_1_1Õ^u_2_1Õ^b_1 ], where we have introduced the operator Ô_1≡ [N̂, Ô]. This is just a different way of writing the conservative result of eq. (<ref>), as can be seen by use of the relations (<ref>) and (<ref>). For the radiative sector we get ΔÕ_ rad^(k)(γ,b)= [ u_1^μ u_2^μ b^μ |b| ]∑_n ≥ k+1(-i)^n/n! M^n-1-k[ Õ^u_1_k+1Õ^u_2_k+1Õ^b_k+1 ] with Ô_k+1 = [N̂,[N̂,…,[N̂,Ô]]]_k+1 times|_k graviton insertions after restricting to k graviton insertions, as explained above. 
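The trigonometric structure of the conservative result above can be traced to a single algebraic property of the matrix M, namely M^3 = (∂Ñ/∂ J)^2 M, which lets the operator series resum into sines and cosines. The following sympy sketch is an illustrative script of our own (not part of the derivation): the symbol a stands for ∂Ñ/∂ J and E1 for ℰ_1, with ℰ_2 = 1 - ℰ_1, and it compares a truncation of the series ∑_{n≥1} (-i)^n M^(n-1)/n! with the closed form quoted above.

import sympy as sp

# a stands for dN/dJ, E1 for the energy fraction E_1; E_2 = 1 - E_1.
a, E1 = sp.symbols('a E1')
E2 = 1 - E1
i = sp.I

# The matrix M from the text, in the (u_1, u_2, b/|b|) basis.
M = sp.Matrix([[0,        0,       i*a],
               [0,        0,      -i*a],
               [-i*E2*a,  i*E1*a,  0  ]])

# The algebraic fact that drives the resummation: M^3 = a^2 * M.
assert (M**3 - a**2*M).applyfunc(sp.expand) == sp.zeros(3, 3)

# Closed form quoted above; the text writes Delta O_cons = i * row * C * column,
# so the operator series sum_{n>=1} (-i)^n M^(n-1)/n! should equal i*C.
C = sp.Matrix([
    [-(a*E1 + E2*sp.sin(a))/a, -E1*(a - sp.sin(a))/a,    (sp.cos(a) - 1)/a],
    [-E2*(a - sp.sin(a))/a,    -(E1*sp.sin(a) + a*E2)/a, (1 - sp.cos(a))/a],
    [ E2*(1 - sp.cos(a))/a,     E1*(sp.cos(a) - 1)/a,    -sp.sin(a)/a     ]])

# Numerical comparison at generic parameter values, truncating the series at 40 terms.
vals = {a: sp.Rational(7, 10), E1: sp.Rational(3, 10)}
S = sum((((-i)**n/sp.factorial(n))*M.subs(vals)**(n - 1) for n in range(1, 41)),
        sp.zeros(3, 3))
diff = (S - i*C.subs(vals)).evalf()
assert max(abs(complex(x)) for x in diff) < 1e-12
print("trigonometric closed form reproduced")

The chosen numerical values of a and ℰ_1 are arbitrary; the same check can be repeated symbolically by expanding both sides in powers of a.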
This is the complete expression to fourth Post-Minkowskian order and it is readily generalized to higher orders. We emphasize again that the terminology of conservative and radiative pieces is completely artificial. There are also radiative modes in what we for historical reasons call the conservative part. This was already obvious at two-loop level where it was shown in refs. <cit.> that the two-to-two matrix element of N̂-operator yields the full result, including radiation reaction, to that order. We now understand why this phenomenon does not generalize to higher orders, and we understand how to correct for it. There are still many radiative modes and radiation-reaction parts in just the two-to-two matrix element of N̂-operator and therefore those matrix elements are far from being just conservative. §.§.§ Full momentum kick at fourth Post-Minkowskian order We now turn to the full explicit evaluation of the momentum kick Δ P_1^μ at fourth Post-Minkowskian order. As a building block we will first need to first compute Ñ(γ,b). This was already done in ref. <cit.> up to 4PM order (except for one term which we take the opportunity to correct here) so that what we label the conservative piece Δ P_1^μ|_ cons.=[ u_1^μ u_2^μ b^μ |b| ][ p_∞(1- cos(χ_cons))p_∞(cos(χ_cons)-1)-p_∞sin(χ_cons) ] is known. Here it is convenient to introduce the following notation χ_cons≡-∂Ñ/∂ J and define the PM-expanded quantities χ_cons≡∑_n=0^∞ G^n+1χ^(n)_cons as well as Ñ≡∑_n=0^∞ G^n+1Ñ^(n) so that at fourth Post-Minkowskian order we have Δ P_1^μ,4PM|_ cons. =p_∞ G^4 [ u_1^μ u_2^μ b^μ |b| ]G^4[ - (χ_cons^(0))^4/24+(χ_cons^(1))^2/2+ χ_cons^(0)χ_cons^(2) (χ_cons^(0))^4/24-(χ_cons^(1))^2/2- χ_cons^(0)χ_cons^(2)(χ_cons^(0))^2χ_cons^(1)/2-χ_cons^(3) ] Starting at third Post-Minkowskian order we need to also evaluate the first radiation contribution to the momentum kick ΔP̃_1, rad^μ (1). We thus need the building block P̃_1,1^μ=⟨ p_1',p_2' | [N̂,[N̂,P̂_1^μ]]|p_1,p_2⟩ evaluated with one-graviton insertions. This reads P̃_1,1^μ=FT[∫d^D q_1 d^D q_2/(2π)^2D-4⟨ p_1',p_2' |N̂| p_1+q_1,p_2-q_2,q_2-q_1⟩ (-q^μ-2q_1^μ) ×⟨ p_1+q_1,p_2-q_2,q_2-q_1 |N̂| p_1,p_2⟩δ(2p_1 · q_1)δ(-2p_2 · q_2)δ((q_2-q_1)^2)] Where again, for compactness of notation, we label the Fourier Transform into b-space by FT. Its precise definition is given in eq. (<ref>). Note that this integral is orthogonal to p_1, i.e. p_1μ⟨ p_1',p_2'|[N̂,[N̂,P̂_1^μ]]|p_1,p_2⟩ =0 , so that it can be decomposed according to P̃_1,1^μ=[ u_1^μ u_2^μ b^μ |b| ][ 0 P̃_1,1^u_2P̃_1,1^b ] Based on the analysis of ref. <cit.> we know that the coefficients have the following perturbative expansion P̃_1,1^u_2 =G^3 P̃_1,1^u_2,(2) +G^4 P̃_1,1^u_2,(3) +𝒪(G^5), P̃_1,1^b =G^4 P̃_1,1^b,(3)+𝒪(G^5). so that ΔP̃_1, rad^ν,(1)=G^3 [ u_1^μ u_2^μ b^μ |b| ][ 0; -P̃_1,1^u_2,(2)/2; 0 ]+G^4 [ u_1^μ u_2^μ b^μ |b| ][ 0; -P̃_1,1^u_2,(3)/2; ℰ_1 χ_cons^(1)P̃_1,1^u_2,(2)/6 -P̃_1,1^b,(3)/2 ]+𝒪(G^5) Note in particular that P̃_1,1^b only receives a contribution from order O(G^4), the 4PM order. As mentioned above, the 3PM case is therefore quite special in that all radiative effects are entirely contained in the classical contribution from the N̂-operator <cit.>. The momentum kick due to radiation at 3PM order only shifts the longitudinal momenta. Starting at fourth Post-Minkowskian order we need to also evaluate of the first radiative contribution to the momentum kick ΔP̃_1, rad^μ (2) which, as indicated, involves the insertion of two graviton lines. 
This contribution is more tricky and is diagrammatically represented by [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-6,0) to (-2,0); [black,snake it] (-2,0) to (2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-4,.3) to (-4,-.3); [red] (0,.3) to (0,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node ; [color = black, fill=white, very thick] (-2,0) circle (1cm) node ; [color = black, fill=white, very thick] (2,0) circle (1cm) node ; which has two pieces at 4PM order: [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] (-8,.8) to (-6,.8); [black,very thick] [->] (2,.8) to (4,.8); [black,snake it] [->] (-6,.8) to (2,.8); [black,very thick] (-2,1.5) to (2,1); [black, very thick] (-2,1.5) to (-6,1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,1.8) to (-2,1.2); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_0; [color = black, fill=white, very thick] (2,0) circle (1cm) node N_0^ rad; , [scale=0.5] [black,very thick] [->] (-8,.8) to (4,.8); [black,very thick] (-8,-.8) to (-6,-.8); [black,very thick] [->] (2,-.8) to (4,-.8); [black,snake it] [->] (-6,-.8) to (2,-.8); [black,very thick] (-2,-1.5) to (2,-1); [black, very thick] (-2,-1.5) to (-6,-1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,-1.8) to (-2,-1.2); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_0; [color = black, fill=white, very thick] (2,0) circle (1cm) node N_0^ rad; giving rise to the elementary building block P̃_1,2^μ=⟨ p_1',p_2' | [N̂,[N̂,[N̂,P̂_1^μ]]] |p_1,p_2⟩ evaluated with two-graviton insertions. This is P̃_1,2^μ =G^4 FT[q^μ⟨ p_1',p_2' |N̂_0^radN̂_0N̂_0^rad| p_1,p_2⟩] +G^4 FT[∫d^D q_1 d^D q_2 d^D q_3/(2π)^3D-6 (-3 q_1^μ+3 q_3^μ)⟨ p_1',p_2' |N̂_0^rad| p_1+q_3,p_2-q_2,q_2-q_3⟩ ×⟨ p_1+q_3,q_2-q_3 |N̂_0 | p_1+q_1,q_2-q_1⟩δ(2p_1 · q_3)δ(-2p_2 · q_2)δ^(+)((q_2-q_3)^2) ×⟨ p_1+q_1,p_2-q_2, q_2-q_1 |N̂_0^rad| p_1,p_2⟩δ(2p_1 · q_1)δ^(+)((q_2-q_1)^2)]+O(G^5) ≡ 6 G^4 FT[q^μ L_2(γ,q^2)]+G^4 P̃_1,2^μ,(3)+O(G^5) = 6 i G^4 p_∞b^μ/|b|∂L̃_2(γ,J)/∂ J+G^4 P̃_1,2^μ,(3)+O(G^5) so that its contribution to the momentum kick becomes ΔP̃_1, rad^ν,(2)=G^4 [ u_1^μ u_2^μ b^μ |b| ][ 0; i/6P̃_1,2^u_2,(3); -p_∞∂L̃_2(γ,J)/∂ J+ i/6P̃_1,2^b,(3) ]+𝒪(G^5) Combining all pieces, the full fourth-order momentum kick is thus given by ΔP̃_1^ν,4PM= G^4 [ u_1^μ u_2^μ b^μ |b| ][ p_∞(- (χ_cons^(0))^4/24+(χ_cons^(1))^2/2+ χ_cons^(0)χ_cons^(2)); p_∞( (χ_cons^(0))^4/24-(χ_cons^(1))^2/2- χ_cons^(0)χ_cons^(2))-P̃_1,1^u_2,(3)/2+i/6P̃_1,2^u_2,(3); p_∞((χ_cons^(0))^2χ_cons^(1)/2-χ_cons^(3)-∂L̃_2(γ,J)/∂ J)+ℰ_1 χ_cons^(1)P̃_1,1^u_2,(2)/6 -P̃_1,1^b,(3)/2 + i/6P̃_1,2^b,(3) ] We note the partial recycling of lower-order terms here, a feature that generalizes to higher orders as well. § DETAILS ON THE 4PM CALCULATION §.§ The construction of the integrands To perform the full explicit computation of the momentum, we need to compute only three integrands giving Ñ^(3), P̃_1,1^μ,(3) and P̃_1,2^μ,(3). 
The three integrands can be represented as [scale=0.5] [black,very thick] [->] (-2,-.8) to (2,-.8); [black,very thick] [->] (-2,.8) to (2,.8); [color = black, fill=white, very thick] (0,0) circle (1cm) node N_3; -(q^μ+2q_1^μ) [scale=0.5] [black,very thick] [->] (-8,-.8) to (0,-.8); [black,very thick] [->] (-8,.8) to (0,.8); [black,snake it] (-5,0) to (-2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (-4,.3) to (-4,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_1^ rad; -(q^μ+2q_1^μ) [scale=0.5] [black,very thick] [->] (-8,-.8) to (0,-.8); [black,very thick] [->] (-8,.8) to (0,.8); [black,snake it] (-5,0) to (-2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (-4,.3) to (-4,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_1^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_0^ rad; 3(q_3^μ-q_1^μ) [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] (-8,.8) to (-6,.8); [black,very thick] [->] (2,.8) to (4,.8); [black,snake it] [->] (-6,.8) to (2,.8); [black,very thick] (-2,1.5) to (2,1); [black, very thick] (-2,1.5) to (-6,1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,1.8) to (-2,1.2); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_0; [color = black, fill=white, very thick] (2,0) circle (1cm) node N_0^ rad; We compute these from generalized unitarity and velocity cuts, selecting topologies that both have three velocity cuts and respect the conditions on the on-shell gravitons when imposed by the topology. §.§ The integration basis At fourth Post-Minkowskian order the computation of the momentum kick is expanded on two sets of master integrals. A first family of master integrals has delta-function constraints on the massive legs and one graviton propagator as depicted in fig. <ref>(a) 𝒥({n_j},{±,±,±};γ,ϵ)= ∫δ(2v_1·ℓ_1) δ(2v_1 · (ℓ_1+ℓ_2+ℓ_3)) δ(2v_2 · (ℓ_1+ℓ_2))δ(ℓ_2^2)/∏_i=1^12D_i^n_i∏_r=1^3 d^4-2ϵℓ_i (2π)^3-2ϵ. where we have defined the propagators D_1 =ℓ_1^2, D_2=ℓ_2^2, D_3=ℓ_3^2, D_4 =(ℓ_1+ℓ_2)^2, D_5=(ℓ_2+ℓ_3)^2 , D_6=(ℓ_1+ℓ_2+ℓ_3)^2, D_7 =(ℓ_1+q̂)^2, D_8=(ℓ_1+ℓ_2+q̂)^2, D_8=(ℓ_1+ℓ_2+ℓ_3+q̂)^2 , D_9^± =± 2 v_1 · (ℓ_1+ℓ_2)+iε, D_10^±=± 2v_2 ·ℓ_1+iε , D_11^±=± 2v_2 · (ℓ_1+ℓ_2+ℓ_3)+iε, with q̂^2=-1, and v_i=p_i/m_i such that v_i^2=1 and v_i· q=0 for i=1,2, γ=v_1· v_2. Tensorial reductions are conveniently performed using LiteRed <cit.> which has by default the Feynman +iε prescription. We find that for the set of master integrals in (<ref>) the basis needed for longitudinal pieces has dimension 54, and the one for the transverse pieces has the same dimension. These master integrals have a delta-function for one of the graviton propagators as required in the one-graviton radiative sector analyzed in section <ref>. This delta-function breaks the symmetry between l_2 and l_3 compared to the other basis. In the conservative sector of section <ref> it is enough to use the smaller set of master integrals represented in figure <ref>(b) given by ℐ({n_j},{±,±,±};γ,ϵ)= ∫δ(2v_1·ℓ_1) δ(2v_1 · (ℓ_1+ℓ_2+ℓ_3)) δ(2v_2 · (ℓ_1+ℓ_2))/∏_i=1^12D_i^n_i∏_r=1^3 d^4-2ϵℓ_i (2π)^3-2ϵ. The tensorial reduction gives a basis of dimension 40. 
This basis are also sufficient to compute the second radiative term P̃_1,2^μ,(3), which differs only by the boundary conditions we impose in the static γ=1 limit. The world-line computation of <cit.> uses master integrals with delta-function velocity cuts on three massive propagators at fourth Post-Minkowskian order. But they have either Feynman or retarded or advanced propagators and in the end use a total of 576 master integrals. Converting the retarded (respectively advanced) propagator to a Feynman propagator using i (ℓ_0± iϵ)^2-ℓ⃗^2= iℓ_0^2-ℓ⃗^2+iϵ∓πδ(ℓ_0) θ(∓ℓ_0) allows to expand the master integrals used in <cit.> on the basis of master integrals in (<ref>). As in ref. <cit.> we compute the integrals by solving three differential systems of sizes 40× 40, 54×54 and 54 × 54 respectively. There are three regions of integration, potential-potential (PP), potential-radiation (PR) and radiation-radiation (RR). We expand all master integrals in each of these regions, which gives boundary datas to solve and check the solution of the differential systems. In the end, each master integral can be expanded on 9 independent static master integrals (6 for the transverse pieces, 3 for the longitudinal contributions) as ℐ^⊥(γ)=∑_j=1^3 c_PP,⊥^j(γ) I_PP,⊥^j +(4(γ^2-1))^-ϵ∑_j=1^2 c_PR,⊥^j(γ) I_PR,⊥^j+(4(γ^2-1))^-2ϵc_RR,⊥(γ) I_RR,⊥ and ℐ^∥(γ)=(4(γ^2-1))^-ϵ∑_j=1^2 c_PR,∥^j(γ) I_PR,∥^j+(4(γ^2-1))^-2ϵc_RR,∥(γ) I_RR,∥ The final step is then to compute each static master integral with the correct constraint on its graviton propagator (Feynman propagator or delta-function) according to the integrand it going to contribute to. §.§ The final result for the 4PM momentum kick §.§.§ The N-matrix elements For the so-called conservative part (the N-matrix elements), we first recall the results up to 3PM order, Ñ^(0)=-2Gm_1 m_2(2γ^2-1)/√(γ^2-1)Γ(-ϵ)J^2ϵ Ñ^(1)=3π G^2m_1^2 m_2^2(m_1+m_2)(5γ^2-1)/4√(s)1/J Ñ^(2)=G^3 m_1^3 m_2^3 √(γ^2-1)/s(s(64 γ^6 - 120 γ^4 + 60 γ^2-5)/3(γ^2-1)^2-4/3 m_1 m_2 γ (14 γ^2+25) +4 m_1 m_2(3+12 γ^2-4 γ^4) arccosh(γ)/√(γ^2-1) +2m_1 m_2(2γ^2-1)^2/√(γ^2-1)(8-5γ^2/3(γ^2-1)+γ(-3+2γ^2) arccosh(γ)/(γ^2-1)^3/2) )1/J^2 Almost all of this 4PM part of Ñ was already computed in ref. <cit.>, except for one term which we correct here. The velocity cuts automatically eliminate super-classical terms, so that the generalized unitarity integrand arises directly from ⟨ p_1',p_2'| T̂_3 |p_1,p_2⟩+L_0. To this we must add L_1 which precisely cancel the imaginary radiation pieces as at 3PM order. Note also that the real piece from L_1 is canceled by a similar computation as we did in section 3.2. At the end we get Ñ^(3)=Ñ_PP+RR^(3)+Ñ_PR^(3)+L̃_2 with Ñ^(3)_PP+RR=-G^4 (m_1+m_2)^3 m_1^4 m_2^4 π (γ^2-1)/8 s^3/2( ℳ_4^p+ν (4 ℳ^t_4 log(√(γ^2-1)/2)+ ℳ^π^2_4+ ℳ^rem_4))1/J^3 Ñ^(3)_PR=-G^4 (m_1+m_2)^3 m_1^4 m_2^4 π (γ^2-1)/8s^3/2(6πν(2γ^2-1)(5γ^2-1)ℐ(γ)/√(γ^2-1)) 1/J^3 and ℐ(γ) ≡(16-10γ^2/3γ^2-1+2γ(-3+2γ^2) arccosh(γ)/γ^2-1) where for convenience of the reader we have separated the pieces in terms of regions of integration (potential P and radiation R) and used the same notations as in ref. <cit.>. Note that, as already observed in a different context in ref. <cit.>, the L_2 Compton-like term that we have in the conservative piece will exactly cancel the one in the second radiative piece. §.§.§ The first radiation piece At 3PM order the value of the coefficient of the first radiation piece can be extracted from ref. 
<cit.> P̃_1,1^u_2,(2)=-2 m_1^2 m_2^3 p_∞^2/J^3ℰ(γ) with ℰ(γ)/π≡1151 - 3336 γ + 3148 γ^2 - 912 γ^3 + 339 γ^4 - 552 γ^5 + 210 γ^6/48 (γ^2-1)^3/2 + γ (-3 + 2 γ^2) (11 - 30 γ^2 + 35 γ^4)/ 16 (γ^2-1)^2arccosh(γ)- -5 + 76 γ - 150 γ^2 + 60 γ^3 + 35 γ^4/8 √(γ^2-1)log(1 + γ/2) while at 4PM order we have performed the computation and find for the longitudinal part P̃_1,1^u_2,(3) =2 m_1^2 m_2^3 p_∞^3/J^4((m_1 g[1]+m_2 h[1])π^2/192 (γ^2-1)^2+ m_1 g[2]+m_2 h[2]/705600 γ^8 (γ^2-1)^5/2 +(m_1g[3]+m_2 h[3]/6720 γ^9(γ^2-1)^3+(m_1g[4]+m_2h[4]) log(2)/8(γ^2-1)^2)arccosh(γ) +(m_1 g[5]+m_2 h[5]/(γ^2-1)^7/2+m_1g[6]+m_2 h[6] /(γ^2-1)^2)arccosh^2(γ)+m_1g[7]+m_2 h[7]/8(γ^2-1)^2arccosh(γ) log(γ) +m_1g[8]+m_2 h[8]/8(γ^2-1)^2(arccosh(γ) log(1+γ/2)-2 _2(-γ+√(γ^2-1))) +m_1g[9]+m_2 h[9]/32(γ^2-1)^2(_2(γ-1/γ+1)-4_2(√(γ-1/γ+1))) -m_1g[10]+m_2 h[10]/16 (γ^2-1)^2_2(-(γ-√(γ^2-1))^2)) with g[1] =γ (-1485 + 4993 γ^2 - 3195 γ^4 + 1575 γ^6) g[2] =385875 - 1837500 γ^2 + 7188300 γ^4 - 21241500 γ^6 + 767410066 γ^8 + 3966858415 γ^10 - 3429240286 γ^12 - 791542442 γ^14 + 393897472 γ^16 g[3] =3675 - 19950 γ^2 + 79800 γ^4 - 246540 γ^6 + 222810 γ^8 - 25426269 γ^10 - 37185456 γ^12 + 46406238 γ^14 + 2662204 γ^16 - 3592192 γ^18 g[4] =1263 - 3883 γ^2 + 1065 γ^4 - 525 γ^6 g[5] =32 γ^2 (60 + 35 γ^2 - 59 γ^4 + 4 γ^8) g[6] =8 γ (-9 + 26 γ^2) g[7] =γ (1041 - 2773 γ^2 - 1065 γ^4 + 525 γ^6) g[8] =3 (37 γ - 185 γ^3 + 355 γ^5 - 175 γ^7) g[9] =6 (6 - 37 γ - 66 γ^2 + 185 γ^3 + 210 γ^4 - 355 γ^5 - 150 γ^6 + 175 γ^7) g[10] =γ (1041 - 2773 γ^2 - 1065 γ^4 + 525 γ^6) h[1] =2 (2075 + 17367 γ^2 + 5553 γ^4 - 6819 γ^6) h[2] =490 γ (1575 - 8250 γ^2 + 35710 γ^4 - 142640 γ^6 - 5560073 γ^8 - 417302 γ^10 + 4034092 γ^12 - 587336 γ^14 + 6144 γ^16) h[3] =14 γ (525 - 3100 γ^2 + 13690 γ^4 - 55260 γ^6 + 816595 γ^8 + 3752006 γ^10 - 1978290 γ^12 - 1029342 γ^14 + 213480 γ^16 + 24576 γ^18) h[4] =-2 (2057 + 15261 γ^2 + 3387 γ^4 - 4321 γ^6) h[5] =-32 γ (-3 + 2 γ^2) (-8 - 51 γ^2 - 6 γ^4 + 8 γ^6) h[6] =16 (16 + 111 γ^2 + 18 γ^4 - 24 γ^6) h[7] =-2 (2039 + 13155 γ^2 + 1221 γ^4 - 1823 γ^6) h[8] =-2 (9 + 1053 γ^2 + 1083 γ^4 - 1249 γ^6) h[9] =6 (36 - 1209 γ + 4212 γ^2 - 6422 γ^3 + 4332 γ^4 + 1755 γ^5 - 4996 γ^6 + 2100 γ^7) h[10] =-2 (2039 + 13155 γ^2 + 1221 γ^4 - 1823 γ^6) For the transverse part we find P̃_1,1^b,(3)=-2m_1^2 m_2^2 p_∞^4/J^4((-2 γ^2-1/γ^2-1𝒞(γ)+γ(-3+2γ^2)/(γ^2-1)^3/2ℰ(γ))(m_1+m_2)+2γ^2-1/(γ+1)√(γ^2-1)ℰ(γ) m_1) with 𝒞(γ)/π≡-237 + 386 γ + 111 γ^2 - 683 γ^3 + 537 γ^4 + 240 γ^5 - 411 γ^6 + 105 γ^7/24 (γ^2-1)^2 -γ (-3 + 2 γ^2) (-12 + 19 γ + 72 γ^2 - 70 γ^3 - 60 γ^4 + 35 γ^5) /8 (γ^2-1)^5/2arccosh(γ) + -62 + 155 γ + 16 γ^2 - 70 γ^3 - 90 γ^4 + 35 γ^5/4 (γ^2-1)log(1 + γ/2) §.§.§ The second radiation piece The contributions from the second radiation piece matches exactly the result of ref. <cit.> with P̃_1,3^b,(3)=-6 i p_∞^4/J^4 c_4b,2rad^(4) diss and P̃_1,3^u_2,(3)=-6 i m_2 p_∞^3/J^4 c_4b,2rad^(4) diss Finally, when inserting all integrals into the formula of eq. (<ref>) we find complete agreement with ref. <cit.>. This amplitude-based approach, which combines the exponential representation of the gravitational S-matrix with the KMOC formalism, thus yields a result for the momentum kick that is in full agreement with the worldline calculation of ref. <cit.>. § CONCLUSION The exponential representation of the S-matrix <cit.> is a natural starting point for a semi-classical analysis of quantum field theory. 
Matrix elements of the N̂-operator in the exponent of the S-matrix are by construction free of superclassical terms and they are, therefore, at leading order providing the classical part, followed by quantum corrections. Using the KMOC-formalism, we have shown how the exponential representation of the S-matrix makes manifest the cancellation of superclassical contributions in the conservative sector. One advantage of working with the N̂-matrix rather than the conventional T̂-matrix is indeed that it bypasses the need to ensure the delicate cancellation between superclassical terms of the T̂-matrix. Instead, by extracting the relevant pieces of the N̂-matrix by means of velocity cuts we automatically retrieve the classical terms. Pictorially speaking, the velocity cuts introduced in <cit.> localize the massive scattering states on classical on-shell trajectories. As shown in section <ref> of the present paper, two-to-two massive matrix elements of the N̂-operator, Fourier-transformed to impact parameter space, is the radial action of the conservative sector. This proves the conjectured relation put forward in ref. <cit.>. Including gravitational radiation, the N̂-operator is still a basic building block of the KMOC-formalism and as an example we have shown how the momentum kick in the scattering of two black holes can be compactly described by matrix elements of N̂. We have provided the explicit formulas up to and including fourth Post-Minkowskian order but the framework is iterative and it is straightforward to derive corresponding expressions to arbitrarily high order in Newton's constant G. As an application we have explicitly derived the momentum kick at fourth post-Minkowskian order. Our results are in agreement with <cit.>. As is well known, and somewhat disturbingly, it leads to a scattering angle that diverges at high energy if one applies the scattering angle expression of ref. <cit.>. The solution for the integrals used here and in the references above is the one connecting smoothly to the Post-Newtonian expansion. We cannot exclude that another solution exists which is valid at high energy only and without a smooth connection to the Post-Newtonian limit. This possibility seems to deserve attention. Alternatively, one could consider doing a new fourth-order calculation from scratch with massless scalars. The resulting relationship between the KMOC-formalism and the exponential representation of the S-matrix is very simple and of a universal form involving trigonometric functions together with iterated commutators. This trigonometric structure arises from N̂ being the exponential phase operator of the S-matrix and is thus closely linked to the Euler formula. Beyond the conservative parts, the operator identities involved lead to additional terms but the structure of nested commutators is responsible for the simple algebraic relations that iteratively build up observables to higher and higher orders in the gravitational coupling constant. In the end, the expression for classical observables including all dissipative effects becomes remarkably simple by combining the KMOC formalism with the exponential representation of the S-matrix. The full calculation reduces to scattering amplitude evaluations for which modern techniques have become highly developed. There is thus no need to distinguish between different pieces or to separate the amplitude calculation into different types of contributions; one must only retain all classical terms, as this provides the full classical answer. 
§.§ Acknowledgements We thank Thibault Damour for comments. P.V. would like to thank the LAPTh for the hospitality during the completion of this work. The work of P.H.D. was supported in part by DFF grant 0135-00089A, the work of E.R.H. was supported by the Rozenthal Foundation and ERC Starting Grant No. 757978 from the European Research Council, and the research of P.V. has received funding from the ANR grant “SMAGP” ANR-20-CE40-0026-01. 99Damour:2016gwp T. Damour, “Gravitational scattering, Post-Minkowskian approximation and Effective One-Body theory,” Phys. Rev. D 94 (2016) no.10, 104015; [arXiv: 1609.00354 [gr-qc]]. Damour:2017zjx T. Damour, “High-energy gravitational scattering and the general relativistic two-body problem,” Phys. Rev. D 97 (2018) no.4, 044038; [arXiv:1710.10599 [gr-qc]]. Bjerrum-Bohr:2018xdl N. E. J. Bjerrum-Bohr, P. H. Damgaard, G. Festuccia, L. Planté and P. Vanhove, “General Relativity from Scattering Amplitudes,” Phys. Rev. Lett. 121 (2018) no.17, 171601; [arXiv:1806.04920 [hep-th]]. Cheung:2018wkq C. Cheung, I. Z. Rothstein and M. P. Solon, “From Scattering Amplitudes to Classical Potentials in the Post-Minkowskian Expansion,” Phys. Rev. Lett. 121 (2018) no.25, 251101; [arXiv:1808.02489 [hep-th]]. Damour:2019lcq T. Damour, “Classical and Quantum Scattering in Post-Minkowskian Gravity,” Phys. Rev. D 102 (2020) no.2, 024060 [arXiv:1912.02139 [gr-qc]]. Westpfahl:1985tsl K. Westpfahl, “High-Speed Scattering of Charged and Uncharged Particles in General Relativity,” Fortsch. Phys. 33 (1985) no.8, 417-493 doi:10.1002/prop.2190330802 Bern:2019nnu Z. Bern, C. Cheung, R. Roiban, C. H. Shen, M. P. Solon and M. Zeng, “Scattering Amplitudes and the Conservative Hamiltonian for Binary Systems at Third Post-Minkowskian Order,” Phys. Rev. Lett. 122 (2019) no.20, 201603; [arXiv:1901.04424 [hep-th]]. Bern:2019crd Z. Bern, C. Cheung, R. Roiban, C. H. Shen, M. P. Solon and M. Zeng, “Black Hole Binary Dynamics from the Double Copy and Effective Theory,” JHEP 10 (2019), 206 [arXiv:1908.01493 [hep-th]]. DiVecchia:2020ymx P. Di Vecchia, C. Heissenberg, R. Russo and G. Veneziano, “Universality of Ultra-Relativistic Gravitational Scattering,” Phys. Lett. B 811 (2020), 135924 [arXiv:2008.12743 [hep-th]]. Damour:2020tta T. Damour, “Radiative Contribution to Classical Gravitational Scattering at the Third Order in G,” Phys. Rev. D 102 (2020) no.12, 124008 [arXiv:2010.01641 [gr-qc]]. Bini:2021gat D. Bini, T. Damour and A. Geralico, “Radiative Contributions to Gravitational Scattering,” Phys. Rev. D 104 (2021) no.8, 084031 [arXiv:2107.08896 [gr-qc]]. DiVecchia:2021ndb P. Di Vecchia, C. Heissenberg, R. Russo and G. Veneziano, “Radiation Reaction from Soft Theorems,” [arXiv:2101.05772 [hep-th]]. DiVecchia:2021bdo P. Di Vecchia, C. Heissenberg, R. Russo and G. Veneziano, “The Eikonal Approach to Gravitational Scattering and Radiation at 𝒪(G^3),” [arXiv:2104.03256 [hep-th]]. Amati:1993tb D. Amati, M. Ciafaloni and G. Veneziano, “Effective action and all order gravitational eikonal at Planckian energies,” Nucl. Phys. B 403 (1993), 707-724 Bjerrum-Bohr:2021vuf N. E. J. Bjerrum-Bohr, P. H. Damgaard, L. Planté and P. Vanhove, “Classical gravity from loop amplitudes,” Phys. Rev. D 104 (2021) no.2, 026009 [arXiv:2104.04510 [hep-th]]. Bjerrum-Bohr:2021din N. E. J. Bjerrum-Bohr, P. H. Damgaard, L. Planté and P. Vanhove, “The amplitude for classical gravitational scattering at third Post-Minkowskian order,” JHEP 08 (2021), 172 [arXiv:2105.05218 [hep-th]]. Bjerrum-Bohr:2022blt N. E. J. Bjerrum-Bohr, P. H. 
Damgaard, L. Plante and P. Vanhove, “The SAGEX Review on Scattering Amplitudes, Chapter 13: Post-Minkowskian expansion from scattering amplitudes,” J. Phys. A 55 (2022) no.44, 443014 doi:10.1088/1751-8121/ac7a78 [arXiv:2203.13024 [hep-th]]. Bjerrum-Bohr:2022ows N. E. J. Bjerrum-Bohr, L. Planté and P. Vanhove, “Effective Field Theory and Applications: Weak Field Observables from Scattering Amplitudes in Quantum Field Theory,” [arXiv:2212.08957 [hep-th]]. Cristofoli:2020uzm A. Cristofoli, P. H. Damgaard, P. Di Vecchia and C. Heissenberg, “Second-order Post-Minkowskian scattering in arbitrary dimensions,” JHEP 07 (2020), 122; [arXiv:2003.10274 [hep-th]]. Cristofoli:2019neg A. Cristofoli, N. E. J. Bjerrum-Bohr, P. H. Damgaard and P. Vanhove, “Post-Minkowskian Hamiltonians in general relativity,” Phys. Rev. D 100 (2019) no.8, 084040 [arXiv:1906.01579 [hep-th]]. Bern:2021dqo Z. Bern, J. Parra-Martinez, R. Roiban, M. S. Ruf, C. H. Shen, M. P. Solon and M. Zeng, “Scattering Amplitudes and Conservative Binary Dynamics at O(G^4),” Phys. Rev. Lett. 126 (2021) no.17, 171601 [arXiv:2101.07254 [hep-th]]. Damgaard:2021ipf P. H. Damgaard, L. Plante and P. Vanhove, “On an exponential representation of the gravitational S-matrix,” JHEP 11 (2021), 213 [arXiv:2107.12891 [hep-th]]. Bjerrum-Bohr:2021wwt N. E. J. Bjerrum-Bohr, L. Planté and P. Vanhove, “Post-Minkowskian radial action from soft limits and velocity cuts,” JHEP 03 (2022), 071 [arXiv:2111.02976 [hep-th]]. Bern:2021yeh Z. Bern, J. Parra-Martinez, R. Roiban, M. S. Ruf, C. H. Shen, M. P. Solon and M. Zeng, “Scattering Amplitudes, the Tail Effect, and Conservative Binary Dynamics at O(G4),” Phys. Rev. Lett. 128 (2022) no.16, 161103 [arXiv:2112.10750 [hep-th]]. Collado:2018isu A. K. Collado, P. Di Vecchia, R. Russo and S. Thomas, “The subleading eikonal in supergravity theories,” JHEP 10 (2018), 038 [arXiv:1807.04588 [hep-th]]. KoemansCollado:2019ggb A. Koemans Collado, P. Di Vecchia and R. Russo, “Revisiting the Second Post-Minkowskian Eikonal and the Dynamics of Binary Black Holes,” Phys. Rev. D 100 (2019) no.6, 066028 [arXiv:1904.02667 [hep-th]]. DiVecchia:2019myk P. Di Vecchia, A. Luna, S. G. Naculich, R. Russo, G. Veneziano and C. D. White, “A tale of two exponentiations in N=8 supergravity,” Phys. Lett. B 798 (2019), 134927 [arXiv:1908.05603 [hep-th]]. DiVecchia:2019kta P. Di Vecchia, S. G. Naculich, R. Russo, G. Veneziano and C. D. White, “A Tale of Two Exponentiations in 𝒩 = 8 Supergravity at Subleading Level,” JHEP 03 (2020), 173 [arXiv:1911.11716 [hep-th]]. Bern:2020gjj Z. Bern, H. Ita, J. Parra-Martinez and M. S. Ruf, “Universality in the classical limit of massless gravitational scattering,” Phys. Rev. Lett. 125 (2020) no.3, 031601 [arXiv:2002.02459 [hep-th]]. Parra-Martinez:2020dzs J. Parra-Martínez, M. S. Ruf and M. Zeng, “Extremal Black Hole Scattering at 𝒪(G^3): Graviton Dominance, Eikonal Exponentiation, and Differential Equations,” JHEP 11 (2020), 023 [arXiv:2005.04236 [hep-th]]. DiVecchia:2022owy P. Di Vecchia, C. Heissenberg and R. Russo, “Angular momentum of zero-frequency gravitons,” JHEP 08 (2022), 172 [arXiv:2203.11915 [hep-th]]. DiVecchia:2022piu P. Di Vecchia, C. Heissenberg, R. Russo and G. Veneziano, “Classical Gravitational Observables from the Eikonal Operator,” [arXiv:2210.12118 [hep-th]]. Heissenberg:2022tsn C. Heissenberg, “Angular Momentum Loss Due to Tidal Effects in the Post-Minkowskian Expansion,” [arXiv:2210.15689 [hep-th]]. DiVecchia:2023frv P. Di Vecchia, C. Heissenberg, R. Russo and G. 
Veneziano, “The gravitational eikonal: from particle, string and brane collisions to black-hole encounters,” [arXiv:2306.16488 [hep-th]]. Kosower:2018adc D. A. Kosower, B. Maybee and D. O'Connell, “Amplitudes, Observables, and Classical Scattering,” JHEP 02 (2019), 137 [arXiv:1811.10950 [hep-th]]. Maybee:2019jus B. Maybee, D. O'Connell and J. Vines, “Observables and amplitudes for spinning particles and black holes,” JHEP 12 (2019), 156 [arXiv:1906.09260 [hep-th]]. Cristofoli:2021vyo A. Cristofoli, R. Gonzo, D. A. Kosower and D. O'Connell, “Waveforms from amplitudes,” Phys. Rev. D 106 (2022) no.5, 056007 [arXiv:2107.10193 [hep-th]]. Cristofoli:2021jas A. Cristofoli, R. Gonzo, N. Moynihan, D. O'Connell, A. Ross, M. Sergola and C. D. White, “The Uncertainty Principle and Classical Amplitudes,” [arXiv:2112.07556 [hep-th]]. Herrmann:2021lqe E. Herrmann, J. Parra-Martinez, M. S. Ruf and M. Zeng, “Gravitational Bremsstrahlung from Reverse Unitarity,” Phys. Rev. Lett. 126 (2021) no.20, 201602 [arXiv:2101.07255 [hep-th]]. Herrmann:2021tct E. Herrmann, J. Parra-Martinez, M. S. Ruf and M. Zeng, “Radiative classical gravitational observables at 𝒪(G^3) from scattering amplitudes,” JHEP 10 (2021), 148 [arXiv:2104.03957 [hep-th]]. Kalin:2020mvi G. Kälin and R. A. Porto, “Post-Minkowskian Effective Field Theory for Conservative Binary Dynamics,” JHEP 11 (2020), 106 [arXiv:2006.01184 [hep-th]]. Kalin:2020fhe G. Kälin, Z. Liu and R. A. Porto, “Conservative Dynamics of Binary Systems to Third Post-Minkowskian Order from the Effective Field Theory Approach,” Phys. Rev. Lett. 125 (2020) no.26, 261103 [arXiv:2007.04977 [hep-th]]. Kalin:2020lmz G. Kälin, Z. Liu and R. A. Porto, “Conservative Tidal Effects in Compact Binary Systems to Next-to-Leading Post-Minkowskian Order,” Phys. Rev. D 102 (2020), 124025 [arXiv:2008.06047 [hep-th]]. Mogull:2020sak G. Mogull, J. Plefka and J. Steinhoff, “Classical black hole scattering from a worldline quantum field theory,” JHEP 02 (2021), 048 [arXiv:2010.02865 [hep-th]]. Jakobsen:2021smu G. U. Jakobsen, G. Mogull, J. Plefka and J. Steinhoff, “Classical Gravitational Bremsstrahlung from a Worldline Quantum Field Theory,” Phys. Rev. Lett. 126 (2021) no.20, 201103 [arXiv:2101.12688 [gr-qc]]. Liu:2021zxr Z. Liu, R. A. Porto and Z. Yang, “Spin Effects in the Effective Field Theory Approach to Post-Minkowskian Conservative Dynamics,” JHEP 06 (2021), 012 [arXiv:2102.10059 [hep-th]]. Dlapa:2021npj C. Dlapa, G. Kälin, Z. Liu and R. A. Porto, “Dynamics of binary systems to fourth Post-Minkowskian order from the effective field theory approach,” Phys. Lett. B 831 (2022), 137203 [arXiv:2106.08276 [hep-th]]. Jakobsen:2021lvp G. U. Jakobsen, G. Mogull, J. Plefka and J. Steinhoff, “Gravitational Bremsstrahlung and Hidden Supersymmetry of Spinning Bodies,” Phys. Rev. Lett. 128 (2022) no.1, 011101 [arXiv:2106.10256 [hep-th]]. Jakobsen:2021zvh G. U. Jakobsen, G. Mogull, J. Plefka and J. Steinhoff, “SUSY in the sky with gravitons,” JHEP 01 (2022), 027 [arXiv:2109.04465 [hep-th]]. Dlapa:2021vgp C. Dlapa, G. Kälin, Z. Liu and R. A. Porto, “Conservative Dynamics of Binary Systems at Fourth Post-Minkowskian Order in the Large-Eccentricity Expansion,” Phys. Rev. Lett. 128 (2022) no.16, 161104 [arXiv:2112.11296 [hep-th]]. Jakobsen:2022fcj G. U. Jakobsen and G. Mogull, “Conservative and Radiative Dynamics of Spinning Bodies at Third Post-Minkowskian Order Using Worldline Quantum Field Theory,” Phys. Rev. Lett. 128 (2022) no.14, 141102 [arXiv:2201.07778 [hep-th]]. Jakobsen:2022psy G. U. Jakobsen, G. 
Mogull, J. Plefka and B. Sauer, “All things retarded: radiation-reaction in worldline quantum field theory,” JHEP 10 (2022), 128 [arXiv:2207.00569 [hep-th]]. Kalin:2022hph G. Kälin, J. Neef and R. A. Porto, “Radiation-reaction in the Effective Field Theory approach to Post-Minkowskian dynamics,” JHEP 01 (2023), 140 [arXiv:2207.00580 [hep-th]]. Dlapa:2022lmu C. Dlapa, G. Kälin, Z. Liu, J. Neef and R. A. Porto, “Radiation Reaction and Gravitational Waves at Fourth Post-Minkowskian Order,” Phys. Rev. Lett. 130 (2023) no.10, 101401 doi:10.1103/PhysRevLett.130.101401 [arXiv:2210.05541 [hep-th]]. Dlapa:2023hsl C. Dlapa, G. Kälin, Z. Liu and R. A. Porto, “Bootstrapping the relativistic two-body problem,” [arXiv:2304.01275 [hep-th]]. Jakobsen:2022zsx G. U. Jakobsen and G. Mogull, “Linear response, Hamiltonian, and radiative spinning two-body dynamics,” Phys. Rev. D 107 (2023) no.4, 044033 doi:10.1103/PhysRevD.107.044033 [arXiv:2210.06451 [hep-th]]. Jakobsen:2023ndj G. U. Jakobsen, G. Mogull, J. Plefka, B. Sauer and Y. Xu, “Conservative scattering of spinning black holes at fourth post-Minkowskian order,” [arXiv:2306.01714 [hep-th]]. Manohar:2022dea A. V. Manohar, A. K. Ridgway and C. H. Shen, “Radiated Angular Momentum and Dissipative Effects in Classical Scattering,” Phys. Rev. Lett. 129 (2022) no.12, 121601 [arXiv:2203.04283 [hep-th]]. Damgaard:2023vnx P. H. Damgaard, E. R. Hansen, L. Planté and P. Vanhove, “The Relation Between KMOC and Worldline Formalisms for Classical Gravity,” [arXiv:2306.11454 [hep-th]]. Kalin:2019rwq G. Kälin and R. A. Porto, “From Boundary Data to Bound States,” JHEP 01 (2020), 072 [arXiv:1910.03008 [hep-th]]. Bjerrum-Bohr:2019kec N. E. J. Bjerrum-Bohr, A. Cristofoli and P. H. Damgaard, “Post-Minkowskian Scattering Angle in Einstein Gravity,” JHEP 08 (2020), 038 doi:10.1007/JHEP08(2020)038 [arXiv:1910.09366 [hep-th]]. Adamo:2022qci T. Adamo, A. Cristofoli, A. Ilderton and S. Klisch, “All order gravitational waveforms from scattering amplitudes,” [arXiv:2210.04696 [hep-th]]. Adamo:2022ooq T. Adamo and R. Gonzo, “Bethe-Salpeter equation for classical gravitational bound states,” JHEP 05 (2023), 088 doi:10.1007/JHEP05(2023)088 [arXiv:2212.13269 [hep-th]]. Brandhuber:2023hhy A. Brandhuber, G. R. Brown, G. Chen, S. De Angelis, J. Gowdy and G. Travaglini, “One-loop gravitational bremsstrahlung and waveforms from a heavy-mass effective field theory,” JHEP 06 (2023), 048 doi:10.1007/JHEP06(2023)048 [arXiv:2303.06111 [hep-th]]. Herderschee:2023fxh A. Herderschee, R. Roiban and F. Teng, “The sub-leading scattering waveform from amplitudes,” JHEP 06 (2023), 004 doi:10.1007/JHEP06(2023)004 [arXiv:2303.06112 [hep-th]]. Elkhidir:2023dco A. Elkhidir, D. O'Connell, M. Sergola and I. A. Vazquez-Holm, “Radiation and Reaction at One Loop,” [arXiv:2303.06211 [hep-th]]. Georgoudis:2023lgf A. Georgoudis, C. Heissenberg and I. Vazquez-Holm, “Inelastic exponentiation and classical gravitational scattering at one loop,” JHEP 06 (2023), 126 doi:10.1007/JHEP06(2023)126 [arXiv:2303.07006 [hep-th]]. Gonzo:2023cnv R. Gonzo and A. Ilderton, “Wave scattering event shapes at high energies,” [arXiv:2305.17166 [hep-th]]. Damgaard:2019lfh P. H. Damgaard, K. Haddad and A. Helset, “Heavy Black Hole Effective Theory,” JHEP 11 (2019), 070 doi:10.1007/JHEP11(2019)070 [arXiv:1908.10308 [hep-ph]]. Aoude:2020onz R. Aoude, K. Haddad and A. Helset, “On-shell heavy particle effective theories,” JHEP 05 (2020), 051 doi:10.1007/JHEP05(2020)051 [arXiv:2001.09164 [hep-th]]. Brandhuber:2021eyq A. Brandhuber, G. Chen, G. 
Travaglini and C. Wen, “Classical gravitational scattering from a gauge-invariant double copy,” JHEP 10 (2021), 118 doi:10.1007/JHEP10(2021)118 [arXiv:2108.04216 [hep-th]]. Brandhuber:2021bsf A. Brandhuber, G. Chen, H. Johansson, G. Travaglini and C. Wen, “Kinematic Hopf Algebra for Bern-Carrasco-Johansson Numerators in Heavy-Mass Effective Field Theory and Yang-Mills Theory,” Phys. Rev. Lett. 128 (2022) no.12, 121601 doi:10.1103/PhysRevLett.128.121601 [arXiv:2111.15649 [hep-th]]. Brandhuber:2022enp A. Brandhuber, G. R. Brown, G. Chen, J. Gowdy, G. Travaglini and C. Wen, “Amplitudes, Hopf algebras and the colour-kinematics duality,” JHEP 12 (2022), 101 doi:10.1007/JHEP12(2022)101 [arXiv:2208.05886 [hep-th]]. Lee:2012cn R. N. Lee, “Presenting LiteRed: a tool for the Loop InTEgrals REDuction,” [arXiv:1212.2685 [hep-ph]]. Lee:2013mka R. N. Lee, “LiteRed 1.4: a powerful tool for reduction of multiloop integrals,” J. Phys. Conf. Ser. 523 (2014), 012059 doi:10.1088/1742-6596/523/1/012059 [arXiv:1310.1145 [hep-ph]].
http://arxiv.org/abs/2307.07625v1
20230714205220
Influences in Mixing Measures
[ "Frederic Koehler", "Noam Lifshitz", "Dor Minzer", "Elchanan Mossel" ]
math.PR
[ "math.PR", "cs.DM", "math.CO" ]
Influences in Mixing Measures
Frederic Koehler[Stanford University] Noam Lifshitz[Hebrew University] Dor Minzer[MIT] Elchanan Mossel[MIT]
August 12, 2023
===========================================================================================================================

The theory of influences in product measures has profound applications in theoretical computer science, combinatorics, and discrete probability. This deep theory is intimately connected to functional inequalities and to the Fourier analysis of discrete groups. Originally, influences of functions were motivated by the study of social choice theory, wherein a Boolean function represents a voting scheme, its inputs represent the votes, and its output represents the outcome of the elections. Thus, product measures represent a scenario in which the votes of the parties are randomly and independently distributed, which is often far from the truth in real-life scenarios. We begin to develop the theory of influences for more general measures under mixing or correlation decay conditions. More specifically, we prove analogues of the KKL and Talagrand influence theorems for Markov Random Fields on bounded degree graphs with correlation decay. We show how some of the original applications of the theory of influences, in terms of voting and coalitions, extend to general measures with correlation decay. Our results thus shed light both on voting with correlated voters and on the behavior of general functions of Markov Random Fields (also called "spin-systems") with correlation decay.

§ INTRODUCTION

Starting with the works of Ben-Or and Linial <cit.> and Kahn, Kalai, and Linial <cit.>, Analysis of Boolean functions became a major area of research in combinatorics, probability and theoretical computer science. It has deep and interesting connections to functional and isoperimetric inequalities, and other important areas in probability and combinatorics. It has had a deep impact on property testing, hardness of approximation, the theory of voting and the theory of percolation, see e.g. <cit.>. At the technical level this theory crucially relies on: * Hyper-contractive inequalities that hold for product measures that are not too biased, and * Explicit representations of functions in explicit bases, which correspond to Fourier bases and their generalizations. Major recent effort has been devoted to extending the theory to spaces for which hyper-contractive inequalities do not hold. Notably, it was shown that a notion of global hypercontractivity holds for such spaces and that this in turn implies many interesting applications <cit.>. In the other direction, extending the theory to spaces that are not highly symmetric and do not have explicit bases remained a major challenge. Our main contribution in this paper is to prove very general versions of two major theorems of the analysis of Boolean functions, the KKL and Talagrand theorems, in the setting of general Gibbs measures on bounded degree graphs with correlation decay. The study of such measures is fundamental in statistical physics, graphical models, and in the analysis of Markov chains and spectral independence, see e.g. <cit.>. Such measures are known to satisfy the log-Sobolev inequality (equivalently, they are hyper-contractive) but do not possess explicit orthogonal bases. We show how some of the original applications of the theory of influences extend to the new setup: for general voting functions on n voters there exists a voter whose influence is Ω(log n / n) times the variance. 
For monotone voting functions there exists a coalition of O(n / log n) voters who, by flipping their votes, can control the elections with probability arbitrarily close to 1.

§ DEFINITIONS AND MAIN RESULTS

We recall the definition of the Glauber dynamics, log-Sobolev constant, etc. See e.g., <cit.> for references.

Glauber dynamics. Let ν be a probability distribution on the space Σ^n where Σ is an arbitrary finite set. Let P_i be the Markov operator that resamples coordinate i from the stationary distribution ν conditioned on all other coordinates, so that (P_i f)(x) = 𝔼_ν[f(X) | X_- i = x_- i ], where x_-i is the vector of all coordinates other than i. We will consider the continuous time Glauber dynamics, where each coordinate i carries an independent Poisson clock and, whenever the clock of coordinate i rings, that coordinate is updated according to P_i. It is well known that this defines a semigroup H_t, where H_t is the transition matrix of the configuration from time 0 to time t. We recall that H being a semigroup means that it satisfies H_s+t = H_s H_t = H_t H_s for all s and t. Moreover, we can write H_t = e^t L, where L, called the generator, is given by L = ∑_i L_i and L_i f = P_i f - f, so that L_i^2 = (P_i - I)^2 = -P_i + I = -L_i. With this notation, the Dirichlet form of the Glauber dynamics is defined to be ℰ_ν(f,f) = -𝔼_X ∼ν[ f(X) (L f)(X)] = ∑_i 𝔼_ν[(L_i f)^2]. Each L_i can be thought of as a generalized notion of partial derivative with respect to coordinate i, so the Dirichlet form can be viewed as a natural measure of the size of the gradient of the function f (from the perspective of the chosen semigroup).

Log-Sobolev inequality. We say the Glauber dynamics for ν satisfy the log-Sobolev inequality with constant ρ > 0 if ρ Ent_ν[f] ≤ 2 ℰ_ν(√(f), √(f)) for all functions f : Σ^n →ℝ_≥ 0, where Ent_ν[f] = 𝔼_ν[f log f] - 𝔼_ν[f] log𝔼_ν[f] is the relative entropy functional. This is equivalent to the hypercontractivity statement that for all functions f, t ≥ 0, and p ≥ 1 + e^-2ρ t, ‖H_t f‖_2 ≤ ‖f‖_p, where ‖·‖_p denotes the L_p(ν) norm ‖f‖_p = (𝔼_ν |f|^p)^1/p. The log-Sobolev inequality implies that the Poincaré inequality λ Var_ν(f) ≤ ℰ_ν(f,f) holds with some constant λ ≥ ρ and for all functions f : Σ^n →ℝ. This is equivalent to the statement that Var_ν(H_t f) ≤ e^-λ t Var_ν(f) for all such f.

Markov property. We say ν is a Markov random field with respect to a graph G if it satisfies the Markov property: for any vertex i with neighbors 𝒩(i) in G and for X ∼ν, X_i is conditionally independent of X_∼ i given X_𝒩(i). Such a distribution is also referred to as an undirected graphical model, see <cit.>. Given a graph G, we let d_G(i,j) denote the graph distance between i and j.

Other notation. Given square matrices X,Y we write [X,Y] = XY - YX for the usual commutator. We write [X, ·] to denote the adjoint map Y ↦ [X,Y]. We now come to the important definition of influences for our setting. Given a function f : Σ^n →{0,1}, we define the influence of coordinate i to be I_i(f) = ℙ_X ∼ν[∃ x'_i, f(X) ≠ f(X_1,…,X_i - 1, x'_i, X_i + 1, …, X_n)]. We write d_H(x,y) = #{i : x_i ≠ y_i} to denote the usual Hamming metric on Σ^n. Given a vector x ∈Σ^n and i ∈ [n], x_∼ i∈Σ^n - 1 denotes the same vector with coordinate i removed.

§.§ Main Results

Our results hold in a very general setting: they apply to all undirected graphical models with bounded marginals and bounded degree which satisfy the log-Sobolev inequality. These assumptions are formally laid out below. 
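As a concrete illustration of the objects just introduced (the single-site kernel P_i, the Glauber chain, and the influence I_i(f)), the following minimal Python sketch is a toy example of our own: the cycle graph, the Ising-type weights, the value β = 0.3, and the majority function are illustrative choices that play no role in the proofs, and the discrete-time random-scan chain is used in place of the continuous-time Poisson-clock dynamics.

import math
import random

# Toy Markov random field: Ising-type measure on a cycle of n spins,
#   nu(x) proportional to exp(beta * sum_i x_i x_{i+1}),   x_i in {-1, +1}.
n, beta = 9, 0.3

def glauber_update(x, i):
    """Resample coordinate i from nu given all other coordinates (the kernel P_i).
    By the Markov property the conditional law depends only on the two neighbours of i."""
    s = x[(i - 1) % n] + x[(i + 1) % n]
    p_plus = math.exp(beta * s) / (math.exp(beta * s) + math.exp(-beta * s))
    y = list(x)
    y[i] = 1 if random.random() < p_plus else -1
    return tuple(y)

def approximate_sample(burn_in=500):
    """Run the discrete-time random-scan Glauber chain to near-stationarity."""
    x = tuple(random.choice([-1, 1]) for _ in range(n))
    for _ in range(burn_in):
        x = glauber_update(x, random.randrange(n))
    return x

f = lambda x: int(sum(x) > 0)   # majority function; n is odd, so there are no ties

def influence(i, num_samples=2000):
    """Monte Carlo estimate of I_i(f): the probability that changing coordinate i changes f."""
    changed = 0
    for _ in range(num_samples):
        x = approximate_sample()
        y = list(x)
        y[i] = -y[i]
        changed += int(f(x) != f(tuple(y)))
    return changed / num_samples

print("estimated influence of coordinate 0:", influence(0))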
In Section <ref>, we illustrate some of the special cases where the log-Sobolev inequality is known to hold and give references to others. The probability measure ν on Σ^n for some n ≥ 1 satisfies that: * There exists a constant b ≥ 1 such that ν(x)/ν(y) ∈ [1/b,b] for any x,y ∈Σ^n with Hamming distance one. In other words, ν has bounded marginals under pinning. * The Glauber dynamics for ν satisfy the log-Sobolev inequality with constant ρ∈ (0,1]. * The distribution ν is a Markov random field with respect to a graph G of maximum degree Δ. In our key contribution, we show that these assumptions suffice to prove general versions of Talagrand's theorem and the KKL inequality; here and below, q denotes the alphabet size |Σ|. For any n ≥ 1, ν satisfying Assumption <ref>, and any f : Σ^n →ℝ, we have Var_ν(f) ≤ (Cq^4 b^4 Δ^2/ρ) ∑_j ‖L_j f‖_2^2/(1 + log(‖L_j f‖_2/‖L_j f‖_1)) for some absolute constant C > 0. There exists α_b,ρ,Δ,q > 0 such that the following is true. For any n ≥ 1, ν satisfying Assumption <ref>, and any f : Σ^n →{0,1}, there exists a coordinate k ∈ [n] such that I_k(f) ≥ α_b,ρ,Δ,q Var_ν(f) log(n)/n. Both of these results are derived as consequences of a new comparison inequality between the variance and derivatives of a function f (Theorem <ref> below). Our results in (<ref>) vastly generalize results of Cordero-Erausquin and Ledoux <cit.>. In <cit.> a statement similar to (<ref>) was proven under the assumption that the operators L_i and the semigroup H_t "weakly commute" (equation (15) there). This is valid for product measures and a few other interesting examples in <cit.>, such as the symmetric group, the sphere, etc. However, in our setting it fails very badly: an update at one site affects all of its neighbors, which affects their neighbors, and so on. In our proof we follow <cit.> in writing the variance as an "integral over the heat semi-group" (equation (<ref>) below). Then, in our main contribution, we provide a new analysis for this noncommutative setting which controls the commutators corresponding to all of these interactions.

§.§ Applications to voting

There is a long history of using Markov random fields/statistical physics models to model the correlated preferences of voters in elections, for example to estimate the probability of a Condorcet paradox (e.g. <cit.>). Our results have a natural interpretation in the voting context. If each entry of X ∼ν corresponds to the preference of an individual, and f : Σ^n →{0,1} is an election rule which takes these preferences as input and aggregates them into a choice between two candidates, then our generalized KKL theorem says that one voter has influence Ω(log(n)/n) provided both candidates have a non-negligible chance of winning a priori. What about larger coalitions? Before stating our result, it is natural in the context of elections to assume that voters' preferences are also binary valued (i.e. Σ = {± 1}) and that the function f is monotone, i.e. if x ≤ y then f(x) ≤ f(y). Under these assumptions, the following corollary shows in particular that a coalition of size ω(n/log(n)) has influence 1 - o(1) on a fair election. It follows by iteratively applying our generalization of the KKL theorem, and generalizes Corollary 3.5 of <cit.>, where the case of the uniform measure was considered. For any n ≥ 1 and ν satisfying Assumption <ref>, the following is true. 
For any ϵ > 0 and and monotone function f : {± 1}^n →{0,1} satisfying _ν[f] ≥ϵ, there exists a set of coordinates S ⊂ [n] such that _X ∼ν[f(X_∼ S, X_S → 1)] ≥ 1 - ϵ and |S| ≤4(1 + b) log(1/2ϵ)/α_b,ρ,Δ·n/log(n) where α_b,ρ,Δ > 0 is the constant (independent of n) from Theorem <ref>. Here the notation _X ∼ν[f(X_∼ S, X_S → 1)] refers to the expectation of f(Y) where X is drawn from μ and Y_i = 1 for i ∈ S while Y_i = X_i for i ∉ S. §.§ Comparison to the Results on Phase Transitions for Monotone Measures We next compare our results to work by Graham and Grimmett <cit.> and results of Duminil-Copin Raoufi and Tassion <cit.> who proved a version of the KKL theorem and sharp thresholds for “monotonic” measures. Consider a monotone function f : {0,1}^n →{0,1} and a measure μ on {0,1}^n. Recall the definition of influence, Definition <ref>. We now define the effect e_i(f,μ) of a variable i on f under μ as _μ[f,x_i] = _μ[f x_i] - _μ[f] E_μ[x_i] (note that this is p(1-p) times the effect as defined in <cit.>). We note that * If μ is the uniform measure and f is monotone then the effect and the influence are the same up to a constant factor. If μ is a monotone measure in the sense of <cit.> and f is monotone, the size of the effect can be lower bounded by the influence using the FKG inequality (see <cit.>). * The papers <cit.> and <cit.> both prove sharp phase transitions based on the effects. In <cit.>, they do so by proving a version of KKL and in <cit.> they do so by generalizing the results of <cit.> using effects. Interestingly, their results do not require any correlation decay of the measure, so unlike our results they do not require the log-Sobolev inequality. They do require monotonicity of the measure which our results do not. There are very important differences between the interpretations of effects and influences. (The importance of this difference was also discussed by Graham and Grimmett <cit.> where they called effects and influences the “conditional influences” and “absolute influences” respectively.) To compare influences and effects in a concrete setting, we consider the finite-volume Ising model with parameter β on the square lattice in dimension d ≥ 2. In this (classical) setting, the vertices of our graph correspond to the integer elements of [-L/2,L/2]^d where L ≥ 1 is the sidelength of the box, and the edges E of the graph connect vertices which are neighbors in the square lattice, i.e. which are Euclidean distance 1 from each other. Note that there are n = (L + 1)^d many vertices in total. Given this graph, the ferromagnetic Ising model is the distribution on {± 1}^n of the form: ν(x) ∝exp( β∑_(i,j) ∈ E x_i x_j ). Let β_c(d) be the critical inverse temperature of the lattice Ising model in dimension d (see e.g. <cit.>). Below β_c is the high-temperature/subcritical regime and above β_c is the low-temperature/supercritical regime of the model. Informally speaking, in the low temperature phase, the model exhibit symmetry breaking, a typical sample from the model lies either in a mostly + phase or in a mostly - phase, and because of this the Glauber dynamics mix torpidly. Let f be a monotone function from {± 1}^n →{0,1} with variance Ω(1). The results of <cit.> imply that for all β≥ 0: * There exists a variable whose effect is at least Ω(log n / n). * There exists a set S consisting of O(n /log n) many variables such that E[f | X_S = +] = 1-o(1). 
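The difference between the two notions can be made concrete on a small instance of the Ising measure just defined. The following Python sketch is again a toy illustration of our own (a 3×3 grid with free boundary conditions, the majority function, and two arbitrary values of β; it is not part of the formal argument): it computes the effect and the influence of a single coordinate by exhaustive enumeration.

import itertools
import math

# Ising measure on a 3x3 grid with free boundary:
#   nu(x) proportional to exp(beta * sum_{(u,v) in E} x_u x_v),  x in {-1,+1}^9.
L = 3
idx = {(a, b): L * a + b for a in range(L) for b in range(L)}
edges = [(idx[(a, b)], idx[(a + 1, b)]) for a in range(L - 1) for b in range(L)]
edges += [(idx[(a, b)], idx[(a, b + 1)]) for a in range(L) for b in range(L - 1)]
n = L * L

f = lambda x: int(sum(x) > 0)   # majority of the nine spins

def effect_and_influence(beta, i=0):
    probs = {x: math.exp(beta * sum(x[u] * x[v] for u, v in edges))
             for x in itertools.product([-1, 1], repeat=n)}
    Z = sum(probs.values())
    probs = {x: w / Z for x, w in probs.items()}
    Ef = sum(p * f(x) for x, p in probs.items())
    Exi = sum(p * x[i] for x, p in probs.items())
    # effect of coordinate i as defined above: E[f x_i] - E[f] E[x_i]
    effect = sum(p * f(x) * x[i] for x, p in probs.items()) - Ef * Exi
    # influence of coordinate i: probability that flipping x_i changes the value of f
    def flipped(x):
        y = list(x)
        y[i] = -y[i]
        return tuple(y)
    influence = sum(p for x, p in probs.items() if f(x) != f(flipped(x)))
    return effect, influence

for beta in (0.2, 1.5):
    e, inf_ = effect_and_influence(beta)
    print(f"beta = {beta}:  effect of x_0 = {e:.3f},  influence of x_0 = {inf_:.3f}")

On this toy instance one already sees the influence of a spin drop sharply as β grows while its effect does not, in line with the discussion that follows.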
As we will now illustrate, the analogous results with influences replaces by effects will fail badly due to the aforementioned phase transition in the Ising model. The log-Sobolev inequality for this measure, see e.g. <cit.> allows us to apply our results to deduce that for β < β_c, i.e. in the subcritical regime of the model, we have that: * There exists a variable whose influence is at least Ω(log n / n). * There exists a set S consisting of O(n /log n) many variables such that E[f(X_-S,X_S → 1)] = 1-o(1). On the other hand, when β > β_c, i.e. in the supercritical regime, it immediately follows from rigorous results on the large deviations of the magnetization in the Ising model <cit.> that: * For every i, the effect of X_i is Θ(1). * For every i, the influence of X_i is exp(-Θ(L^d - 1)). * For a uniformly random set S with |S| = ω(1) it holds that E[f | X_S = +] = 1-o(1). * For every set S with |S| = o(n) it holds that E[f(X_-S,X_S → 1)] = 0.5 + exp(-Θ(L^d - 1)). This shows that our results cannot be proven without assuming correlation decay. Intuitively, for non-product measures there is a dramatic difference between fixing a variable and conditioning on a variable, as conditioning on a variable changes the measure and therefore changes all other variables. This shows that our results and the results of GC and DCRT and incomparable. In the setting where both our results and theirs apply (monotone measures which satisfy Assumption <ref>), our versions of Talagrand and KKL are stronger since the influences lower bound the effects. § PROOF OF MAIN RESULTS In this section, we prove all of our results. It was observed by Cordero-Erasquin and Ledoux <cit.> that Talagrand's inequality (and then KKL) can be deduced from an estimate of the form (<ref>) below. The most important contribution of our work is to prove this estimate (Theorem <ref>) in our very general setting, which we do in Section <ref> below. Given this estimate, we derive the generalized Talagrand's inequality and KKL in Section <ref>, and then show how to obtain the consequences for coalitions in Section <ref>. §.§ Main functional inequality The following is the main technical claim which implies Talagrand's inequality and KKL. There exists absolute constants c,c' > 0 such that the following is true. For any ν satisfying Assumption <ref>, f : Σ^n →ℝ, and for any positive T ≤ c/b^2q^2Δ^2, _ν(f) ≤c' q^2 b^2/1 - e^-ρ T∫_0^T ∑_j = 1^n L_j f_1 + e^-2ρ t^2 dt. Since the log-Sobolev inequality implies the Poincare inequality, we have that for any T ≥ 0 (f) = (f) - (H_T f) + (H_T f) ≤(f) - (H_T f) + e^-ρ T(F) and so (f) ≤1/1 - e^-ρ T [(f) - (H_T f)]. To upper bound (f), it thereby suffices to upper bound for some T > 0 the quantity (f) - (H_T f) = ∫_0^T ℰ(H_t f, H_t f) dt = ∑_i ∫_0^T (L_i H_t f)^2 dt. The first equality in the equation above holds for any Markov semigroup as proven in <cit.>. We have (L_i H_t f)^2 = (L_i _k ∼ Poi(tn)[P^k f])^2 and if [A,B] = AB - BA denotes the commutator then [L_i, P_j] for i ≁j and so [L_i, P] = 1/n∑_J : j ∼ i [L_i, P_j]. It follows that L_i P^k = PL_i P^k - 1 + [L_i, P] P^k - 1 = PL_i P^k - 1 + 1/n∑_j : j ∼ i [L_i, P_j] P^k - 1 and applying the argument inductively yields L_iP^k = P^k L_i + ∑_ℓ = 1^k1/n∑_j : j ∼ i P^ℓ - 1 [L_i, P_j] P^k - ℓ We recall the following fact, sometimes called the Hadamard or Baker-Hausdorff Lemma: For square matrices X,Y, we have e^X Y e^-X = e^[X,·] Y. The following lemma computes the effect of commuting L_i and H_T. 
For any T ≥ 0 and i ∈ [n] we have L_i H_T = H_T M_T,i where M_T,i := ∑_k = 0^∞T^k/k!∑_(j_1,…,j_k) ∈𝒮_k,i [ ⋯ [[[P_i,P_j_1], P_j_2], P_j_3] ⋯ P_j_k]. Here 𝒮_k,i := {(j_1,…,j_k) : j_i ∈𝒩^+({i,j_1, …, j_i - 1})} and 𝒩^+(U) denotes the union of U and the neighbors of nodes U in the graph. We have L_i H_T - H_T L_i= ∫_0^T d/dt [H_T - t L_i H_t] dt = ∫_0^T [- H_T - t L L_i H_t + H_T - t L_i L H_t]] dt = ∫_0^T H_T - t [L_i,L] H_t dt. Applying the same argument again gives L_i H_T = H_T L_i + T H_T [L_i, L] + ∫_0^T H_T - t∫_0^t H_t - s [[L_i,L],L] H_s ds dt and iterating this, we have the Taylor expansion Note that by applying Lemma <ref> to a negated matrix X, we have the identity for square matrices X,Y e^-X Y e^X = e^[·, X] Y. Since H_t = e^tL, we therefore get H_T^-1 L_i H_T = ∑_k = 0^∞T^k/k! [L_i,L]^(k) where [L_i,L]^(k) denotes the iterated commutator of the following form: [L_i,L]^(0) = L_i and [L_i,L]^(k) = [[L_i,L]^(k - 1),L]. To compute the commutator, first observe [L_i,L] = ∑_j : i ∼ j [P_i,P_j] since L_i = P_i - I and P_i commutes with P_j when i ≁j. For the same reason, we have more generally that [L_i,L]^(k) = ∑_(j_1,…,j_k) ∈𝒮_k,i [ ⋯ [[[P_i,P_j_1], P_j_2], P_j_3], ⋯ P_j_k] which proves the result. With the notation of (<ref>), |𝒮_k,i| ≤ (Δ + 1)^k k^k for any i,k. Observe that we can encode j_k as an element of [k] × [Δ + 1] by choosing one of its predecessors i,…,j_k - 1 and specifying whether j_k is equal to that node or one of that node's Δ neighbors. Performing this encoding recursively proves the result. Therefore recalling the definition of M_t,i in (<ref>) to get the first equality and applying hypercontractivity to get the following inequality we have ∫_0^T L_i H_t f_2^2 dt = ∫_0^T H_t M_t,if_2^2 dt ≤∫_0^T M_t,if^2_1 + e^-2ρ t dt = ∫_0^T ∑_k = 0^∞t^k/k!∑_(j_1,…,j_k) ∈𝒮_i,k [ ⋯ [[[P_i,P_j_1], P_j_2], P_j_3], ⋯, P_j_k] f ^2_1 + e^-2ρ t dt ≤∫_0^T (∑_k = 0^∞t^k/k!∑_(j_1,…,j_k) ∈𝒮_i,k[ ⋯ [[[P_i,P_j_1], P_j_2], P_j_3], ⋯, P_j_k] f _1 + e^-2ρ t)^2 dt ≤(∑_k = 0^∞T^k/k! (Δ + 1)^k k^k) ∫_0^T ∑_k = 0^∞t^k/k!∑_(j_1,…,j_k) ∈𝒮_i,k[ ⋯ [[[P_i,P_j_1], P_j_2], P_j_3], ⋯, P_j_k] f _1 + e^-2ρ t^2 dt ≤ 2∫_0^T ∑_k = 0^∞t^k/k!∑_(j_1,…,j_k) ∈𝒮_i,k[ ⋯ [[[P_i,P_j_1], P_j_2], P_j_3], ⋯, P_j_k] f _1 + e^-2ρ t^2 dt where we used the triangle inequality, in the second-to-last step we applied the Cauchy-Schwarz inequality and Lemma <ref>, and in the last step we used the assumption that T is small compared to 1/Δ^2. For any p ≥ 1, i ∈ [n], k ≥ 0 and for 𝒮_i,k as defined in (<ref>), we have ∑_(j_1,…,j_k) ∈𝒮_i,k[ ⋯ [[[P_i,P_j_1], P_j_2], P_j_3], ⋯ P_j_k] f ^2_p≤ 2(Δ + 1)^k (k + 1)^k + 4 (2qb)^2k + 2max_j : d_G(j,i) ≤ kL_j f_p^2 For notational convenience, define j_0 = i. Observe that [ ⋯ [[[P_i,P_j_1], P_j_2], P_j_3], ⋯, P_j_k] = P_j_k [ ⋯[[[P_i,P_j_1], P_j_2], P_j_3], ⋯, P_j_k - 1] - [ ⋯[[[P_i,P_j_1], P_j_2], P_j_3], ⋯, P_j_k - 1] P_j_k = ∑_α (- 1)^r(α)(P_j_k P_α - P_α P_j_k) where α ranges over a subset of permutations of (j_0,…,j_k - 1) of size at most 2^k that arise when expanding out the iterated commutator, and r(α) ∈{0,1} encodes the corresponding sign of this term. Let K_j_0,…,j_k(x) = { y : y_∼{j_0,…,j_k} = x_∼{j_0, …, j_k}} denote the set of spin configurations which disagree with x only within {i,j_1,…,j_k}. Using that the dynamics only update sites j_0,…,j_k and using the triangle inequality we have that |([P_j_k P_α - P_α P_j_k]f)(x)| ≤max_y,y' ∈ K_i,j_1,…,j_k(x) |f(y) - f(y')| ≤ (k + 1) max_z,z' ∈ K_i,j_1,…,j_k(x) : d_H(z,z') = 1 |f(z) - f(z')|. 
Hence taking the average over x, we find ∑_x ν(x) |([P_j_k P_α - P_α P_j_k]f)(x)|^p ≤ (k + 1)^p ∑_x ν(x) max_z,z' ∈ K_j_0,…,j_k(x) : d_H(z,z') = 1 |f(z) - f(z')|^p ≤ 2^p (k + 1)^p ∑_x ν(x) max_z ∈ K_j_0,…,j_k(x), ℓ∈{j_0,…,j_k} |(L_ℓ f)(z)|^p ≤ 2^p (k + 1)^p ∑_x ν(x) max_z ∈ K_j_0,…,j_k(x) (|(L_j_0 f)(z)| + ⋯ + |(L_j_k f)(z)|)^p ≤ 2^p (k + 1)^p (qb)^k + 1∑_z ν(z) (|(L_j_0 f)(z)| + ⋯ + |(L_j_k f)(z)|)^p where in the second to last step we used Lemma <ref>, and we arrived at the last step by considering the z which achieves the inner maximum, and used the fact that ν(x) ≤ b^k + 1ν(z) and that there are at most q^k + 1 such x for each z. Hence by the L_p triangle inequality, p ≥ 1, and 1 ≤ b, [P_j_k P_α - P_α P_j_k]f_p ≤ 2(k + 1) (qb)^k + 1∑_r = 0^k L_j_r f_p ≤ 2(k + 1)^2 (qb)^k + 1max_rL_j_r f_p. Using that α ranges over a set of size at most 2^k, we find by the L_p triangle inequality [ ⋯ [[[P_i,P_j_1], P_j_2], P_j_3], ⋯, P_j_k]f_p ≤∑_αP_j_k P_α - P_α P_j_k]_p ≤ 2(k + 1)^2 (2qb)^k + 1max_0 ≤ r ≤ kL_j_r f_p and using Lemma <ref> we have ∑_(j_1,…,j_k) ∈𝒮_i,k[ ⋯ [[[P_i,P_j_1], P_j_2], P_j_3], ⋯ P_j_k] f ^2_p≤ 2(Δ + 1)^k (k + 1)^k + 4 (2qb)^2k + 2max_j : d_G(j,i) ≤ kL_j f_p^2 as desired. Suppose ν is a distribution on Σ^n. For any function f : Σ^n →ℝ, and y,z ∈Σ^n differing only at site i we have 1/2 |f(y) - f(z)| ≤max{|(L_i f)(y)|,|(L_i f)(z)|}. For any x ∈Σ^n we have |(L_i f)(x)| ≤max_y,z : x_∼ i = y_∼ i = z_∼ i |f(y) - f(z)|. Expanding the definition, we have (L_i f)(x) = (P_i f)(x) - f(x) = [f(X) | X_∼ i = x_∼ i] - f(x) so the latter bound follows immediately, and the former bound follows from the triangle inequality as |f(y) - f(z)| ≤ |(L_j f)(y)| + |(L_j f)(z)| ≤ 2 max{|(L_j f)(y)|, |(L_j f)(z)|}. Using Lemma <ref>, if T ≤ c/q^2 b^2Δ^2 for some absolute constant c > 0, we have for all t ≤ T that for some constant c' > 0, ∑_k = 0^∞t^k/k!∑_(j_1,…,j_k) ∈𝒮_i,k[ ⋯ [[[P_i,P_j_1], P_j_2], P_j_3], ⋯, P_j_k] f ^2_1 + e^-2ρ t≤ q^2 b^2 ∑_j = 1^n (c'/Δ)^d_G(j,i)L_j f_1 + e^-2ρ t^2 and summing over i and using that the number of nodes at exactly distance k from node j is at most Δ^k, this gives 2∑_i ∑_k = 0^∞t^k/k!∑_(j_1,…,j_k) ∈𝒮_i,k[ ⋯ [[[P_i,P_j_1], P_j_2], P_j_3], ⋯, P_j_k] f ^2_1 + e^-2ρ t≤ c' q^2 b^2 ∑_j = 1^n L_j f_1 + e^-2ρ t^2. Hence we have for T ≤ c/q^2 b^2Δ^2 that ∫_0^T L_i H_t f_2^2 dt ≤ c' q^2 b^2 ∫_0^T ∑_j = 1^n L_j f_1 + e^-2ρ t^2 dt which gives the desired bound (f) ≤c' q^2 b^2/1 - e^-ρ T∫_0^T ∑_j = 1^n L_j f_1 + e^-2ρ t^2 dt. §.§ Generalized Talagrand and KKL Inequalities We now show how to deduce the Talagrand and KKL inequalities from Theorem <ref>. The proof of these implications follows from the work of Cordero-Erausquin and Ledoux <cit.> and is reproduced for convenience. The first result generalizes Talagrand's inequality: For any n ≥ 1, ν satisfying Assumption <ref>, and any f : Σ^n →ℝ, we have _ν(f) ≤Cq^4 b^4 Δ^2/ρ∑_j L_j f_2^2/1 + log(L_j f_2/L_j f_1) for some absolute constant C > 0. Making the change of variables p = 1 + e^-2ρ t, dp = -2ρ e^-2ρ t dt and assuming T ≤ 1/2ρ we have by Holder's inequality ∫_0^T L_j f^2_1 + e^-2ρ t dt ≤2/ρ∫_1^2 L_j f^2_p dp ≤2/ρL_j f_2^2 ∫_1^2 d_j^2θ(p) dp where 1/p = θ + (1 - θ)/2 = (1 + θ)/2 and d_j := L_j f_1/L_j f_2 ≤ 1. Note that dθ/dp = -2/p^2 so making the change of variables s = 2θ(p), ds = (-4/p^2) dp we have ∫_1^2 d_j^2θ(p) dp ≤∫_0^2 d_j^s (p(s)^2/4) ds ≤∫_0^2 d_j^s ds = 1 - d_j^2/log(1/d_j) = (1 - d_j^2)(1 + 1/log(1/d_j))/1 + log(1/d_j)≤2/1 + log(1/d_j). hence ∫_0^T L_j f^2_1 + e^-2ρ t dt ≤4/ρ(1 + log(1/d_j)). 
Combining with Theorem <ref>, we have for T = c/q^2 b^2 Δ^2 that for some absolute constant C > 0 (f) ≤Cq^4 b^4 Δ^2/ρ∑_j L_j f_2^2/1 + log(L_j f_2/L_j f_1) which proves the analogue of Talagrand's inequality. Now we generalize KKL: There exists α_b,ρ,Δ,q > 0 such that the following is true. For any n ≥ 1, ν satisfying Assumption <ref>, and any f : Σ^n →{0,1}, there exists a coordinate k ∈ [n] such that I_k(f) ≥α_b,ρ,Δ,q(f)log(n)/n. By combining Lemma <ref> with Theorem <ref> we have that (f) ≤ C ∑_j I_j(f)/1 - log(bq√(I_j(f))) where C = C_b,ρ,Δ,q > 0. Fix b,ρ,Δ,q and suppose for contradiction that the conclusion of the theorem is false. The conclusion of the theorem is trivially true if n = 1, so it must be that for any α∈ [0,1] there exists n ≥ 2, ν satisfying Assumption <ref>, and f : Σ^n →{0,1} so that I_k(f) ≤α(f) log(n)/n for all k ∈[n]. In particular I_k(f) ≤αlog(n)/n since (f) ≤ 1. Combining with (<ref>) and dividing through by (f), we have 1 ≤C αlog(n)/1 - log(bq√(αlog(n)/n))) = C αlog n/1 - log(bq α^1/2) + (1/2) [log(n) - loglog(n)] = C α/1/log(n) - log(bq α^1/2)/log(n) + (1/2) [1 - [loglog(n)]/log(n)]. which is a contradiction for any α < min{1/b^2q^2, 1/Cinf_n ≥ 2[1/log(n) + (1/2) [1 - [loglog(n)]/log(n)] ] }. For f : Σ^n →{0,1} and any ν satisfying Assumption <ref>, we have for any p ≥ 1 I_i(f) ≥ |L_i f|^p ≥1/(qb)^p I_i(f) Recall that I_i(f) = _X ∼ν[∃ x'_i ∈Σ, f(X) f(X_1,…,X_i - 1, x'_i, X_i + 1, …, X_n)]. Given x ∈Σ^n, if f(x) = f(x_1,…,x_i - 1, x'_i, x_i + 1, …, x_n) for all x'_i ∈Σ then this means that (L_i f)(x) = 0. Since |L_i f| ≤ 1, this implies that |(L_i f)(x)| ≤(∃ x'_i ∈Σ, f(x) f(x_1,…, x_i - 1, x'_i, x_i + 1, …, x_n)). ) On the other hand, if there exists some x'_i such that f(x) f(x_1,…,x_i - 1, x'_i, x_i + 1, …, x_n) then this implies that |(L_i f)(x)| = |f(x) - [f(X) | X_∼ i = x_∼ i]| ≥(X_i = x'_i | X_∼ i = x_∼ i) ≥ 1/qb by (<ref>). Therefore 1/qb(∃ x'_i ∈Σ, f(x) f(x_1,…, x_i - 1, x'_i, x_i + 1, …, x_n)) ≤ |(L_i f)(x)| Hence taking expectation over X we have for any p ≥ 1 I_i(f) ≥ |L_i f|^p ≥1/(qb)^p I_i(f) as claimed. §.§ Application to coalitions We now discuss the application of our result to the existence of coalitions for monotone voting rules. In this section, we restrict to the case of Σ = {± 1} and recall that a function f : {± 1}^n →ℝ is monotone if f(x) ≤ f(y) for any pair such that x ≤ y coordinatewise. The following corollary shows in particular that a coalition of size ω(n/log(n)) has influence 1 - o(1) on a fair election. It follows by iteratively applying our generalization of the KKL theorem, and generalizes Corollary 3.5 of <cit.> where the case of the uniform measure was considered. For any n ≥ 1 and ν satisfying Assumption <ref>, the following is true. For any ϵ > 0 and and monotone function f : {± 1}^n →{0,1} satisfying _ν[f] ≥ϵ, there exists a set of coordinates S ⊂ [n] such that _X ∼ν[f(X_∼ S, X_S → 1)] ≥ 1 - ϵ and |S| ≤4(1 + b) log(1/2ϵ)/α_b,ρ,Δ·n/log(n) where α_b,ρ,Δ > 0 is the constant (independent of n) from Theorem <ref>. We construct a sequence of sets S_0,S_1,… iteratively. Let S_0 = {}. For each t ≥ 0, define f_t(x) = f(x_∼ S_t, 1_S_t), i.e. f_t is the same as f except that it ignores the input x_S_t and replaces it by all-ones. Either _X ∼ν[f_t = 1] ≥ 1 - ϵ or we define a set S_t + 1 in the following way. By Theorem <ref>, there exists some k_t ∈ [n] such that I_k_t(f_t) ≥α(f_t) log(n)/n where α = α_b,ρ,Δ > 0 does not depend on n, and we let S_t + 1 = S_t ∪ k_t. 
Now defining f_t + 1(x) = f(x_∼ S_t + 1, 1_S_t + 1), we have by monotonicity that _ν(f_t + 1 = 1) = _ν(f_t = 1) + _ν(f_t + 1 > f_t). Furthermore, _ν(f_t + 1 > f_t) = _X ∼ν[1(f_t(X_∼ k_t, 1) > f_t(X))] = _X ∼ν[1(f_t(X_∼ k_t, 1) > f_t(X)) · 1(X_k_t = -1)] = _X ∼ν[1(f_t(X_∼ k_t, 1) > f_t(X_∼ k_t, -1)) · 1(X_k_t = -1)] = _X ∼ν[1(f_t(X_∼ k_t, 1) > f_t(X_∼ k_t, -1)) ·(X_k_t = -1 | X_∼ k_t)] ≥I_k_t(f_t)/1 + b where in the last equality we applied the law of total expectation, and in the final step we used that _X ∼ν(X_k_t = -1 | X_∼ k_t) ≥1/1 + b by Assumption <ref>. Therefore, if p_t = (f_t = 1) we have that p_t + 1≥ p_t + α/1 + b p_t(1 - p_t) log(n)/n. It follows that if p_t < 1/2, p_t + 1≥ (1 + αlog(n)/(1 + b)n) p_t ≥exp(αlog(n)/2(1 + b)n) p_t, so p_t > 1/2 for any t > 2(1 + b)n/αlog(n)log(1/2ϵ). By a symmetrical argument, we have that p_t ≥ 1 - ϵ for t > 4(1 + b)n/αlog(n)log(1/2ϵ). § SHARP THRESHOLDS WITH RESPECT TO EXTERNAL FIELD Fixme: it is subsumed by <https://projecteuclid.org/journals/annals-of-probability/volume-34/issue-5/Influence-and-sharp-threshold-theorems-for-monotonic-measures/10.1214/009117906000000278.full> We consider the following application of the result in the spirit of Friedgut-Kalai <cit.>. Let Γ be a group with a transitive action on [n], i.e. for every i,j ∈ [n] there exists γ∈Γ such that γ(i) = j. First recall that we call a function f(x_1,…,x_n) invariant under Γ, if f(x_1,…,x_n) = f(x_γ(1),…,x_γ(n)) for all γ∈Γ. Also, recall that a function f : {± 1}^n →ℝ is monotone if f(x) ≤ f(y) for any pair such that x ≤ y coordinatewise, and supermodular if f(x) + f(y) ≤ f(x y) + f(x y) where x y denotes coordinate-wise maximum and x y denotes the coordinate-wise minimum. For example, if f is a quadratic polynomial then it is supermodular provided all degree 2 coefficients are nonnegative, and additionally monotone if the degree 1 coefficients are also nonnegative. For all b ≥ 1, ρ∈ [0,1], Δ≥ 1 there exists a constant C_b,ρ,Δ > 0 such that the following is true. Consider a model on {± 1}^n where ν_h(x) = 1/Z(h)exp(H(x_1,…,x_n) + h ∑_i=1^n x_i), where: * H(x_1,…,x_n) is a supermodular function. * H(x_1,…,x_n) is invariant under Γ. * The measures ν_h satisfy Assumption <ref> uniformly with constants b,ρ,Δ for all |h| ≤ M. and let f : {± 1}^n →{0,1} be a monotone function that is invariant under Γ. Then E_h[f] is monotone in h. Furthermore, if E_h_1[f] ≥ for some h_1 > -M and h_2 = h_1 + C_b,ρ,Δlog(1/)/n < M, then _h_2[f] ≥ 1 - . Give Examples The proof is very similar to the proof of Friedgut-Kalai. We need the following variant of Russo's lemma. In the setting of the proposition we have for |h| ≤ M, d/dh E_h[f] ≥ c_b,ρ,Δ_h(f) log n for some c_b,ρ,Δ > 0 independent of n. By explicit calculation, we have that d/dh_h[f] = ∑_i _h(f(X), X_i). By the law of total variance, _h(f(X), X_i) = _h([f(X) | X_∼ i], [X_i | X_∼ i]) + _h _h(f(X), X_i | X_∼ i). We claim that [f(X) | X_∼ i] and [X_i | X_∼ i] are monotone functions of X, so by the FKG inequality <cit.>, _h([f(X) | X_∼ i], [X_i | X_∼ i]) ≥ 0. The fact that [X_i | X_∼ i] is monotone is proved in Lemma <ref>. To see that [f(X) | X_∼ i] is monotone, observe that [f(X) | X_∼ i] = f(X^+) (X_i = 1 | X_∼ i) + f(X^-) (X_i = -1 | X_∼ i) where X^+ denotes X with coordinate i set to +. 
Hence if x ≤ y coordinate-wise, [f(X) | X_∼ i = x] = f(x^+)(X_i = 1 | X_∼ i = x_∼ i) + f(x^-)(X_i = -1 | X_∼ i = x_∼ i) ≤ f(y^+) (X_i = 1 | X_∼ i = x_∼ i) + f(y^-) (X_i = -1 | X_∼ i = x_∼ i) ≤ f(y^+) (X_i = 1 | X_∼ i = y_∼ i) + f(y^-) (X_i = -1 | X_∼ i = y_∼ i) where the last inequality follows because (X_i = 1 | X_∼ i) is monotone, f(y^+) ≥ f(y^-), and (X_i = 1 | X_∼ i = y_∼ i) + (X_i = -1 | X_∼ i = y_∼ i) = 1. On the other hand, _h(f(X), X_i | X_∼ i) = [_h(X_i | X_∼ i) (f(X_∼ i, 1) f(X_∼ i, -1))] ≥1/4b I_i(f) where the influence is evaluated at ν_h. So we have shown that _h(f(X), X_i) ≥1/4b I_i(f) and using symmetry and Theorem <ref> proves the result. In the setting of the proposition, x ↦_h(X_i = 1 | X_∼ i = x_∼ i) is a monotone function on the hypercube {± 1}^n. First, for a function f : {± 1}^n →ℝ we define its differenced version along direction i to be (D_i f)(X) = (1/2)(f(X_1,…,X_i - 1, +1, X_i + 1, …, X_n) - f(X_1,…,X_i - 1, -1, X_i + 1, …, X_n)). We can observe by direct calculation that _h[X_i = 1 | X_∼ i] = tanh(h + (D_i H)(X)). Since tanh is monotone, we just need to prove that D_i H is monotone as well. This means that for a ≤ b, we need H(a^+) - H(a^-) ≤ H(b^+) - H(b^-) where a^± denotes a with coordinate i set to ± 1. Rearranging, we need to show H(a^+) + H(b^-) ≤ H(a^-) + H(b^+) = H(a^+ b^-) + H(a^+ b^-) and this follows from the definition of supermodularity. Given the lemmas, we can complete the proof of the generalized Friedgut-Kalai result. Suppose h ∈ [h_1,M] is such that _h[f] ≤ 1/2. Then by Lemma <ref> d/dh_h[f] ≥ c_b,ρ,Δlog(n) _h(f) ≥ (1/2)c_b,ρ,Δlog(n) _h[f] so d/dhlog_h[f] ≥ (1/2)c_b,ρ,Δlog(n). It follows by the fundamental theorem of calculus that h - h_1 ≤1/2 - log()/(1/2)c_b,ρ,Δlog(n)≤1 + log(1/)/c_b,ρ,Δlog(n). By applying a symmetrical argument when _h[f] > 1/2, we see that if h ∈ [h_1,M] is such that _h[f] ≤ 1 - then h - h_1 ≤2 + 2log(1/)/c_b,ρ,Δlog(n). § SOME EXAMPLES There is a vast literature establishing log-Sobolev inequalities for spin systems on the hypercube. For concreteness, we give a few examples of settings where the log-Sobolev constant is known to be bounded, and as a consequence our results can be applied. Sparse Markov random field under ℓ_2-Dobrushin uniqueness condition. Suppose that ν is a Markov random field on a graph of maximum degree Δ with n vertices, and define the Dobrushin matrix A ∈ℝ^n × n to have zero diagonal and off-diagonal entries A_ij = max_y ∈Σ^n, z d_TV(_ν[X_i = ·| X_∼ i = y_∼ i], _ν[X_i = ·| X_∼ i,j = y_∼ i,j, X_j = z]). Suppose also that ν satisfies the b-bounded marginal assumption from Assumption <ref>. Then if A_OP < 1, it was shown by Marton <cit.> that ν satisfies the log-Sobolev inequality with log-Sobolev constant polynomial in b and q. Special case: Ising under Dobrushin's uniqueness threshold. As a special case of the above, suppose that ν(x) ∝exp(∑_(i,j) ∈ E J_ij x_i x_j + ∑_i h_i x_i) is a probability measure on the hypercube {± 1}^n parameterized by J,h where E is the edge set of a sparse graph of maximum degree Δ. If ∑_j |J_ij| < 1 - δ for all i, and ∑_i |h_i| < H, one can directly show from the definition of the model that it is marginally bounded with b = exp(O(1 + H)) and satisfies Dobrushin's uniqueness condition (by applying Gershgorin's disk theorem), hence our result applies. Note that we do not need any assumption on the sign of the interactions J_ij or external field h_i. Additional references. 
There are many settings outside of Dobrushin's uniqueness condition where the log-Sobolev inequality is known. For example, the case of the lattice Ising model we discussed earlier is not contained in this regime. See e.g. <cit.> for a few relevant references. In particular, by the result of Chen, Liu, and Vigoda <cit.>, the log-Sobolev constant can be bounded purely as a function of b,Δ and the “spectral independence” constant of the distribution ν — so our assumption that the log-Sobolev constant is bounded can be replaced by the assumption of spectral independence. Acknowledgment F.K. was supported in part by NSF award CCF-1704417, NSF award IIS-1908774, and N. Anari’s Sloan Research Fellowship. D.M. was supported by a Sloan Research Fellowship, NSF CCF award 2227876 and NSF CAREER award 2239160. E.M. is partially supported by and Vannevar Bush Faculty Fellowship award ONR-N00014-20-1-2826, ARO MURI W911NF1910217 and a Simons Investigator Award in Mathematics (622132). plain
http://arxiv.org/abs/2307.05089v1
20230711074142
Integration by parts formulas and Lie's symmetries of SDEs
[ "Francesco C. De Vecchi", "Paola Morando", "Stefania Ugolini" ]
math.PR
[ "math.PR", "math-ph", "math.MP" ]
Integration by parts formulas and Lie's symmetries of SDEs Francesco C. De Vecchi Dipartimento di Matematica “Felice Casorati”, Università degli Studi di Pavia [email protected] Paola Morando DISAA, Università degli Studi di Milano, [email protected] Stefania Ugolini Dipartimento di Matematica, Università degli Studi di Milano, [email protected] ================================================================ A strong quasi-invariance principle and a finite-dimensional integration by parts formula as in the Bismut approach to Malliavin calculus are obtained through a suitable application of Lie's symmetry theory to autonomous stochastic differential equations. The main stochastic, geometrical and analytical aspects of the theory are discussed, and applications to some Brownian motion driven stochastic models are provided. § INTRODUCTION Born in the 1970s, Malliavin calculus very soon became an (infinite-dimensional) analysis on Wiener space (see, e.g., <cit.>). The main tool of this calculus is certainly the integration by parts formula; while important applications are the study of the regularity of the probability density of random variables and the related computation of conditional expectations, nowadays many other applications to Stochastic Partial Differential Equations (SPDEs) and to probabilistic numerical methods are available (see, e.g., <cit.>). Malliavin calculus has a deep connection with the Wiener chaos decomposition and can be introduced starting from it (see <cit.>). It was also applied to diffusion processes which are solutions to Stochastic Differential Equations (SDEs), obtaining the conditions (Hormander's condition) under which the density of the law of the process is smooth and satisfies exponential bounds together with its derivatives, thus providing the famous probabilistic proof of Hormander's theorem (see, e.g., <cit.>). Quasi-invariance properties of diffusion processes with respect to flows generated by vector fields have been established in an abstract Wiener space by <cit.> and in the classical path-space, e.g., by <cit.> and <cit.>. Another fundamental result in Malliavin calculus is the integration by parts formula for functionals of Brownian motion, through which it is possible to prove the closability of Malliavin derivatives and the well-definedness of the Ornstein-Uhlenbeck operator (see, e.g., <cit.>). For this reason, it is interesting to obtain (more or less explicit) integration by parts formulas involving different stochastic processes or probability measures on infinite-dimensional spaces (see, e.g., <cit.> for the problem of integration by parts formulas for generic measures on infinite-dimensional spaces, <cit.> for examples of integration by parts formulas for stochastic processes, <cit.> for integration by parts for the Bessel process applied to the study of SPDEs, and <cit.> for applications of integration by parts formulas to quantum field theory). In the Bismut variational approach to this calculus (see <cit.>), the integration by parts formula is derived from a fundamental (strong) quasi-invariance principle, which is based on the well-known invariance property of the Wiener law under a measure change via Girsanov theorem. 
Indeed, the Brownian motion, the filtration generated by it and the functionals of Brownian motion are at the core of Malliavin calculus. See <cit.> and references therein for a recent review on Bismut's way to Malliavin calculus. In spite of the fact that we are dealing with an infinite-dimensional differential calculus, in the Bismut approach the infinite dimensional feature is absorbed by the Girsanov formula, while the differential analysis has a strictly finite-dimensional character (see <cit.>). We remark that the finite-dimensional Malliavin calculus still retains great interest, firstly because both the finite-dimensional differential operators and the integration by parts formula allow us to understand how to pass to the infinite-dimensional limit, and secondly because the strategy mentioned could be useful for generalising the calculus itself in other directions. In this paper we propose a novel approach to finite-dimensional Malliavin-Bismut calculus starting from Lie's symmetries of a given autonomous SDE. The approach is based on the recent (stochastic) Lie's symmetries theory (see <cit.>) according to which a symmetry of a SDE is a (finite) stochastic transformation sending a solution process to another solution process to the same SDE (see <cit.> and <cit.> for an application of Lie symmetry methods for calculating an interesting class of expectations of Itô's diffusions starting from the deterministic Lie's symmetries of the associated PDE, see also <cit.> and references therein for recent developments). Such invariance property of the law of the solution process, which has a global character, allows us to directly formulate an analogue of the quasi-invariance principle of the Malliavin-Bismut calculus (Theorem <ref>). Moreover, considering the same stochastic transformation as a perturbation of the solution process and taking a functional of the diffusion process, we derive a finite-dimensional integration by parts formula from the quasi-invariance principle (see Theorem <ref>). Although the two results, the quasi-invariance principle and the integration by parts formula, are intimately related, the first has a very elementary formulation, while the last achievement requires much more calculations and some suitable technical conditions which are typical of a stochastic variational framework. The main tool in Lie's symmetries theory is provided by the determining equations, whose solutions are vector fields corresponding to infinitesimal symmetries. Given a (multi-component) infinitesimal symmetry of a SDE, the natural geometrical setting allows us to introduce the one parameter group corresponding to each components, which is nothing else that the one parameter flow associated with the symmetry. Since, in order to obtain the integration by parts formula, we have to take suitable derivatives with respect to the flow parameters we need first to establish the existence of such flow components and then to assume all necessary analytical conditions permitting us to take the derivatives with respect to the flow parameter inside the expectation. All technical tools are faced and discussed in a dedicate section. We remark that the notion of symmetry of a SDE, and in particular its invariance meaning, allows to notably simplify the analytical conditions (see Theorem <ref> in Section 6). 
The main advantage of our approach is that it easily includes, besides spatial and measure-change transformations, also time transformations (see <cit.> for the introduction of random time change and of rotational transformations for Brownian motion driven SDEs, and <cit.> and <cit.> for Lie's symmetries of more general SDEs). Indeed, while spatial transformations via diffeomorphisms give rise to vector fields with a direct and natural interpretation as generators of the associated flow, by introducing a richer geometrical setting both time and measure changes can be considered, even though their interpretation can sometimes appear more complex. In other words, we are able to provide a geometrical structure that unifies and puts on the same level all the main transformations of a given SDE, beyond the original change of measure transformation, and thus from this point of view our approach can be considered a generalization of the Bismut way to Malliavin calculus. Furthermore, within the large class of Girsanov measure transformations we privilege the subclass of quasi Doob changes of measure because of their useful properties (<cit.>,<cit.>). Indeed, stochastic models usually present a sufficient number of quasi Doob symmetries, and for quasi Doob transformations the Doleans-Dade exponential reduces to a simpler form without stochastic integrals. Finally, the identification of the one-parameter group associated with a given infinitesimal symmetry of the SDE allows us to obtain, as in Bismut's calculus, completely explicit representations of the objects arising in the integration by parts formula. We provide applications of our novel strategy to a one-dimensional Brownian motion, a generalization of the one-dimensional Ornstein-Uhlenbeck process, the one-dimensional Bessel process and, finally, a family of two-dimensional stochastic volatility models (including the Heston model). In order to help the reader, in the examples we show all calculations and discuss in detail all the analytical conditions permitting us to derive the related integration by parts formulas. The plan of the paper is the following. In order to sketch the classical framework, in Section 2 we recall Bismut's fundamental quasi-invariance principle and the associated and celebrated Clark-Ocone theorem. A brief introduction to Lie's symmetries analysis of SDEs is provided in Section 3, including spatial transformations, deterministic time changes and change of measure transformations of Girsanov and quasi Doob type. Section 4 contains an elementary and self-contained geometric description of the Lie group which arises from the set of stochastic transformations introduced in the previous section. A quasi-invariance principle based on our Lie's symmetry approach to diffusion processes and a (finite-dimensional) integration by parts formula are proposed in Section 5. In Section 6 we provide a general scheme which can be useful to verify the technical conditions of the main theorems. In the last section we discuss the fruitful applications of our strategy to several stochastic models. In Appendix A a generalization of our main result, Theorem <ref>, to the case of smooth cylindrical functionals of the process is provided. § A BRIEF RECALL OF THE FUNDAMENTAL INVARIANCE PRINCIPLE OF MALLIAVIN-BISMUT CALCULUS It is well known that from the elementary invariance principle for the Lebesgue integral one can easily deduce the integration by parts formula. 
Indeed, since ∀ϵ >0 by denoting with λ the usual Lebesgue measure we have ∫_-∞^+∞f(x)dλ(x)=∫_-∞^+∞f(x+ϵ)dλ(x), this implies that ∫_-∞^+∞f^'(x)dλ(x)=0. Setting f=gh we get ∫_-∞^+∞g^'(x) h(x)dλ(x)+∫_-∞^+∞g(x) h^'(x)dλ(x)=0. At the basis of Malliavin-Bismut calculus lies an integration by parts formula which generalizes the one for Lebesgue measure to the case of the Wiener measure (i.e. an integration by parts formula involving functionals of a Brownian motion). For the reader's convenience, following <cit.>, we briefly recall (formally) the fundamental invariance principle of the Bismut approach to Malliavin calculus, which will be adapted to the case of symmetric SDEs in the following sections. Let (Ω,ℱ,ℱ_t,ℙ) be a filtered probability space. We assume that Ω=C([0,𝒯]) with a fixed time horizon 𝒯 < ∞, ℙ is the Wiener measure and W a standard Wiener process. Let u be a predictable bounded and smooth stochastic process and set ϕ(t)=∫_0^tu_sds, ∀ t∈ [0,𝒯]. For ϵ∈ℝ_+ let us consider the following Doleans-Dade exponential process Z^ϵ:=exp(ϵ∫_0^𝒯u_s dW_s-ϵ^2/2∫_0^𝒯u_s^2ds). By Radon-Nykodim theorem we can introduce on (Ω,ℱ) a new measure ℚ^ϵ, equivalent to the original one on the Brownian natural filtration, such that dℚ^ϵ/dℙ=Z^ϵ. As well-known, Girsanov theorem implies that W-ϵϕ is a ℚ^ϵ- Wiener process. Since the expectation depends only on the law of the process, for all strongly differentiable function g on Ω we have 𝔼_ℙ[g(W)]=𝔼_ℚ^ϵ[g(W-ϵ∫_0^tu_sds)]. This formula states in particular that both the quantities do not depend on the parameter ϵ. By applying a Radon-Nykodim change of measure we obtain the well-known fundamental quasi-invariance principle 𝔼_ℙ[g(W)]=𝔼_ℙ[g(W-ϵ∫_0^tu_sds)exp(ϵ∫_0^𝒯u_s dW_s-ϵ^2/2∫_0^𝒯u_s^2ds)]. Considering the simplest case g(W):=g(W_𝒯), where g:ℝ→ℝ is a smooth bounded function, deriving, at least formally, both sides of equation (<ref>) with respect to ϵ and evaluating at ϵ=0 we get 0=-𝔼_ℙ[g'(W_𝒯)(∫_0^tu_sds)]+𝔼_ℙ[g(W_𝒯)(∫_0^𝒯u_s dW_s)], which gives an integration by parts formula for the function g(W_𝒯). If we take g as a generic function of the Brownian motion W, we can replace the derivatives g' with the Malliavin derivatives D_t(g) obtaining the general integration by parts formula of Malliavin calculus (see, e.g., <cit.>). The mentioned integration by parts formula permits to prove both the closability of Malliavin derivatives and, when applied to regular solutions to SDEs (i.e. under suitable conditions on the coefficients), the smoothness of their transition probabilities. A similar but slightly different approach to the same problem was proposed by Bismut in <cit.> permitting to generalize formula (<ref>) to a pathwise unique strong solution X of a Brownian motion driven SDE with smooth coefficients by considering the geometrical structure associated with the flow generated by the SDE itself. For convenience of the reader we sketch here the presentation given in (<cit.>) as a very short and formal introduction to Bismut work. Consider the equation of the flow φ_t(ω,x) associated with the one dimensional SDE with (smooth) coefficient μ,σ, i.e. the process solving the equation φ_t(ω,x)=x+∫_0^tμ(φ_s(ω,x))ds+∫_0^tσ(φ_s(ω,x))dW_s. If u_s is a predicable stochastic process which is square integrable with respect to time, we introduce the process η_t^u(x)=∂_xφ_t(ω,x)∫_0^tσ(φ_s(ω,x))/∂_xφ_s(ω,x)u_sds. 
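Since the integration by parts identity recalled above for g(W_𝒯) involves only the Brownian path, it can be checked directly by Monte Carlo simulation. The following short sketch (Python; the choices g = tanh, u_s = cos(s), the horizon and the sample sizes are illustrative assumptions, not taken from the paper) compares the two sides of E[g'(W_𝒯)∫_0^𝒯 u_s ds] = E[g(W_𝒯)∫_0^𝒯 u_s dW_s] for a deterministic, bounded u.

# Monte Carlo check of the Brownian integration by parts identity
#   E[ g'(W_T) * int_0^T u_s ds ] = E[ g(W_T) * int_0^T u_s dW_s ]
# for a deterministic u; all parameters below are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
T, n_steps, n_paths = 1.0, 500, 200_000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)[:-1]     # left endpoints of the grid

g  = np.tanh
dg = lambda x: 1.0 / np.cosh(x) ** 2          # derivative of tanh
u  = np.cos(t)                                # deterministic integrand

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W_T = dW.sum(axis=1)
int_u_ds = u.sum() * dt                       # int_0^T u_s ds (deterministic)
int_u_dW = dW @ u                             # Ito integral int_0^T u_s dW_s

lhs = np.mean(dg(W_T) * int_u_ds)
rhs = np.mean(g(W_T) * int_u_dW)
print(f"E[g'(W_T) int u ds] ~ {lhs:.4f}   E[g(W_T) int u dW] ~ {rhs:.4f}")

Up to Monte Carlo error, the two printed values agree, which is exactly the content of the finite-dimensional formula before any flow or symmetry structure is introduced.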
It is interesting to note that (φ^*)^-1(σ(x)∂_x)=σ(φ_s(ω,x))/∂_xφ_s(ω,x)∂_x, where (φ^*)^-1(ω,·) is the inverse of the push-forward associated with the diffeomorfism φ_t(ω,·). If X_t is a solution to the SDE (<ref>) (i.e. X_t=φ_t(·,X_0)), then, under suitable conditions on the regularity and bounds of the coefficients μ and σ, one gets the following generalization of equation (<ref>) involving the process X_t on the time horizon [0,𝒯] 0=-𝔼_ℙ[g'(X_𝒯)η^u_𝒯]+𝔼_ℙ[g(X_𝒯)(∫_0^𝒯u_s dW_s)]. In the special case where μ=0 and σ=1 formula (<ref>) coincides with formula (<ref>). Furthermore, the possibility of iterating (<ref>) for obtaining the higher order derivatives g^(n)(X_t) of g in terms of g itself, permits to prove both the existence of a smooth density for the random variable g(X_t) and a related martingale representation theorem (<cit.>). One of the main differences between formula (<ref>) and formula (<ref>) is that while equation (<ref>) involves only a local and explicit expression of the Brownian motion W_t (or, more generally, of the SDE dX_t=dW_t), in order to calculate the integration by parts formula (<ref>) we need to compute the highly non-local expression η^u_t(x) involving the derivatives of the flow ∂φ_t. This means that we have to know the solution to the SDE (<ref>) starting from different initial spatial points x, which generically cannot be expressed by a closed formula involving only the processes X_t, W_t and u_t. In the rest of the paper we show how, with a direct generalization of the reasoning used to obtain (<ref>), it is possible to simplify formula (<ref>) exploiting the symmetries of the SDE (μ,σ) and the related invariance properties. Our new direct proof of the integration by parts formula for solutions to Brownian motion driven SDEs allows us to avoid the (generically necessary) regularity and growth assumptions on the coefficients μ,σ, essentially by using the important invariance properties of the process. § A CLASS OF LIE'S SYMMETRIES OF A SDE In this section we recall a particular class of symmetries of a SDE which will be used in the following. For more general classes and for all the proofs see <cit.> and <cit.>. A very general settings including SDEs driven by semimartingales with jumps can be found in <cit.>. For simplicity, the Einstein summation convention on repeated indices is used throughout the paper. Let M, M' be open subsets of ℝ^n and let us fix a finite time horizon [0,𝒯] with 𝒯≥0. In this paper we consider the following weak solutions to the class of autonomous SDEs. We say that a SDE (μ,σ) admits a weak solution with initial distribution μ if there exists a probability space (Ω,ℱ, (ℱ_t)_t ∈ [0,𝒯]) ,ℙ) satisfying the usual conditions, and a couple of semimartingales (X,W) (taking values in ^n and ^m respectively) such that for all t ∈ [0,𝒯], i) W is an ℱ_t Brownian motion; ii) X_0 has law μ; iii) ∫_0^t|σσ^T(X_s)|(ω)+|μ(X_s)|(ω)ds < ∞ for almost every ω∈Ω; iv) X_t -X_0=∫_0^t μ(X_s)ds + ∫_0^t σ_α(X_s)dW^α_s If there exists a weak solution for each initial distribution μ, then we say that there is a weak solution to (<ref>). Let Ω^n ≡ C([0,𝒯], ^n) be the path-space and consider the filtration ℋ^n_t=σ( X_s, s ≤ t ). Suppose that (X,W) is a weak solution to (<ref>) starting at x∈^n and let ℙ_x be the law of X. We know that ℙ_x is a probability measure on (Ω^n,ℋ^n) and that ℙ_x solves the martingale problem for (σσ^T,μ) starting at x, according to the following definition. Let σ and μ be previsible path functionals. 
Then for x∈^n we say that the probability measure ℙ_x is a solution to the martingale problem for (σσ^T,μ) starting at x if the following conditions holds i) ℙ_x(X_0=x)=1; ii) for each g ∈ C^∞(^n), under ℙ_x, C^g_t=g(X_t)-g(X_0)-∫_0^tLg(X_s)ds is an ℋ^n_t- martingale, where L= 1/2(σσ^T)^ij∂_i ∂_j+ μ^i ∂_i. It is well-known that the martingale-problem formulation of a SDE is equivalent to the weak-solution formulation. §.§ Spatial transformations via diffeomorphisms The most natural transformation of a SDE is a diffeomorphism Φ:M → M' acting on the process component X. Denoting with ∇Φ : M →Mat(n,m) the Jacobian matrix (∇Φ)_j^i=∂_j Φ^i. and applying Itô formula (see, e.g., <cit.> Section 32 or <cit.> Chapter 4) we prove the following result. Given a diffeomorphism Φ : M → M', if the process (X,W) is solution to the SDE (μ,σ), then the process (Φ(X),W) is solution to the SDE (μ',σ') with μ' = L(Φ)∘Φ^-1 σ' =(∇Φ·σ)∘Φ^-1 where L is the infinitesimal generator given in (<ref>). §.§ Deterministic time changes In this paper for simplicity we consider only deterministic time change. For random time change in Lie's symmetry framework see <cit.> and <cit.>. We denote by f(t) a deterministic absolutely continuous time transformation given by t'=f(t)=∫_0^t f^' (s)ds, where η:=f^' is a smooth and strictly positive real function such that f(0)=0. Time change is a tranformation acting on both components of the solution process (X,W) which we denote with ℋ_η. If W' is the solution to dW'_t=√(η (t) )dW_t, then ℋ_η(W^') is again a Brownian motion. Let η :ℝ_+ →ℝ_+ be a smooth and strictly positive function. Let (X,W) be a solution to the SDE (μ,σ). Then the process (H_η(X),H_η(W')) is solution to the SDE (μ',σ') with μ' = 1/ημ σ' = 1/√(η)σ. Denoting by f^-1(t) the inverse of f(t), we can recall the analogue of the above proposition for the martingale problem formulation of a SDE. Let η :ℝ_+ →ℝ_+ be a smooth and strictly positive function. Then the time change f^-1(t) transforms the martingale problem for (a,μ) into the martingale problem for (a/η,μ/η,) (with a=σσ^T) §.§ Random measure changes In order to further enlarge the family of admissible transformations, we randomly change the probability measure under which the driven process is a Brownian motion by using Girsanov theorem. For the probabilistic theory of absolutely continuous predictable change of measure of Brownian motion the reader can see, e.g., <cit.> Section 38, while an extended treatment of the subject can be found in <cit.> Section 8.6. Given a previsible process u_t∈[0,𝒯], let us define the Doleans-Dade exponential process (Z_t)_t∈[0,𝒯] by setting Z_t=Z_t(θ):=exp{∫_0^tu_sdW_s -1/2∫_0^tu_s^2ds }. By Itô formula one has dZ_t=Z_tθ_tdW_t, which says that Z is a local martingale. Indeed, since Z_t is strictly positive, we have that Z_t is always a supermartingale. Moreover, being the Radon-Nikodym derivative in Girsanov theorem strictly positive, i.e. ℚ_ℱ_𝒯/ℙ_ℱ_𝒯=Z_𝒯>0, one can prove that the measure ℚ_ℱ_𝒯 is actually equivalent to ℙ_ℱ_𝒯. Instead of using the well known Novikov condition (see <cit.>, Chapter VIII, Proposition 1.14) to force the supermartingale Z_t to be a ℙ-(global) martingale, according to <cit.> we follow an alternative strategy, given by the next Lemma, which assumes the non explosiveness property both of the original SDE and of the transformed one, according to the following definition. Let μM→ℝ^n and σM→Mat(n,m) be two smooth functions. 
The SDE (μ,σ) is called non explosive if any solution (X,W) to (μ,σ) is defined for all times t≥0. A smooth vector field h is called non explosive for the non explosive SDE (μ,σ) if the SDE (μ+σ·h,σ) is a non explosive SDE. A positive smooth function η is called a non explosive time change for the non explosive SDE (μ,σ) if the SDE (μ/η,σ/√(η)) is non explosive. In the following we consider a smooth function h M→ℝ^m such that h(X) is a predictable and non explosive stochastic process for the continuous solution process X of the SDE (μ,σ). Let (μ,σ) be a non explosive SDE with a weak solution (X,W) and let hM→ℝ^n be a smooth non explosive vector field. Then the exponential supermartingale (Z_t)_t∈[0,𝒯] associated with u_t=h(X_t) is a ℙ-(global) martingale. The following theorem shows how this probability measure change works on a given SDE. Let (X,W) be a solution to the non explosive SDE (μ,σ) on the probability space (Ω,ℱ,ℙ) and let h be a smooth non explosive vector field for (μ,σ). Then (X,W') is a solution to the SDE (μ',σ')=(μ+σ·h,σ) on the probability space (Ω,ℱ,ℚ), where W'_t = -∫_0^t h(X_s)ds+W_t, .dℚ/dℙ|_ℱ_𝒯 = exp( ∫_0^𝒯 h_j(X_s)dW_s^j-1/2∫_0^𝒯∑_j=1^m(h_j(X_s))^2ds ). Let us suppose that ℙ_x solves the martingale problem for (a,μ) starting from x, that h:Ω→^n is a bounded previsible path functional and that a=σσ^T and μ are bounded. Then Z̃_̃t̃:=exp{∫_0^th(Xs)^TdM_s -1/2∫_0^th(X_s)a(X_s)h(X_s)ds } is a ℙ_x martingale with M_t=X_t-∫_0^tμ(X_s)ds. Defining a measure ℚ on (Ω, ℋ) by dℚ/dℙ|_ℋ_t on ℋ_t, then ℚ solves the martingale problem for (a,μ^') starting from x, where μ^'=μ+ah. We put together the previous stochastic transformations in the following natural way. Given two open subsets M and M' of ℝ^n, a diffeomorphism ΦM→M', a deterministic time change f(t) and a random change of measure hM→ℝ^m, we call T=(Φ,η,h) a (weak finite) stochastic transformation. In order to explicitly describe how the random transformation T acts on the solution process we give the following definition. Let T=(Φ,η,h) be a stochastic transformation. Let X be a continuous stochastic process taking values in M and W be an m-dimensional Brownian motion in the space (Ω,ℱ,ℙ) such that the pair (X,W) is a solution to the non explosive SDE (μ,σ). Given two smooth non explosive functions h and f for the same SDE, we can define the process P_T(X,W)=(P_T(X),P_T(W)), where P_T(X) takes values in M' and P_T(W) is a Brownian motion into the space (Ω,ℱ,ℚ). The process components are given by X'=P_T(X) =Φ(ℋ_f_-1), W'=P_T(W) =W̃, where W̃_t satisfies dW̃_t =√(η(t))(dW_t-h(X_t)dt), and dℚ/dℙ|_ℱ_𝒯 =exp(∫_0^𝒯h_j(X_s)dW_s^j-1/2∫_0^𝒯∑_j=1^m(h_j(X_s))^2ds). We call P_T(X,W) the transformed process of (X,W) with respect to T and we call the function P_T the process transformation associated with T. Given a stochastic transformation T=(Φ,η,h) and a solution (X,W) to the non explosive SDE (μ,σ) such that E_T(μ,σ) is non explosive, then P_T(X,W) is solution to the SDE E_T(μ,σ). § GEOMETRIC SETTING In this section we present the main ideas of Lie group theory underlying our set of stochastic transformations of autonomous SDEs. In this setting, the important feature of Lie Groups is that they have the structure of a differential manifold and, in particular, their elements can vary continuously. 
An r-parameters Lie Group is a group G with the structure of an r-dimensional manifold such that the group operation m:G× G → G, m(g_1,g_2)=g_1 · g_2, g_1,g_2 ∈ G and the inversion i:G → G, i(g)=g^-1, g∈ G are smooth functions between manifolds. Very often, as in our case, Lie groups naturally arise as transformations groups between manifolds. Let G=ℝ_+×ℝ^m be the group of traslations with a scaling factor, whose elements g=(η,h) can be identified with the matrices [ √(η) h; 0 1 ]. Given the trivial principal bundle πM×G→M with structure group G, we can define the action of G on M×G as R_g_2M×G →M×G (x,g_1) ↦(x,g_1·g_2). which leaves M invariant, with the standard product in G given by g_1·g_2=(η_1,h_1)·(η_2,h_2)=(η_1η_2,√(η_1)h_2+h_1). Considering a second trivial principal bundle π'M'×G→M', we say that a diffeomorphism FM×G→M'×G is an isomorphism if F preserves the structures of principal bundles of both M×G and M'×G, i.e. there exists a diffeomorphism ΦM→M' such that for any g∈G π'∘ F =Φ∘π, F∘R_g =R_g∘F. We notice that such an isomorphism is completely determined by its value on (x,e) (where e is the unit element of G). Therefore there is a natural identification between a stochastic transformation T=(Φ,η,h) and the isomorphism F_T such that F_T(x,e)=(Φ(x),g), where g=(η,h). The next Theorem provides the explicit form of the composition of two stochastic transformations in this group setting. Let T_1=(Φ_1,η_1,h_1) and T_2=(Φ_2,η_2,h_2) be two stochastic transformations. Then the composition T_2∘ T_1 is defined as the stochastic transformation T_2∘T_1=(Φ_2∘Φ_1,(η_2∘Φ_1)η_1,1/√(η_1)·(h_2∘Φ_1)+h_1), and the inverse transformation of T=(Φ,η,h) can be expressed as T^-1=(Φ^-1,(η∘Φ^-1)^-1,-1/√(η)(h∘Φ^-1)). Applying the first random change of measure together with the deterministic time transformation to the Brownian motion we get dW^'_t=√(η_1)(dW_t-h_1dt); now if we apply the second transformation, we obtain dW^''_t=√(η_2)(dW^'_t-h_2dt)=√(η̃)(dW_t-η̃dt), where η̃=η_1η_2 and h̃=h_1+h_2/√(η_1). Since after the SDE transformation by T_1 the state variable is Φ_1(X) and since T_2 acts on the new variable Φ_1(X), both h_2 and η_2 depend on the actual value of the process, that is h_2(Φ_1(X)) and η_2(Φ_1(X)). In the same way we can compute the explicit form of the inverse transformation. The following theorem shows the notable probabilistic counterpart in terms of SDE and process transformation of the above geometric identification. Let T_1 and T_2 be two stochastic transformations, let (μ,σ) be a non explosive SDE such that E_T_1(μ,σ) and E_T_2(E_T_1(μ,σ)) are non explosive and let (X,W) be a solution to the SDE (μ,σ) on the probability space (Ω,ℱ,ℙ). Then, on the probability space (Ω,ℱ,ℚ), we have P_T_2(P_T_1(X,W)) =P_T_2∘T_1(X,W), E_T_2(E_T_1(μ,σ)) =E_T_2∘T_1(μ,σ). Since the set of our stochastic transformations forms a group with respect to the composition ∘, one can introduce the one parameter group T_λ=(Φ_λ,η_λ,h_λ) and the corresponding infinitesimal (general) transformation V=(Y,τ,H) obtained in the usual way Y(x)= ∂_λ(Φ_λ(x))|_λ=0 τ(x)= ∂_λ(η_λ(x))|_λ=0 H(x)= ∂_λ(h_λ(x))|_λ=0, where Y is a vector field on M, τM→ℝ and HM→ℝ^m are smooth functions. If V is of the form V=(Y,0,,0) we call V a strong infinitesimal stochastic transformation. Let V=(Y,τ,H) be an infinitesimal stochastic transformation. 
Then we can reconstruct the one parameter group T_λ choosing Φ_λ, η_λ and h_λ as the one parameter solutions to the following system ∂_λ(Φ_λ(x))= Y(Φ_λ(x)) ∂_λ(η_λ(x))= τ(Φ_λ(x))η_λ(x) ∂_λ(h_λ(x))= 1/√(η_λ)H(Φ_λ(x)), with initial condition Φ_0=id_M, η_0=1 and h_0(x)=0. We prove only the equation for η and h. The equation for Φ_λ is true by definition of Y. By the composition law in Theorem <ref> and by the properties of the flow we have η_λ_1+λ_2(x)= η_λ_1(Φ_λ_2(x))η_λ_2(x) ∂_λ_1(η_λ_1+λ_2(x))= ∂_λ_1(η_λ_1(Φ_λ_2(x))η_λ_2(x)) ∂_λ_1(η_λ_1+λ_2(x))|_λ_1=0= τ (Φ_λ_2(x))η_λ_2(x), with initial condition η_0(x)=1 In the same way we obtains that h_λ satifies h_λ_1+λ_2(x)= 1/√(η_λ_2)h_λ_1(Φ_λ_2(x))+h_λ_2(x) ∂_λ_1(h_λ_1+λ_2(x))= 1/√(η_λ_2)∂_λ_1(h_λ_1(Φ_λ_2(x))) ∂_λ_1(h_λ_1+λ_2(x))|_λ_1=0= 1/√(η_λ_2)H(Φ_λ_2(x)), with initial condition h_0(x)=0. If the time change is deterministic, as in the present paper, there is a function m such that m=L(τ). Indeed in this case time transformation has a closed form: if f_λ solves the equation ∂_λ f_λ=m(f_λ),f_0=0 then ∫_0^tη_λ(x_s,s)ds=f_λ(X_t,t). Finally we introduce the relevant notion of symmetry of a SDE. A stochastic transformation T is a (finite weak) symmetry of a non explosive SDE (μ,σ) if, for every solution process (X,W), P_T(X,W) is a solution process to the same SDE. An infinitesimal stochastic transformation V generating a one parameter group T_λ is called an infinitesimal (general) symmetry of the non explosive SDE (μ,σ) if T_λ is a symmetry of (μ,σ). A stochastic transformation T=(Φ,η,h) is a symmetry of the non explosive SDE (μ,σ) if and only if (1/η[L(Φ)+∇Φ·σ·h])∘Φ^-1 =μ (1/√(η)∇Φ·σ)∘Φ^-1 =σ Next theorem provides the general determining equations satisfied by the infinitesimal symmetries of a SDE (μ, σ).
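Before stating the determining equations, the reconstruction result above can be illustrated numerically: given an infinitesimal stochastic transformation V = (Y, τ, H), the finite transformations are recovered by integrating ∂_λΦ_λ = Y(Φ_λ), ∂_λη_λ = τ(Φ_λ)η_λ, ∂_λ h_λ = H(Φ_λ)/√(η_λ) with the stated initial conditions. The sketch below (Python/SciPy) does this for the choice V = (x, 2, 0), which, by the transformation rules for (μ,σ) recalled in Section 3, generates the scaling transformation Φ_λ(x) = e^λ x, η_λ = e^{2λ}, h_λ = 0 of the driftless SDE dX_t = dW_t; the test case and all names are illustrative assumptions rather than an example taken from the paper.

# Numerical reconstruction of the one-parameter group T_lambda from an
# infinitesimal stochastic transformation V = (Y, tau, H), as in the
# reconstruction theorem above.  The particular V and all names are
# illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

Y   = lambda x: x          # spatial generator
tau = lambda x: 2.0        # time-change generator
H   = lambda x: 0.0        # measure-change generator

def rhs(lam, state):
    phi, eta, h = state
    return [Y(phi), tau(phi) * eta, H(phi) / np.sqrt(eta)]

x0, lam_max = 1.5, 0.8
sol = solve_ivp(rhs, (0.0, lam_max), [x0, 1.0, 0.0], rtol=1e-10, atol=1e-12)
phi, eta, h = sol.y[:, -1]
print(f"numerical  : Phi = {phi:.6f}, eta = {eta:.6f}, h = {h:.6f}")
print(f"closed form: Phi = {x0 * np.exp(lam_max):.6f}, "
      f"eta = {np.exp(2 * lam_max):.6f}, h = 0.000000")

The numerical output matches the closed-form one-parameter group, which is the flow one differentiates in λ when deriving the integration by parts formula in Section 5.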
http://arxiv.org/abs/2307.04014v2
20230708164616
Novel Pipeline for Diagnosing Acute Lymphoblastic Leukemia Sensitive to Related Biomarkers
[ "Amirhossein Askari-Farsangi", "Ali Sharifi-Zarchi", "Mohammad Hossein Rohban" ]
cs.CV
[ "cs.CV" ]
A. Askari Farsangi et al. Sharif University of Technology, Iran [email protected] {asharifi,rohban}@sharif.edu Novel Pipeline for Diagnosing Acute Lymphoblastic Leukemia Sensitive to Related Biomarkers Amirhossein Askari Farsangi1 Ali Sharifi Zarchi1 Mohammad Hossein Rohban1 August 12, 2023 ========================================================================================== Acute Lymphoblastic Leukemia (ALL) is one of the most common types of childhood blood cancer. The quick start of the treatment process is critical to saving the patient's life, and for this reason, early diagnosis of this disease is essential. Examining the blood smear images of these patients is one of the methods used by expert doctors to diagnose this disease. Deep learning-based methods have numerous applications in medical fields, as they have significantly advanced in recent years. ALL diagnosis is not an exception in this field, and several machine learning-based methods for this problem have been proposed. In previous methods, high diagnostic accuracy was reported, but our work showed that this alone is not sufficient, as it can lead to models taking shortcuts and not making meaningful decisions. This issue arises due to the small size of medical training datasets. To address this, we constrained our model to follow a pipeline inspired by experts' work. We also demonstrated that, since a judgement based on only one image is insufficient, redefining the problem as a multiple-instance learning problem is necessary for achieving a practical result. Our model is the first to provide a solution to this problem in a multiple-instance learning setup. We introduced a novel pipeline for diagnosing ALL that approximates the process used by hematologists, is sensitive to disease biomarkers, and achieves an accuracy of 96.15%, an F1-score of 94.24%, a sensitivity of 97.56%, and a specificity of 90.91% on ALL IDB 1. Our method was further evaluated on an out-of-distribution dataset, which posed a challenging test and had acceptable performance. Notably, our model was trained on a relatively small dataset, highlighting the potential for our approach to be applied to other medical datasets with limited data availability. § INTRODUCTION Leukemia is a type of cancer that affects the body's blood-forming tissues, including the bone marrow and lymphatic system <cit.>. Based on whether the leukemia is acute or chronic and whether it is lymphoid or myeloid, four main types of leukemia can be considered: ALL, AML, CLL, and CML <cit.>. Diagnosing leukemia through the examination of blood smear images is a common method used by hematologists <cit.>. While additional tests may be necessary for a more complete understanding of the patient's condition, experts are able to determine the presence and type of leukemia based on the number and shape of different types of white blood cells <cit.>. This suggests that deep learning has great potential for developing computer models for diagnosing leukemia from related microscopic images. Among all the four types of leukemia, acute lymphoblastic leukemia (ALL) has special diagnostic significance because an early start to treatment can save a patient’s life. This significance grows when we consider that 75 percent of cases involve children under the age of 14 <cit.>. The detection of blast cells in the blood and bone marrow makes it a suitable target for diagnosis using microscopic images. 
Therefore, studying the diagnosis of ALL using deep learning models is of particular importance in order to improve the accuracy and speed of diagnosis for this common and serious disease. The performance of deep learning models is highly dependent on the size of the dataset used for training. In medical applications, obtaining large datasets can be a challenge, and many datasets are small in size. A popular ALL dataset is the ALL IDB provided by Scotti et al. <cit.>, which contains images of both ALL and normal patients. It is quite small in size. Another dataset, the Raabin dataset <cit.>, was recently introduced and contains a variety of data classes, but it has not yet been much explored. Several classifiers have been proposed for diagnosing leukemia from related microscopic images <cit.>. These classifiers can be categorized based on their target classes. Some classifiers have been designed to classify more than two classes, and they often incorporate the ALL IDB dataset as part of their training due to its importance for ALL diagnosis and its availability <cit.>. However, it's important to consider the issue of dataset bias <cit.> when combining datasets, but some methods may not have paid enough attention to this point. In contrast, other methods have focused on the two-class problem, specifically classifying ALL from the Normal class, and the ALL IDB dataset has been widely used for this purpose <cit.>. In the following, we focus only on the two-class classifiers that distinguish ALL from normal. We can categorize these classifiers based on their input type. Some of these classifiers perform on single-cell images such as those in the ALL IDB 2 dataset, which means they can only perform on processed data in the form of cropped images <cit.>. On the other hand, other classifiers accept images similar to what can be seen under microscopes, such as those in ALL IDB 1 <cit.>. In order to become practical models, classifiers of the first type only judge the image of single cells, while what is available are microscopic images containing a large number of white blood cells. Therefore, not only are object detection methods required in these cases, but it is also necessary to aggregate the results of all cells in a way to make a judgment about the patient. It seems that the second category of models is in better condition. These models do not need object detection methods, but the second problem somehow still exists for them. There is a possibility that in a patient with ALL, there are no signs of the disease in a single microscopic image and that different parts of the patient's blood sample must be examined to make an accurate diagnosis. This is exactly what expert doctors do in this situation. In other words, labels are weak in this case, and a multiple-instance learning setup is needed. Therefore, a third category of models that handle multiple images of the same patient should be considered, and our model belongs to this category of models. Although there are some examples of multiple-instance learning methods for other problems related to diagnosis from blood microscopic images, there are only a few methods in the literature for diagnosing leukemia. Therefore, our work aims to fill this gap and explore the potential of this approach for improving leukemia diagnosis results <cit.>. Finally, one of our model's most important strengths is its reliability and sensitivity to related biomarkers, and we achieved these properties by applying a special training method. 
We demonstrated that removing blast cells from microscopic images of patients with ALL made the model have difficulty diagnosing the patient as having ALL, whereas removing normal cells made the model accurately diagnose the patient as having ALL. Since our model evaluates patients based on multiple images, testing this property required a completely independent dataset, as ALL IDB's samples are single images. We utilized a subset of Raabin's dataset as an out-of-distribution test set. In general, testing deep learning models on out-of-distribution test sets is a challenging task, and most methods tend to fail during these tests <cit.>. However, our model achieved acceptable accuracy in this challenging setting. § TRAINING CONSIDERATIONS FOR SMALL MEDICAL DATASETS In medical applications of machine learning, in addition to common evaluation metrics such as accuracy and precision, a potential qualitative criterion for assessing model reliability is the similarity between the model's decision-making process and that of a human expert. Evaluating models based on this criterion can provide insight into their performance and trustworthiness. As an illustration of this approach, we conducted a reimplementation of the method proposed by Ahmed et al. <cit.> using their dataset and visualized the model's attention map with the GradCAM algorithm <cit.>. Through our analysis, we identified that the model makes decisions based on unexpected patterns in the input image, which we refer to as "shortcuts," and that these shortcuts do not have any significant medical meaning. This highlights a potential flaw in these models, specifically, the issue of overfitting. Overfitting can occur for a variety of reasons, including model complexity and dataset quality. Complex models are required for extracting high-level features from image data for proper processing; however, when the dataset is small in comparison to the model's complexity, the model's variance increases, resulting in overfitting. Another factor that can contribute to overfitting is when the training data is not clean, causing the model to learn shortcuts. To avoid this issue, it is crucial to use a clean dataset that minimizes the chances of spurious correlations. To address the potential causes of overfitting, two potential solutions are to increase the dataset size through augmentation methods and to manually clean the dataset. It should be noted that we have an implicit assumption that we aim to train a classifier that performs similarly to the human expert classifier. This means that, in the expert's opinion, applied augmentation should not change the label of the image. In our case, cell morphology is an important criterion for classifying a specific cell as normal or diseased. As a result, we are not allowed to use augmentations like shearing that change the cell's morphology, and we have to use augmentations like rotating and translating instead. However, we can see that the majority of the proposed methods have not paid enough attention to this point. Using these considerations, we discovered that the issues with shortcuts persisted and that we needed to find another solution. Two of these visualizations are shown in Fig. <ref>. In the following, we express our method to overcome this problem. § METHOD §.§ Pipeline We presented a pipeline that performs the decision-making process step by step. 
This approach allows us to observe the model's procedure for producing the final result and helps us design each step based on the actual process used by hematologists. Since the presence of blast cells plays a decisive role in the existence of this disease, specialists look for these cells among white blood cells when examining the patient's microscopic image and making a decision based on that. The first step in the computer simulation of this process should be a cell detector that detects white blood cells in the input image. The second step is to analyze each cell image using the criterion that is sensitive to whether a cell is a blast or not. In the third step, we must summarize the previous step's results and describe the patient's condition using a set of parameters. Finally, based on the generated report, we must determine the existence of this disease. Fig. <ref> depicts the overall layout of this pipeline. The following goes over each step in detail. §.§.§ Object Detection For the white blood cell detector, we used a pre-trained Faster RCNN network with a ResNet50 backbone <cit.>. We fine-tuned it using the ALL IDB 1 dataset, which was manually annotated. §.§.§ Feature Extraction To extract informative features from images, large networks with numerous training parameters are often required. However, training these networks from scratch on small datasets often leads to overfitting. A common solution is to use pre-trained networks on ImageNet, which are global feature extractors that produce rich feature vectors for each input image. In this study, we utilized a pre-trained AlexNet network with fixed weights as the global feature extractor <cit.>. To customize these feature vectors for our problem, we added a trainable 256-node, fully connected layer that takes these feature vectors as input. In our study, we refrained from fine-tuning the global feature extractor since the computationally intensive process of training the LSTM-based architectures does not permit simultaneous optimization of all weights. §.§.§ Profiling There are various ways to aggregate the extracted features from the cell images of a patient. In this work, we used an LSTM-based architecture for this purpose. As the LSTM network accepts each series length as its input, this model doesn't have a problem with the different numbers of white blood cell images from different patients. In addition, it allows us to analyze the set of microscopic images for each patient, increasing the model's reliability and accuracy. It is reasonable because hematologists do not just look at one part of a patient's blood sample; instead, they move the blood slide under the microscope and make decisions based on what they see at several points. We used the LSTM network with 256 internal nodes, which produces a vector of length 64 as a patient's feature vector. §.§.§ Final Classification To classify the patient's feature vector, we used only one fully connected layer with two nodes in the final classification. §.§ Dataset Our research utilized two different datasets: the ALL IDB and the Raabin dataset. The ALL IDB dataset is the most widely used dataset in the literature, consisting of two subsets. The first subset, ALL IDB 1, contains 108 normal or diseased blood microscopic images, with blast cell centers annotated in ALL cases. The second subset, ALL IDB 2, includes 260 images of single white blood cells labeled as normal or cancerous. 
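For concreteness, the profiling and classification stages described above can be summarized in the following minimal PyTorch-style sketch. It only illustrates the stated design (frozen AlexNet features, a trainable 256-node fully connected layer, an LSTM with 256 internal nodes, a 64-length patient vector, and a 2-node classifier); the module names, the input resolution, and the linear reduction used to obtain the 64-length vector are our own assumptions rather than the exact implementation.

import torch
import torch.nn as nn
from torchvision import models

class ALLPipelineSketch(nn.Module):
    """Hypothetical sketch: frozen AlexNet features -> 256-node FC -> LSTM(256)
    -> assumed projection to a 64-length patient vector -> 2-node classifier."""
    def __init__(self):
        super().__init__()
        alexnet = models.alexnet(weights="IMAGENET1K_V1")
        self.backbone, self.pool = alexnet.features, alexnet.avgpool
        for p in self.backbone.parameters():       # global feature extractor is kept fixed
            p.requires_grad = False
        self.fc = nn.Linear(256 * 6 * 6, 256)      # trainable 256-node layer
        self.lstm = nn.LSTM(input_size=256, hidden_size=256, batch_first=True)
        self.project = nn.Linear(256, 64)          # assumed reduction to the 64-length patient vector
        self.classifier = nn.Linear(64, 2)         # normal vs. ALL

    def forward(self, cell_series):
        # cell_series: (batch, series_len, 3, 224, 224) -- one series of cell crops per patient
        b, t = cell_series.shape[:2]
        x = cell_series.reshape(b * t, *cell_series.shape[2:])
        feats = torch.flatten(self.pool(self.backbone(x)), 1)   # per-cell AlexNet features
        feats = torch.relu(self.fc(feats)).reshape(b, t, 256)
        _, (h_n, _) = self.lstm(feats)                          # aggregate the whole series
        patient_vec = self.project(h_n[-1])                     # patient-level descriptor
        return self.classifier(patient_vec)                     # logits over {normal, ALL}

Training such a sketch would follow the two-stage procedure described in the Training subsection below (series of length one first, then series of length 15).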
In contrast to ALL IDB, the Raabin dataset comprises 938 single-cell images of normal white blood cells. §.§.§ Cell Detection Dataset Training the Faster RCNN network requires blood microscopic images with bounding boxes around white blood cells. Although ALL IDB 1 is a suitable option, it only provides annotations for blast cells, not normal ones. We therefore manually annotated the normal cells in these images so that the dataset could be used for this purpose. §.§.§ Generated Dataset for Training the LSTM Training LSTM networks requires inputs in the form of series of equal length. Furthermore, because LSTM networks are data-hungry, a large dataset of image series of the same length is needed for successful convergence. Since no such dataset exists for our problem, we created one, based on the following assumption: Assumption: A series of white blood cell images belongs to the ALL class if and only if it contains at least one blast cell. Based on this assumption, we generated training image series of equal length by randomly selecting appropriate cell images from ALL IDB 2 and single-cell images from the Raabin dataset. We varied the number of selected cells because the number of cells in each input image is expected to differ. To make the sequences the same size, we padded the selected images with the required number of empty images. Because LSTM networks are sensitive to the order of their inputs, while in our case the order of images is irrelevant, we shuffled each image series to reduce the LSTM's sensitivity to ordering. Fig. <ref> depicts an example of one of these image series. §.§ Training The proposed pipeline was trained in three stages. In the first stage, the Faster RCNN was trained on its dataset to become a white blood cell detector. The LSTM and the classifier were trained in the remaining two stages, explained below. §.§.§ Inducing sensitivity to blast cells To make the network attend to blast inputs, we first trained the LSTM and classifier on image series of length one. Under our dataset generation assumption, the classifier output should indicate cancer if and only if the input image is a blast cell. If this training stage succeeds, we expect the LSTM and classifier to extract, from the AlexNet features, information that correlates with whether the input cell is a blast. §.§.§ Training for analyzing image series In the final training stage, we optimized the network on generated image series of length 15. The goal of this stage is to teach the model to apply what it learned about blast cells in the previous stage to the analysis of a series of images.
Since the number of patients is small in this dataset, we decided to split all images of each patient and considered them as separate patients. We did this in a way that the total white blood cells of different patients are equal and called this constant number partition size. On a partition size of 50, our model attained an accuracy of 72.88% and an F1-score of 71.01%. The average accuracy and F1-score across partition sizes ranging from 20 to 100 were found to be 71.78% and 70.35%, respectively. It is important to note that testing on an out-of-distribution test set is a challenging task, and it is common for model performance to decrease significantly under such conditions. It is necessary to mention the outcome of the cell detector section here. The faster RCNN was trained on 85 percent of the ALL IDB 1 images and achieved a mean average precision (mAP) of 96.03% on the remaining 15 percent of ALL IDB 1 images. §.§ Sensitivity to Blast Cells Our special training method has led us to expect that our model will be sensitive to ALL biomarkers, particularly blast cells. To put this hypothesis to the test, we designed a test that involved removing blast cells from the image series of ALL patients to see if it would make it difficult for our model to identify this new sample as belonging to the ALL class. We also expected that removing normal cells from these images would improve the model's performance. To perform this test, we required the coordinates of blast and normal cells to be annotated in each dataset. While these annotations were available for the ALL IDB 1 dataset, the Raabin leukemia dataset did not have such annotations. Therefore, we trained a Faster RCNN on ALL IDB 1 data to detect blast and normal cells. This object detector achieved a mean average precision of 93.41% on the test set split from ALL IDB 1. Our test on the ALL IDB 1 dataset showed that the model's recall under blast removal, normal removal, and no attack conditions was 43.90%, 97.56%, and 97.56%, respectively. The observed decrease of 53.66% in recall under blast removal indicates that our hypothesis was correct and that the model was indeed sensitive to blast cells. For this test on the Raabin dataset, we evaluated the model's performance on groups of images of varying sizes. For each patient, we selected as many images as the group size and used object detection to form three different cell series: one with only blast cells, one with only normal cells, and one with all cells. It should be noted that the length of these three cell series for each patient is not equal, and the sum of the first two is equal to the third. We plotted the recall of the model under blast removal, normal cell removal, and no attack conditions. Our results showed that removing blast cells reduced the model's recall by an average of 18.84% across all group sizes. Conversely, removing normal cells has a large effect on model recall and, on average, increases it by 18.69%. This finding validates our model's proposed hypothesis on an even out-of-distribution test set and demonstrates the significance of blast cells in our model's decision-making process. Fig. <ref> provides a visual representation of our findings. It should be noted that while we used the results of the trained detector on the ALL IDB 1 dataset to evaluate the model's performance on the Raabin leukemia dataset, there is no ground truth for the Raabin dataset. Therefore, an error in object detection may be present in the obtained results. 
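The blast-removal test just described can be expressed compactly. The sketch below assumes a per-patient data layout (cell crops plus the detector's blast/normal labels) and a series-level model such as the one sketched earlier; all names and the data layout are illustrative assumptions, not the exact evaluation code.

import torch

def recall_under_removal(model, all_patients, removal="none"):
    """Recall on ALL patients when 'blast' or 'normal' cells are removed from each
    patient's cell series before classification ('none' means no attack)."""
    model.eval()
    hits, total = 0, 0
    with torch.no_grad():
        for p in all_patients:                      # only patients whose ground truth is ALL
            keep = [i for i, kind in enumerate(p["cell_labels"])
                    if removal == "none" or kind != removal]
            total += 1
            if not keep:                            # nothing left to judge -> count as a miss
                continue
            series = p["cells"][keep].unsqueeze(0)  # (1, n_kept, 3, 224, 224)
            pred = model(series).argmax(dim=1).item()
            hits += int(pred == 1)                  # class 1 assumed to denote ALL
    return hits / max(total, 1)

# for mode in ("none", "blast", "normal"):
#     print(mode, recall_under_removal(model, all_patients, mode))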
§ ABLATION STUDY §.§ Cell numbers effect Because blast cells are not uniformly distributed throughout a blood smear, we hypothesize that increasing the number of white blood cells examined per patient will improve the model's ability to detect ALL. Fig. <ref> shows accuracy as a function of the number of images per patient and, as expected, we observe a positive correlation between the number of images and model performance. Since evaluations were performed on different group sizes, all metrics reported below are averages over group sizes ranging from 20 to 100. §.§ Different feature extractors Our method can employ several pre-trained feature extractors, including AlexNet, InceptionV3 <cit.>, ResNet50 <cit.>, VGG16 <cit.>, and ViT-base-patch-16 <cit.>. We compared these models, and the results are presented in Table <ref>. The table shows that the AlexNet feature extractor yields the best performance, so we use it in all subsequent experiments. §.§ LSTM effect The main reason for using an LSTM in our architecture was to give the model the ability to aggregate the per-cell results. One might suspect that, because the first training stage teaches the model to identify blast cells from single-cell images, the second stage merely generalizes this result by counting, i.e., that the LSTM layer acts as little more than a linear operator. Testing this hypothesis directly is not straightforward, but we designed the following test. For each group of patient images, we fed the extracted cells individually to the model to be labeled as blast or normal. Each patient in the test set was thus assigned two values: the number of normal cells and the number of blast cells. Using this data, we trained a perceptron to classify each patient. Since the training and test sets were identical for this perceptron, the resulting accuracy can be considered an ideal upper bound. The average accuracy of this ideal perceptron was 8.51 percent lower than the average accuracy of the LSTM model, so we conclude that the LSTM layer is capable of more than just linear operations. §.§ Pre-training effect It is also necessary to investigate the benefits of pre-training, the first stage of our training process. We hypothesize that pre-training accelerates convergence by providing a controlled setting in which the model can recognize the key biomarker of ALL, the blast cells, and learn to differentiate them from normal cells. To evaluate this hypothesis, we trained two models, one with pre-training and one without, keeping all other conditions identical. The model without pre-training had an average accuracy 2.05 percent lower than that of the pre-trained model, so pre-training appears to be a beneficial step that improves model accuracy. §.§ Impact of training series length To assess the impact of training series length on model performance, we trained several classifiers with series lengths ranging from 1 to 32. The results are shown in Fig. <ref>. The plot suggests an initial positive correlation between series length and model performance.
However, beyond a certain threshold, the impact of series length on performance diminishes, and longer series do not significantly improve the model's accuracy. § CONCLUSION In this work, we developed a machine learning-based model for diagnosing acute lymphoblastic leukemia from blood smear images. Because datasets in this field are small, training networks end-to-end leads the model to find decision shortcuts instead of medically meaningful patterns. To address this issue, we introduced a pipeline inspired by the hematologists' approach, consisting of four main steps: detecting white blood cells, analyzing each cell, aggregating the results, and making a decision. Compared to end-to-end training, this approach has several advantages. First and foremost, training is a search among all feasible classifiers, and if we want a classifier that behaves like a human expert, we must constrain this search space. Each data point acts as a constraint, which is why dataset size matters. Since we do not have access to large datasets in our problem, we applied the constraints in another way, by restricting the classifier architecture to the described pipeline. In addition, this approach allows us to monitor the performance of individual components and to find and fix possible faults. Another important element of training our pipeline was training the final classifier in two stages: the first stage is an auxiliary task that makes the classifier sensitive to the biomarkers of ALL, and in the second stage the model learns to generalize the knowledge acquired in the first stage. Finally, we showed that our model is indeed sensitive to ALL biomarkers. We also analyzed the impact of our design choices, such as the AlexNet feature extractor, the LSTM layer, the pre-training stage, and the length of the training series. Furthermore, we redefined the problem of acute lymphoblastic leukemia (ALL) diagnosis as a multiple-instance learning problem, which had not been done before; to support this formulation, we generated a suitable training dataset and evaluated our model on an out-of-distribution test set, achieving acceptable results. The model's sensitivity to staining appears to be its main weakness, and future work should focus on this issue, for example by training feature extractors that are less sensitive to staining.
http://arxiv.org/abs/2307.07305v1
20230714122957
Charged Lifshitz black holes from general covariance breaking
[ "D. C. Moreira", "A. S. Lemos", "F. A. Brito" ]
hep-th
[ "hep-th", "gr-qc" ]
http://arxiv.org/abs/2307.04326v1
20230710033943
Automotive Radar Mutual Interference Mitigation Based on Hough Transform in Time-Frequency Domain
[ "Yanbing Li", "Weichuan Zhang", "Lianying Ji" ]
eess.SP
[ "eess.SP" ]
Automotive Radar Mutual Interference Mitigation Based on Hough Transform in Time-Frequency Domain Yanbing Li, Member, IEEE, Weichuan Zhang, Member, IEEE, and Lianying Ji This work was supported by the Fundamental Research Funds for the Central Universities 2022RC008. Yanbing Li is with the School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, 100044, China (e-mail: [email protected]). Weichuan Zhang is with the Institute for Integrated and Intelligent Systems, Griffith University, QLD, Australia. (e-mail: [email protected]). Lianying Ji is with the Beijing Muniu Linghang Technology Company, Beijing, 100192, China (e-mail: [email protected]). August 12, 2023 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== With the development of autonomous driving technology, automotive radar has received unprecedented attention due to its day-and-night and all-weather working capability. It is worthwhile to note that more and more vehicles are equipped with automotive radars, resulting in mutual interference between radars. The interference reduces radar target detection performance, making perception information unreliable. In this paper, a novel interference mitigation method based on power-weighted Hough transform is proposed for solving the radar mutual interference and improving the safety of autonomous driving systems. Firstly, the frequency modulation characteristics of interference signals and target echo signals are analyzed, and differences between the two signals are introduced. Secondly, based on the straight line detection technique, the power of the mutual interference signal in time-frequency domain is accumulated, and the accurate position of the interference is located. Finally, the target echo is recovered by autoregressive model. Compared with existing state-of-the-art methods, the proposed method has the ability to retain more useful signals after the interference mitigation, and achieve better interference detection robustness under low signal-to-noise ratio conditions. Simulation experiments and real scenario experiments verify the effectiveness of the proposed method and show its superiority. Automotive radar, Hough transform, interference mitigation, millimeter-wave radar, time-frequency spectrogram. § INTRODUCTION Radar, as an environmental sensing technology, has been introduced in more and more civil fields such as automotive radar, traffic radar, and security radar. On one hand, this is due to the development of chip technology, especially millimeter-wave chip technology. These advances have made it possible to reduce radar design cost and difficulty, which allows radar manufacturers to iterate their products rapidly <cit.>. On the other hand, the trend of intelligence has led to an unprecedented emphasis on perception technology in many aspects of people’s life, which provides necessary and reliable perception data for the post-processing stage. 
One of the most representative civilian applications is automotive radar. From low level assisted driving to high level autonomous driving, radars are included as an important sensor in autonomous driving solutions <cit.>. It is well known that no single sensor has ability to acquire all desired information well in the real-world scenarios of all conditions. Along this way, multi-sensor fusion techniques are increasingly being used for autonomous driving. As one of the three mainstream sensors, i.e., cameras, radars, and lidars, radars have ability to day-and-night and all-weather work, which is not well demonstrated by the other sensors. Meanwhile, radars have advantage in the radial distance and velocity measurement of targets, which is complementary to the information of the other sensors. A typical autonomous driving solution is equipped with seven millimeter-wave radars in a vehicle, which contains one long-range radar for forward looking, two mid-range radars for both forward and rearward looking, and four short-range radars in the four corners for 360 degree coverage <cit.>. This configuration allows radar sensors on a single vehicle to radiate in all directions on roads. In this case, with the development of autonomous driving, the deployment rate of automotive radars will increase rapidly in the future. As a result, influence among radars become inevitable <cit.>. Interference among radars may lead to target detection degradation and increase the likelihood of target loss in severe cases, which is unacceptable for traffic safety <cit.>. Generally, there are two main categories of radar interference. One category is caused by radar devices interfering with each other, and the other category is spoofing attacks performed by jamming devices. The latter is similar to electronic warfare in military applications and is usually introduced in malicious attacking <cit.>. A research on the suppression of malicious jamming such as digital radio frequency memory (DRFM) jamming is discussed in <cit.>. Compared with spoofing attack jamming, the problem of mutual interference between radars is more common in practical scenarios, especially in high-density traffic flow scenarios. Many research analyzing mutual interference of automotive radars can be found in <cit.>. These sources discuss the occurrence probability of mutual interference between radars, calculate the theoretical value of interference power, and illustrate the interference signal in the time domain, the frequency domain and the time-frequency (TF) domain, respectively. These studies deepen our understanding of the mutual interference for automotive radars and indicate that the mutual interference will worsen signal quality and signal-to-noise ratio (SNR), thereby affecting the target detection ability of radars <cit.>. Methods used for solving the aforementioned issues can be categorized into two groups according to the degree of dependence on radar system architecture. The first group is coupled with the radar system, and its implementation usually requires specific software and hardware architectures. Approaches based on transmit waveforms such as orthogonal noise and phase-coded are proposed in <cit.>. Another waveform optimization approach is proposed in <cit.>. These methods suppress interference based on the special structure of waveforms. 
Digital beamforming methods based on radar antenna array structure are discussed in <cit.>, in which interference in specific directions can be suppressed by the directivity of a formed beam. Because interference sources and targets are usually in the same or close direction in a traffic scene, the digital beamforming methods face angle resolution challenges. Inspired by the biological behavior of bats, a heuristic frequency hopping technique is introduced in <cit.>. When interference occurs, a radar with higher frequency shifts its frequency upwards, while a radar with lower frequency shifts its frequency downwards. This strategy has a higher success rate for interference mitigation than random hopping way. Alternately, radar and communication cooperation is employed for solving mutual interference <cit.>. A distributed network protocol that enables the radar and communication systems to work together is designed, then the avoidance of mutual interference among radars can be achieved due to information sharing. The above-mentioned methods can realize interference mitigation by designing specific system functions, which achieves good effect in designated situations. However, these methods require constraints on radar system design, thereby increasing the development cost and difficulty of radar products. Another group of methods does not customize the radar software and hardware, but uses signal processing techniques, i.e., signal detection and reconstruction, for suppressing interference on the existing radar system architecture, which has good versatility in practice. In terms of the acquisition domain of interference information, these methods can generally be divided into time domain, frequency domain, and TF domain methods. An adaptive noise canceller that uses interference information in negative frequencies to cancel the interference in positive frequencies is proposed in <cit.>. This is a typical implementation of interference mitigation in the frequency domain. Besides frequency domain methods, most of current interference mitigation methods are implemented in the time domain and the TF domain. Zeroing or adding a raised cosine window for the disturbed part of a received signal is adopted in <cit.>. These two ways achieve the attenuation of interference power, yet lose useful signals in the overlapped part with the interference. Wavelet decomposition and denoising is used in <cit.> for removing the interference. Due to the decomposition characteristics of the wavelet transform, useful signals in the undisturbed components can be well retained. Signal reconstruction by autoregressive (AR) model is proposed in <cit.>, which has ability to extrapolate useful signals in the interfered part and retrieve more target information than the zeroing and the windowed methods, however, reconstruction quality will be degraded when a interfered segment is wide. Another signal reconstruction method named iterative method with adaptive thresholding (IMAT) is proposed in <cit.> for overcoming the signal gap introduce by zeroing. The IMAT method is a sparse reconstruction technique from main frequency components essentially. All the methods mentioned above obtain interference information from the time domain, and suppress the interference accordingly. More recently, a research in <cit.> shows that more structural information of interference can be observed in the TF domain. 
In this case, more differences between the target echo and the interference can be extracted in the TF domain than in the time domain. TF analysis of a received signal in a interference scenario is performed for locating interference time span region in <cit.>, followed by beat frequencies interpolation for recovering the target echo. Another TF analysis based method is introduced in <cit.>. Here the interference is located by a constant false alarm rate (CFAR) detector, followed by a reconstruction process by zeroing, amplitude correction, and Burg-based signal extrapolation, respectively. Experimental results demonstrate that the methods based on TF analysis are superior in interference mitigation performance to time domain methods. Although the existing TF domain methods <cit.> have shown superiority to the time domain methods <cit.> in performance, we still have to resolve whether the characteristic information of the interference in the TF domain is fully exploited. For instance, the CFAR based method <cit.> detects and suppresses interference in frequency slices along the TF spectrogram, without considering the time-frequency variation characteristics of the interference. In this case, interference detection is based on the ratio of the interference power at a certain point to the noise level. A good interference detection performance can be obtained under high interference-to-noise ratio (INR) conditions. However, when the interference power is weak, e.g., the interferer radar is far from the victim radar, the projection of the interference power onto each frequency slice may not be enough for supporting accurate interference detection in the TF domain. In this way, degraded interference mitigation performance may be occurred in low INR conditions for the CFAR based method. Based on aforementioned facts, our main question are: (1) Is there a joint time and frequency characteristic of the interference in the TF domain? And whether this time-frequency characteristic can be effectively extracted for detecting and mitigating the interference? (2) Can the INR be improved for enhancing the interference detection performance? Focusing on these two questions, our research demonstrates that the interference has obvious joint time and frequency structural characteristics on the TF plane, that is, it appears as a straight line with a large slope. In addition, inspired by the incoherent integration method in radar target detection <cit.>, the line structure characteristics of the interference can be used to accumulated the interference power in the TF domain, thus achieving good interference detection performance. In this paper, the mutual interference of frequency modulated continuous wave (FMCW) automotive radars based on the TF domain is discussed, and a robust interference detection and mitigation approach by power-weighted Hough transform is proposed. To the best of our knowledge, so far there is no research that considers the structure information in the TF domain to robustly detect and locate interference, especially in weak interference and low SNR conditions. Compared with the existing interference mitigation methods based on signal processing technology, the contributions of this paper are as follows: * The first mutual interference detection method for automotive radar in terms of structure information in the TF domain is proposed. 
By analyzing interference signals in a radar receiver, we conclude that the interference in baseband has a linear frequency modulation (LFM) characteristic, i.e., it behaves as a straight line in the TF domain. Based on this structural feature, the Hough transform is used to locate the accurate position of the interference in the TF domain. * For the first time, power accumulation is introduced into the problem of interference detection in the TF domain. The classical Hough transform is modified for the TF spectrogram of an FMCW radar signal, namely, intensity information is introduced into the Hough transform for power accumulation. After the interference power is accumulated in the Hough parameter space, the INR increases, hence improving the stability of the interference detection. * Compared with the interference mitigation methods based on the time domain, the proposed method has the ability to handle the case of multiple interference sources. Furthermore, the proposed method remains effective when the interference duty cycle is large. The rest of the paper is organized as follows. Section <ref> introduces the signal models of the FMCW radar signal and the mutual interference. Then an interference mitigation algorithm based on the power-weighted Hough transform is presented in Section <ref>. Numerical simulations and experimental results are shown and discussed in Sections <ref> and <ref>, respectively, to evaluate the interference mitigation performance of the proposed method. Finally, Section <ref> concludes this paper. § LINEAR FMCW SIGNAL MODEL IN RADAR MUTUAL INTERFERENCE CASES §.§ Linear FMCW Signal Model without Interference An LFM signal, also called a chirp signal, is the most common waveform used in FMCW radar systems in real applications <cit.>. Usually, a set of LFM signal sequences is transmitted from a radar antenna to sense the environment. The single transmitted LFM signal is s_t(t) = √(2P_t) cos[2π φ(t)] = √(2P_t) cos[2π (f_c + (1/2)k t) t], where f_c is the central carrier frequency, P_t is the transmitted power, and k is the chirp rate, which equals the ratio of the chirp sweep bandwidth B to the chirp sweep time T, i.e., k = B/T. The frequency of the transmitted signal is f_t(t) = dφ(t)/dt = f_c + k t. Thus the frequency modulation direction is defined as up-chirp when k>0 and down-chirp when k<0, respectively. An echo scattered by a target carries amplitude and Doppler information related to the target's radar cross section (RCS) and velocity, respectively. For a single-target scenario, the power of the target echo under free-space attenuation is P_e = P_t G^2 λ^2 σ / ((4π)^3 R^4), where λ is the wavelength of the transmitted signal, G is the antenna gain on the line of sight (LOS), σ is the target RCS representing the ability to scatter the power of electromagnetic waves, and R is the distance between the radar and the target on the LOS. The target distance causes a delay between the target echo and the radar reference signal, which is τ = 2(R + v t)/c, where c is the speed of light and v is the relative velocity between the target and the radar on the LOS, which causes the Doppler frequency shift. From (<ref>) and (<ref>), the echo from one target is s_e(t) = √(2P_e) cos[2π φ(t - τ)]. When there are multiple targets, the echo signal is the superposition of their individual echoes. §.§ Linear FMCW Signal Model with Interference When there is interference, the target echo and the interference are superimposed and then received by the receiver antenna.
For a single-interference scenario, without loss of generality, assume that the interferer radar has the same radio frequency (RF) and antenna specifications as the victim radar, i.e., the two radars have the same transmitted power P_t, wavelength λ, and antenna gain G. The interference power at the receiver of the victim radar is then P_i = P_t G^2 λ^2 / ((4π)^2 R_i^2), where R_i is the distance between the interferer radar and the victim radar on the LOS. It is worth noting that R_i equals R when the interferer radar is installed on the target. Accordingly, the interference signal is s_i(t) = √(2P_i) cos[2π φ_i(t - τ_i)] = √(2P_i) cos[2π (f_ci(t - τ_i) + (1/2)k_i(t - τ_i)^2)], where f_ci and k_i are the central carrier frequency and the chirp rate of the interferer radar, respectively, and τ_i is the time delay between the interference and the reference signal. When there are multiple interfering radars, the total interference signal is the superposition of the individual interference terms represented in (<ref>). According to (<ref>) and (<ref>), the signal-to-interference ratio (SIR) at the victim radar receiver is SIR = P_e/P_i = σ R_i^2 / (4π R^4). From (<ref>) and (<ref>), the total signal received by the radar receiver is s_r(t) = s_e(t) + s_i(t) + g(t), where g(t) is the receiver noise. Dechirp processing of the received signal is achieved by using a low noise amplifier (LNA) and mixing with the reference signal. From (<ref>), (<ref>), (<ref>), and (<ref>), the beat-frequency signal in baseband can be derived as (<ref>), where φ_b and φ_bi are the constant phase terms. Accordingly, the beat frequency introduced by the target is <cit.> f_b = k τ, and the beat frequency introduced by the interference is f_bi = f_c - f_ci + k_i τ_i + (1/2)(k - k_i) t, which is an LFM signal. Substituting (<ref>) and (<ref>) into (<ref>), the beat-frequency signal can be rewritten as s_b(t) = A_b cos(2π f_b t + φ_b) + A_bi cos(2π f_bi t + φ_bi) + g(t), where A_b = 2√(P_t P_e) and A_bi = 2√(P_t P_i) are the amplitudes of the beat-frequency components of the target and the interference, respectively. Thus, the total beat-frequency signal consists of three parts: the target, the interference, and the noise. After the dechirp processing, the beat-frequency signal is filtered by a low-pass filter (LPF), whose function is to prevent signal aliasing during subsequent analog-to-digital sampling by an analog-to-digital converter (ADC). Then three fast Fourier transform (FFT) processes, i.e., range FFT, Doppler FFT, and spatial FFT, are applied to the digital signal to estimate the distance, velocity, and angle of the target <cit.>. A schematic diagram of the FMCW radar system is shown in Fig. <ref>. § INTRODUCTION TO INTERFERENCE MITIGATION METHOD §.§ Signal Characteristics and Method Motivation Car detection in a typical mutual interference scenario is shown in Table <ref>. A car carrying an interferer radar is present at 100 m, and another interferer radar is present at a distance of 2000 m. In this case, for an ego radar, the SIRs between the car echo and the interference produced by the mounted radar and by the distant radar are -41 dB and -15 dB according to (<ref>), respectively. These SIR levels indicate that the interference power is greater than that of the target echo because of the one-way propagation effect shown in (<ref>) and (<ref>). As a result, an interferer radar may affect target detection even if it is far away from the ego radar. In addition, the TF features of the target echo and the interference before and after the dechirp processing are shown in Fig. <ref>.
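The quoted SIR levels follow directly from the power budget above. The short Python check below reproduces them; the car RCS of 10 m^2 is an assumed value chosen as a typical car RCS (it is not stated explicitly here), and the function name is ours.

import math

def fmcw_sir_db(r_target_m, r_interferer_m, rcs_m2):
    """SIR = sigma * R_i^2 / (4 * pi * R^4): two-way echo attenuation versus
    one-way interference attenuation at the victim receiver."""
    sir = rcs_m2 * r_interferer_m**2 / (4 * math.pi * r_target_m**4)
    return 10 * math.log10(sir)

print(fmcw_sir_db(100.0, 100.0, 10.0))    # radar mounted on the car at 100 m -> about -41 dB
print(fmcw_sir_db(100.0, 2000.0, 10.0))   # interferer radar 2000 m away      -> about -15 dB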
As a result of the dechirp processing, it can be seen from (<ref>), (<ref>), and (<ref>) that the target echo consists of a single-frequency signal, while the interference shows the characteristics of an LFM signal. After low-pass filtering, only signals in the passband, represented by the yellow area in Fig. <ref>, are retained. In this case, the target echo exists over the entire time domain as a single beat-frequency signal if its beat frequency is smaller than the cut-off frequency of the LPF. However, since the LFM sweep range of the interference is greater than the LPF passband, the interference is truncated by the LPF, which makes it exhibit a finite extent in time, as shown in the second row of Fig. <ref>. In summary, the target echo and the interference have the following characteristics in the automotive radar mutual interference case: * The target echo in baseband is a single-frequency signal, which appears as a straight line parallel to the time axis in the TF domain. * The interference in baseband is an LFM, time-limited signal, which appears as a straight line with a large slope in the TF domain. * The interference power is usually greater than that of the target echo due to the difference in propagation paths. This indicates that an automotive radar may suffer interference from other radars within a range of kilometers. Consequently, the dynamic range of the interference power is large, i.e., both strong and weak interference exist in the received signal of the victim radar. As this analysis shows, the large dynamic range of the interference power in practical scenarios makes interference detection difficult. Existing interference mitigation methods, such as the wavelet and CFAR based methods, all rely on the interference power being larger than that of the target echo for interference detection. However, their detection performance decreases as the interference power decreases. Inspired by noncoherent integration processing in radar target detection applications <cit.>, interference detection can instead be performed by exploiting the line feature of the interference in the TF domain and accumulating the interference power. Based on this motivation, we propose a Hough transform based interference detection approach in a power accumulation sense. In this way, the interference detection performance can be improved by the accumulation effect on straight-line points in the Hough parameter space. §.§ Interference Detection and Localization Based on Power-Weighted Hough Transform The characteristics of the interference and the target echo can be obtained by TF analysis. The STFT is a widely used TF analysis technique due to its good linearity and computational simplicity <cit.>. In this paper, the TF analysis of the received signal is obtained using the STFT, whose discrete version implemented in practice is <cit.> S_r(ℓ, m) = ∑_n=-∞^∞ s_r(n) w(n - ℓD) e^(-j 2π mn/N), where w(n) is an analysis window (e.g., a Hamming window <cit.>), N is the number of frequency samples, D is the hop size between successive DFTs, and ℓ denotes the time index. Then, the power spectrogram of the received signal is P(ℓ, m) = |S_r(ℓ, m)|^2 = S_r(ℓ, m) · conj(S_r(ℓ, m)), where conj(·) is the conjugate operation. Once the power spectrogram P(ℓ, m) is regarded as a special image, it can be used as the input to the subsequent Hough transform.
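As an illustration, a minimal NumPy version of the discrete STFT power spectrogram defined above is given below. The window length, hop size, FFT size, and chirp length follow the values used later for the measured data (a 32-sample Hamming window, a hop of 4, a 128-point FFT, and 400 samples per chirp); the sampling rate and the toy beat/interference signals are illustrative assumptions only.

import numpy as np

def power_spectrogram(x, win_len=32, hop=4, n_fft=128):
    """P(l, m) = |S_r(l, m)|^2 with a sliding Hamming analysis window."""
    w = np.hamming(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    S = np.empty((n_frames, n_fft), dtype=complex)
    for l in range(n_frames):
        seg = w * x[l * hop: l * hop + win_len]     # w(n - lD) * s_r(n)
        S[l] = np.fft.fft(seg, n_fft)               # sum over n of ... e^{-j 2 pi m n / N}
    return np.abs(S) ** 2

# Toy baseband example: a constant-frequency target beat plus a short LFM-like
# interference burst (sampling rate and frequencies below are illustrative only).
fs, n = 10e6, 400
t = np.arange(n) / fs
target_beat = np.cos(2 * np.pi * 0.8e6 * t)
interference = np.zeros(n)
burst = slice(150, 250)
interference[burst] = 2 * np.cos(2 * np.pi * (0.2e6 * t[burst] + 0.5 * 1e11 * t[burst] ** 2))
P = power_spectrogram(target_beat + interference)   # rows: time index l, columns: frequency bin m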
As an effective geometric shape detection method <cit.>, <cit.>, the Hough transform has been widely used in many fields such as image processing <cit.> and lane detection based on radar images <cit.>. The classical Hough transform detects straight lines in binary images. It projects straight lines into a parameter space to accumulate the scores of points, and the straight lines can then be obtained by threshold detection in the parameter space. The line function in the Hough transform is defined as ρ = x cos(θ) + y sin(θ), where ρ and θ are the distance from the line to the origin and the angle of the line, respectively. The coordinate (x, y) describes a pixel position in the input image, while each point (ρ, θ) in the Hough parameter space represents a line in the image. If the line exists in the image, the score of the corresponding point in the parameter space can be measured as H(ρ, θ) = ∑_(x,y) δ(x, y), with δ(x, y) = 1 if (x, y) is on L, and 0 otherwise, where L denotes the line satisfying (<ref>). Unlike ordinary images, the intensity value of each pixel in the power spectrogram represents the distribution of signal power in the TF domain. From (<ref>), if a signal has a certain power and chirp characteristics at the same time, it appears as a straight line with the corresponding power in the power spectrogram P(ℓ, m). Due to this TF feature, accumulating power information in the Hough parameter space is used to improve the performance of the interference detection. The power-weighted score in the Hough parameter space is H_P(ρ, θ) = ∑_(ℓ, m)∈P P(ℓ, m) δ(ℓ, m). In addition, considering that the slope of the line corresponding to the target echo in the TF spectrogram is close to 0, only lines with large slopes are detected to ensure that they correspond to the interference. Once the Hough parameter matrix is obtained, the lines can be extracted by threshold detection. With some prior information, the threshold can be determined in a feasible way as follows. In real scenarios, we set a maximum RCS value σ_max for a target of interest and calculate the theoretical value of the target echo power according to (<ref>); the detection threshold is then determined as Thd = γ P_t G^2 λ^2 σ_max / ((4π)^3 R^4), where γ is a threshold factor that can be determined in a radar test stage. After obtaining the detection threshold, the lines corresponding to the interference are extracted if H_P(ρ, θ) > Thd, and the interference locations in the spectrogram are found according to (<ref>) as ℓ = ρ if sin(θ) = 0, and m = -ℓ cos(θ)/sin(θ) + ρ/sin(θ) otherwise. §.§ Interference Mitigation and Target Echo Recovery Based on the detected interference lines, the values at the interference locations are discarded to achieve interference suppression. Meanwhile, a signal recovery process can be realized by interpolating the discarded locations using neighborhood samples. For each frequency bin slice of the spectrogram, an AR model along the time axis of the spectrogram <cit.> is used to interpolate the signal. The AR model is defined as S_rec(ℓ, m) = ∑_τ_n=1^q α_τ_n S_r(ℓ - τ_n, m) + ε, where q is the number of neighboring samples, α is the prediction coefficient, and ε is the residual. The AR coefficients can be obtained by least squares. The prediction values are then obtained from the solved AR model, and the gaps at the corresponding interference locations are filled with the predicted signals to achieve the interference mitigation.
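A simplified NumPy sketch of the power-weighted accumulation and line selection described above is given below. It votes every spectrogram bin into the (ρ, θ) space weighted by its power and then keeps only peaks whose orientation is far from that of a constant-beat-frequency target; the angular grid, the exclusion margin, and the plain numeric threshold (standing in for the link-budget threshold Thd) are simplifying assumptions of this sketch.

import numpy as np

def power_weighted_hough(P, n_theta=180):
    """H_P(rho, theta): spectrogram power accumulated along candidate lines."""
    n_l, n_m = P.shape
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(n_l, n_m)))
    H = np.zeros((2 * diag + 1, n_theta))
    ll, mm = np.meshgrid(np.arange(n_l), np.arange(n_m), indexing="ij")
    ll, mm = ll.ravel(), mm.ravel()
    weights = P.ravel()
    for k, th in enumerate(thetas):
        rho = np.round(ll * np.cos(th) + mm * np.sin(th)).astype(int) + diag
        np.add.at(H[:, k], rho, weights)          # power-weighted voting
    return H, thetas, diag

def detect_interference_lines(P, threshold, margin_deg=10.0):
    """Return (rho, theta) of lines above the threshold, excluding orientations near
    theta = 90 deg, which correspond to lines parallel to the time axis (targets)."""
    H, thetas, diag = power_weighted_hough(P)
    peaks = []
    for r, k in zip(*np.nonzero(H > threshold)):
        if abs(np.rad2deg(thetas[k]) - 90.0) > margin_deg:
            peaks.append((r - diag, thetas[k]))
    return peaks

Mapping each detected (ρ, θ) back to time-frequency bins via the line equation and then extrapolating across the flagged bins with the per-slice AR model completes the mitigation step.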
After traversing all frequency slices with the interference mitigation process, an interference-free spectrogram is obtained. Finally, a reconstructed target echo without interference is obtained by applying the inverse STFT (ISTFT) to the interference-free spectrogram. The ISTFT is computed by taking the inverse Fourier transform of each time bin slice of the interference-free spectrogram and overlap-adding the inverted signals <cit.> as s_rec(n) = ∑_ℓ=-∞^∞ w_i(n - ℓD) (1/N) ∑_m=0^N-1 S_rec(ℓ, m) e^(j 2π mn/N), where w_i(n) is a synthesis window that is the inverse of the analysis window w(n). In summary, the proposed interference mitigation flow is shown in Algorithm <ref>. § NUMERICAL SIMULATION RESULTS §.§ Simulation Description and Evaluation Metrics Numerical simulation is one of the two approaches used to evaluate the performance of the proposed interference mitigation method. An FMCW radar signal flow simulation based on Fig. <ref> is carried out, which includes waveform generation, amplification and emission, free-space propagation, target backscattering and interference superposition, low-noise amplification, dechirp processing, low-pass filtering, and ADC sampling. Two sampling frequencies are used to simulate the analog and digital signals, respectively: a large analog frequency (AF), such as 2 GHz, is applied for the analog signal simulation, while an intermediate frequency (IF) is used for analog-to-digital sampling. The main radar parameter settings used in the simulation are shown in Table <ref>. In the simulation scenario, interferer radar one and interferer radar two are set at 30 m and 150 m from the victim radar, respectively. Two types of targets are set for evaluating the different methods, as follows: * A stationary target is presumed to be located at 150 m from the ego radar. It is mainly used for evaluating interference mitigation effects on a single chirp signal in Section <ref>, and the influence of different SNRs on interference mitigation performance in Section <ref>. * A moving target with a speed of 11 m/s is presumed to be located at 100 m for evaluating velocity measurement performance in Section <ref>. The performance of the proposed method is compared with seven state-of-the-art methods, comprising five time domain methods and two TF domain methods. Among them, the zeroing, raised cosine window (CW) <cit.>, time domain AR (T-AR) <cit.>, wavelet decomposition <cit.>, and IMAT <cit.> methods are implemented in the time domain, while the STFT beat-frequency interpolation by AR model (STFT-AR) and CFAR-Burg methods are implemented in the TF domain <cit.>. To ensure the comparability of the methods, a CFAR detector is used to detect interference positions for all the time domain methods. The interference mitigation performance in both the time and the frequency domains is evaluated using two time domain metrics and two frequency domain metrics. The first metric is the cosine similarity (CS), defined as CS = s_rec^* s_e / (‖s_rec‖_2 × ‖s_e‖_2), where s_e is the target echo, s_rec is the recovered version of s_e, and * denotes conjugate transposition. The CS measures the angle between two signal vectors and can be used to represent their correlation: the closer the CS is to 1, the more correlated the two vectors are. The second time domain metric is the error vector magnitude (EVM), defined as EVM = ‖s_rec - s_e‖_2 / ‖s_e‖_2. The EVM describes the difference between an ideal signal and a recovered signal; a small EVM value means an accurate reconstruction.
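The two time domain metrics translate directly into code; a minimal NumPy version is given below (the frequency domain PSLR and ISLR defined next can be implemented analogously on the range profile).

import numpy as np

def cosine_similarity(s_rec, s_e):
    """CS = s_rec^* s_e / (||s_rec||_2 ||s_e||_2); np.vdot conjugates its first argument."""
    return np.vdot(s_rec, s_e) / (np.linalg.norm(s_rec) * np.linalg.norm(s_e))

def error_vector_magnitude(s_rec, s_e):
    """EVM = ||s_rec - s_e||_2 / ||s_e||_2."""
    return np.linalg.norm(s_rec - s_e) / np.linalg.norm(s_e)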
The last two metrics, namely the peak sidelobe ratio (PSLR) and the integrated sidelobe ratio (ISLR) <cit.>, are employed to evaluate interference mitigation performance in the frequency domain via range profiles. The PSLR is defined as PSLR = 10 × log10( max_m∉[a,b] F^2(m) / max_m∈[a,b] F^2(m) ), where F is the spectrum of s_rec and the interval [a, b] bounds the main lobe of the spectrum. The PSLR describes the ratio of the power of the largest sidelobe to that of the main lobe. The ISLR is defined as ISLR = 10 × log10( (∑_m=1^a F^2(m) + ∑_m=b^M F^2(m)) / ∑_m=a^b F^2(m) ). The ISLR describes the ratio of the energy of all sidelobes to that of the main lobe. For both the PSLR and the ISLR, smaller values indicate lower sidelobe levels, which represents good interference mitigation performance in our application. §.§ Noise-Free Simulation Results A noise-free simulation is used first to evaluate the interference mitigation performance. In this case, only the target echo and the interference are present in the received signal, which allows us to quantitatively evaluate the effects of the different interference mitigation methods. The noise-free signals are shown in Fig. <ref>. The TF distributions of the target echo and the two types of interference in the analog domain are shown in Fig. <ref> (a); the frequencies of the interference and the target echo cross at different times. The interference signals that fall into the LPF passband near these intersections are retained and then received by the radar receiver. The outputs of the LPF and the ADC are shown in Fig. <ref> (b) and Fig. <ref> (c), respectively. The interference mitigation effects of the proposed method are shown in Fig. <ref>. The spectrogram of the received signal with interference is shown in Fig. <ref> (a). The power accumulation and the peak detection result in the Hough parameter space are demonstrated in Fig. <ref> (b). Three peaks corresponding to interference lines are detected, and the interference lines in the TF domain are well indicated, as shown in Fig. <ref> (c). Based on these locations, the interference is finely mitigated by the AR model reconstruction process. In this process, the order of the AR model is determined by the Akaike information criterion <cit.>. The interference mitigation result in the TF domain is shown in Fig. <ref> (d). Compared with the spectrogram of the received signal with interference shown in Fig. <ref> (a), the target echo is retained and the interference-contaminated areas are reconstructed effectively after the interference mitigation by the proposed method. The results of the eight methods on the four performance metrics are summarized in Table <ref>. It can be observed from Table <ref> that the four time domain methods (i.e., zeroing, CW, T-AR, and IMAT) exhibit large reconstruction errors, as shown by the EVM. Furthermore, their correlations with the target echo are poor, as shown by the CS. The reason is that the amount of interference information that can be extracted from the time domain is limited. Compared with these four time domain methods, the wavelet method uses the wavelet coefficients of different resolution layers to suppress interference. In this case, the interference power is decomposed into different components, and the useful signals in the components with less interference are preserved. However, the wavelet method still works in the time domain, and does not make full use of the TF characteristics.
Unlike the time domain methods, the STFT-AR, the CFAR-Burg, and the proposed methods perform interference mitigation in the TF domain and, therefore, have the ability to exploit more information for accurately locating the interference and retaining more useful signals. Thus the large CS, the small EVM and ISLR are obtained from the three TF domain methods in the noise-free experiment. The frequency spectrum, i.e., the range profile, of the recovered signal are shown in Fig. <ref>. All the eight methods can effectively suppress the interference. Among the five time domain methods, the wavelet method has the best sidelobe levels. Although the IMAT method has the lower sidelobe at the target location of 150m, its sidelobe level deteriorates more rapidly at the distant range. The three TF domain methods have better sidelobe levels than those of the time domain methods and are very close to the signal without interference. §.§ Simulation Results in Different SNR Levels In this experiment, the interference power is reduced to the same level as the target echo power. This is used for simulating a distant interference scenario and verifying the mitigation performance of the different tested methods for weak interference. Moreover, different SNR level simulations are implemented for evaluating the robust performance of the tested methods. Gaussian white noise with different SNR levels are added into the received signal, and Monte Carlo simulations are repeated for evaluating the statistical performance under specific SNR levels. There are a total of 256 independent noise adding experiments for each SNR level. After all the SNR experiments, the results of the four metrics versus the SNR are shown in Fig. <ref>. When the SNR is greater than -5dB, for the zeroing, the CW, the T-AR and the IMAT methods, the CS, the EVM, the PSLR and the ISLR of the recovered signal are about 0.8, 0.7, -18, and -3 respectively. It can be found from Fig. <ref> that the wavelet method have achieved better results than the other four time domain methods. Three TF domain methods, namely the STFT-AR, the CFAR-Burg and the proposed methods, achieve the best performance. In this case, the CS is greater than 0.95, the EVM is less than 0.25, the PSLR is less than -32, and the ISLR is less than -15. From these results, the TF domain methods are better than the time domain methods in performance because more information of the interference can be utilized and more accurate interference location can be detected. When the SNR is low, i.e., smaller than -15dB, the signal recovery performance of all the tested methods is degraded. However, the TF domain methods still maintained advantage over the time domain methods. As shown in Fig. <ref>, in the case of -15dB SNR, the performance of the TF domain methods is still better than that of the time domain methods at high SNR on all the four metrics. In addition, with the decrease of the SNRs, the proposed method has a superiority in the performance and robustness of the interference mitigation to the STFT-AR and the CFAR-Burg methods. For example, when the SNR is -25dB, the CS of the proposed method is about 16% higher than those of the STFT-AR and the CFAR-Burg methods, and achieves a smaller statistical standard deviation in the Monte Carlo simulations. Similar results are observed on the EVM, the PSLR, and the ISLR as shown in Fig. <ref>. The interference detection results for the STFT-AR, the CFAR-Burg, and the proposed methods in the TF domain under SNR of -5dB are shown in Fig. 
<ref>. It can be seen that the proposed method has better interference location detection accuracy than that of the STFT-AR and the CFAR-Burg methods. In low SNR conditions, the power-weighted Hough transform is equivalent to the power accumulation along a straight line in the TF domain. As a result, the INR is improved after Hough transform and the interference is detected robustly. Unlike the proposed method, there is no accumulation of the interference power to improve the INR in the CFAR-Burg method. Thus it encounters false alarms caused by noise in the low SNR conditions. Moreover, in the frequency slice where the target echo is located, a failure to correctly estimate the noise level lead to a missed detection of the interference for the CFAR-Burg method as shown in Fig. <ref> (c). These corner cases cause the interference mitigation degradation and further affect the signal recovery performance. As for the STFT-AR method, since there is no explicit interference detection and localization process in <cit.>, we manually labeled the interference locations for comparison as shown in Fig. <ref> (b). This is an ideal situation. Therefore, the performance in practice will be worse due to detection errors of the interference locations. Compared with the CFAR-Burg and the proposed methods, the STFT-AR method removes all the frequency bins in a certain time range in the TF domain. This operation causes a loss of useful signal information adjacent to the interference locations, and further lead to a decrease in interference mitigation performance. §.§ Moving Target Simulation Results A moving target simulation is used for evaluating the performance of target velocity measurement by chirp sequences before and after interference mitigation. In the simulation, the moving target locates at 100m with a velocity of 11m/s. A total of 256 chirps are set up as a range-Doppler (RD) processing unit. In the evaluation of the interference mitigation performance with multiple chirps, the STFT-AR method is not used because the interference locations vary in each chirp and can not be marked manually. In the experiment, the interference mitigation process was firstly performed by traversing each chirp to obtain interference-free chirp sequence data, and then the range FFT and the Doppler FFT processing mentioned in Section <ref> was realized on the interference-free chirp sequence data for obtaining RD responses. The RD responses corresponding to the tested methods are shown in Fig. <ref> (c) to Fig. <ref> (i). As a reference, the RD responses under the interference-free and the interference conditions are given in Fig. <ref> (a) and Fig. <ref> (b), respectively. The CFAR-Burg and the proposed method have better interference mitigation effects in the RD responses than those of the time domain methods, i.e. the zeroing, the CW, the T-AR, the wavelet, and the IMAT methods. The reason is that, on the one hand, the interference can be more accurately located in the TF domain, and on the other hand, the CFAR-Burg and the proposed methods utilize the uncontaminated signals to interpolate the contaminated gaps. Thus the two methods maintain the phase coherence of the chirp sequence, and obtain the high target SNR in the RD responses. Moreover, compared with the false alarms of the CFAR-Burg method as shown in Fig. 
<ref> (c), the proposed method utilizes the linear structure of the interference in the TF domain and accumulates the interference power in the Hough parameter space for robustly interference detecting, which is able to avoid the false alarms. Therefore, the proposed method has better performance in reconstructing the signal and obtains a higher SNR in the RD response. Clearer results are given in Fig. <ref>. In the RD responses, the range slice of the moving target is extracted for obtaining velocity profiles, then the velocity profiles of the tested methods are shown in Fig. <ref>. Similar to the RD response analysis, the proposed method obtains the best interference mitigation performance in the velocity profiles. Quantitative results are given in Table <ref>. Since the velocity profile is the output of the Doppler FFT processing, the frequency domain metrics, i.e., the PSLR and the ISLR are used to measure the interference mitigation performance for the tested methods. The PSLRs of the time domain methods are distributed between -30dB and -23dB. Among them, the zeroing, the CW and the T-AR methods have the PSLRs around -25dB. These three methods only perform interference detection in the time domain which has the least amount of interference information. The wavelet method decomposes the time domain signal into components and performs interference detection in the components which can retain some useful signal information. As a result, the wavelet method achieves the PSLR level of -29 dB, which is the best performance among the time domain methods. The IMAT method is a sparse reconstruction method for the time domain signal, and this sparse approach makes it to obtain the PSLR level of -27 dB in the velocity profile. Compared with the time domain methods, the TF domain methods significantly improve the PSLR level. The CFAR-Burg and the proposed methods obtain the PSLR levels of -47dB and -52dB, respectively. Since the false alarms occurred in the CFAR-Burg method are avoided in the proposed method, the proposed method obtains the closest PSLR level to the ground truth. Similar results are obtained in the ISLR metric as shown in Table <ref>. § EXPERIMENT RESULTS FOR REAL SCENARIO In this experiment, real sceario data are collected for verifying the effectiveness of the proposed method. Three 77GHz millimeter-wave radars, from https://www.muniutech.cn/vehicle?category_id=9Muniu Technology Co., Ltd., are used for data collection. Among these radars, one is used as a victim radar, the other two are used as interferer radars. Experimental data of the victim radar is recorded. The device positions in the scenario are shown in Fig. <ref> (a). The interferer radars were set on the left and the right sides relative to the LOS of the victim radar. The distance from the victim radar to the interferer radar one and the interferer radar two were 20m and 30m, respectively. Radar configurations are shown in Table <ref>. Considering the ease of implementation on actual signal processing chips, all the radars were set to have the same sweep time but different sweep bandwidth to generate LFM signals with different slopes. The victim radar was configured as the up-frequency modulation mode with the sweep bandwidth of 300MHz. Interferer radar one was configured as the down-frequency modulation mode with the sweep bandwidth of 300MHz, and interferer radar two was configured as the up-frequency modulation mode with the sweep bandwidth of 500MHz. 
The pulse repetition time (PRT) of the radars was set to be different to increase the probability of the mutual interference. For a single chirp, the number of sampling points is 400. A window length of 32 is used for the STFT and a step between the sliding windows is set to 4. A signal in each window is applied to 128-point FFT to obtain the TF spectrogram. For a chirp sequence, a total of 128 chirps are included as a coherent processing unit for a RD response. §.§ Stationary Target Experiment A corner reflector was placed at 20m in front of the victim radar to simulate a typical strong target. The time domain signal and the TF spectrogram of the received signal are shown in Fig. <ref>. It can be seen that two forms of interference related to the interferer radars are observed in the time domain as shown in Fig. <ref> (a). The interference with short duration is introduced by the interferer radar one, and the one with long duration is introduced by the interferer radar two. Fig. <ref> (b) shows the TF features of the received signal that includes LFM-like interference and the single frequency-like target echo. The range profile after interference mitigation for all the tested methods are shown in Fig. <ref>. Overall, the TF domain methods achieve better interference mitigation performance in experiment data than that of the time domain methods. The TF domain methods result in a lower noise floor level about -30dB near the target, where the time domain methods have the noise floor level about -25dB. Fig. <ref> shows the PSLR and the ISLR of the corner reflector in range profile, the quantitative results can be seen that the TF domain methods are superior to the time domain methods in both the PSLR and the ISLR, except the PSLR of the CFAR-Burg method. Unlike the PSLR, which reflects the side lobe level at a certain point, the ISLR reflects the average value of the side lobe level within a certain range, so it is more accurate for evaluating the interference suppression performance. Therefore, even though the PSLR of the CFAR-Burg method is higher than some time domain methods, it still achieved better performance in overall. For the TF domain methods, the proposed method achieves the best performance on the PSLR and the ISLR. The effects before and after interference localization and mitigation for the three TF domain methods are shown in Fig. <ref>. For the STFT-AR method, the interference location in this experiment is manually marked since no method for interference detection is given in the original literature <cit.>, so it achieves a better interference mitigation effect. However, the performance of the STFT-AR method will be lower than the results in this paper in practice, since the interference detection are not as accurate as manual marking. For the CFAR-Burg method, two factors affect the interference location detecting as shown in Fig. <ref> (c). One factor is that two adjacent interference signals in the same frequency slice will raise detection thresholds for each other during the CFAR detecting, causing interference missed detection at certain frequencies. The other factor is false alarms caused by the low SNR, which lead to the loss of useful information in the TF domain. Compared with the STFT-AR and the CFAR-Burg methods, the proposed method detects the interference location more accurately in the measured data due to the utilization of the structural information in the TF domain, therefore the best interference mitigation effect is achieved. 
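The TF-domain detector discussed above can be prototyped in a few lines: the power spectrogram is accumulated along candidate lines of the TF plane, and the strongest line marks the interference location. The sketch below is a simplified illustration on a synthetic dechirped signal (one target tone plus a short crossing-LFM burst); the signal parameters, the restriction of the searched angles, and all function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.signal import stft

def power_weighted_hough(P, thetas_deg):
    """Accumulate spectrogram power P (freq x time) along lines rho = t*cos(th) + f*sin(th)."""
    nf, nt = P.shape
    diag = int(np.ceil(np.hypot(nf, nt)))
    acc = np.zeros((2 * diag, len(thetas_deg)))
    ff, tt = np.meshgrid(np.arange(nf), np.arange(nt), indexing="ij")
    ff, tt, w = ff.ravel(), tt.ravel(), P.ravel()
    for k, th in enumerate(np.deg2rad(thetas_deg)):
        rho = np.round(tt * np.cos(th) + ff * np.sin(th)).astype(int) + diag
        np.add.at(acc[:, k], rho, w)          # power-weighted voting
    return acc, diag

# Synthetic dechirped signal: a 0.5 MHz target tone plus a strong, short burst
# left by an interfering chirp with a different slope (assumed toy values).
fs, n = 10e6, 400
t = np.arange(n) / fs
x = np.exp(2j * np.pi * 0.5e6 * t)
gate = np.abs(t - 20e-6) < 0.3e-6
x += 10 * np.exp(1j * np.pi * (-1.5e13) * (t - 20e-6) ** 2) * gate
x += 0.1 * (np.random.randn(n) + 1j * np.random.randn(n))

# STFT with the same window length (32), hop (4) and FFT size (128) as above.
_, _, S = stft(x, fs=fs, nperseg=32, noverlap=28, nfft=128, return_onesided=False)
P = np.abs(S) ** 2

# Near-horizontal lines correspond to target-like constant-frequency components,
# so only steep lines are searched here (a simplification of the paper's scheme).
thetas = np.r_[0:75, 106:180]
acc, diag = power_weighted_hough(P, thetas)
r, c = np.unravel_index(np.argmax(acc), acc.shape)
print("strongest TF line: rho=%d, theta=%d deg" % (r - diag, thetas[c]))
# TF bins close to the detected line are flagged as interference and later
# replaced by AR-model predictions along each frequency slice (not shown).
```

In this toy setting the accumulator peak falls on the short, steep interference line rather than on noise-only bins, which is the behaviour the power weighting is meant to provide at low SNR.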
§.§ Pedestrian Experiment A pedestrian walking back and forth at a range of 30m to 40m from the victim radar is used to evaluate the performance of the tested methods in the interference scenario as shown in Fig. <ref> (b). The RD responses are obtained for a coherent processing unit, i.e., the 128 chirps, with 400 time sampling points per chirp, by performing the range FFT and the Doppler FFT with Hamming window <cit.>. The processing flow of the RD responses is similar to the simulation implemented in Section <ref>. For the pedestrian, the RD responses obtained by the tested methods are shown in Fig. <ref>. In the pedestrian experiment, the presence of a strong target such as the corner reflector makes the difference in power between the target echo and the interference no longer significant, which increases the difficulty of interference detection and localization. In this case, the interference mitigation by the time domain methods, despite the improvement, is not sufficient to achieve the required SNR for pedestrian detecting. Therefore, the time domain methods can not provide an effective measurement of the pedestrian’s range and velocity in the RD response as shown in Fig. <ref> (b) to Fig. <ref> (f). For the CFAR-Burg method, the pedestrian just barely appeared in the RD response after interference mitigation due to the existence of false alarms and interference missed detection problems as shown in Fig. <ref> (g). The interference missed detection is a failure point of the CA-CFAR approach in interference-dense scenarios. When the locations of multiple interference are close to each other in time as shown in Fig. <ref> (c), it causes the CFAR detector to overestimate noise levels, which raises the detection threshold and lead to the missed detection. The same phenomenon of the interference missed detection can be seen in the original literature of the CFAR-Burg method [34]. For the proposed method, the interference can be better mitigated in the pedestrian experiment, resulting in the correct detection of the pedestrian’s range and velocity as shown in Fig. <ref> (h). §.§ Algorithmic Runtime Analysis The runtime results of the tested methods are evaluated by using the data from the corner reflector experiment in Section <ref>. For a chirp signal with interference, the interference mitigation process from the tested methods is applied and the corresponding runtime is recorded. The MATLAB version used in the experiment is R2021a and the computer configurations are AMD Ryzen 7 5800H CPU and 16GB DDR4 3200MHz RAM. The runtime results of the tested methods are shown in Table <ref>. Overall, the TF domain methods have longer runtime than that of the time domain methods because they expand an one-dimension time signal into a two-dimension TF spectrogram, and implement interference mitigation processes in the TF domain. For the STFT-AR method, since there is no interference detection and localization step, its runtime is mainly consumed in the STFT and the signal reconstruction process. For the CFAR-Burg and the proposed methods, due to the presence of the interference detection and localization steps, their runtime have a large increase compared with the STFT-AR method, which indicates the interference detection and localization in the TF domain is the most time-consuming parts of the TF methods. In addition, the Hough transform used in the proposed method detects lines by a search process in a two-dimensional parameter space and therefore has the longest algorithm running time. 
However, since the search grids of the Hough parameter space are independent of each other, parallel processing can be considered for reducing the runtime in practice. § CONCLUSIONS In this paper, the mutual interference of automotive radars in the TF domain is analyzed. Based on the linear characteristic of the interference in the TF domain, a power-weighted Hough transform interference detection approach is proposed, and AR-model-based prediction is then used for interference mitigation. Compared with the existing interference mitigation methods implemented in the time domain, the proposed method has the ability to locate the interference more accurately in the TF domain, and retains more useful signals in the interference mitigation process. Compared with the STFT-AR and the CFAR-Burg methods implemented in the TF domain, the proposed method accumulates interference power based on structural information for improving detection and localization performance. As a result, the target echo can be recovered more accurately and robustly under low SNR conditions.
Yanbing Li (M'22) received the M.S. and Ph.D. degrees in signal and information processing from Xidian University, Xi'an, China, in 2009 and 2013, respectively. He is now an associate professor with the School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China. His research interests include radar system design, radar signal processing, radar target recognition, and the applications of radar sensing techniques in autonomous driving, intelligent transportation and the internet of things.
Weichuan Zhang received the M.S. degree in signal and information processing from Southwest Jiaotong University, China, and the Ph.D. degree in signal and information processing from the National Lab of Radar Signal Processing, Xidian University, China. He is a research fellow at Griffith University, QLD, Australia. His research interests include computer vision, image analysis, and pattern recognition. He is a member of the IEEE.
Lianying Ji received the B.S. degree from Dalian Maritime University in 2004, and the Ph.D. degree from the Beijing Institute of Technology in 2009. He has been with the School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, since 2009. During 2010, he was a visiting researcher at the China-Singapore Institute of Digital Media, Singapore. He is now the CTO of Beijing Muniu Linghang Technology Company. His technical contributions have been in the area of both biomedical information processing and mmWave radar signal processing.
http://arxiv.org/abs/2307.03868v2
20230708003704
Automated Stability Analysis of Piecewise Affine Dynamics Using Vertices
[ "Pouya Samanipour", "Hasan A. Poonawala" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Automated Stability Analysis of Piecewise Affine Dynamics Using Vertices
Pouya Samanipour    Hasan A. Poonawala
=====================================================================================================================
This paper presents an automated algorithm to analyze the stability of piecewise affine (PWA) dynamical systems, motivated by their broad applications. We parametrize the Lyapunov function as a PWA function, with polytopic regions defined by the PWA dynamics. Using this parametrization, stability conditions can be expressed as linear constraints restricted to polytopes, so that the search for a Lyapunov function involves solving a linear program. However, a valid Lyapunov function might not be found given these polytopic regions. A natural response is to increase the size of the parametrization of the Lyapunov function by dividing regions and solving the new linear program. This paper proposes two new methods to divide each polytope into smaller ones. The first approach divides a polytope based on the sign of the derivative of the candidate Lyapunov function, while the second divides it based on the change in the vector field of the dynamical system. In addition, we propose using Delaunay triangulation to achieve automated division of regions and preserve the continuity of the Lyapunov function. Examples involving learned models and explicit MPC controllers demonstrate that the proposed method of dividing regions leads to valid Lyapunov functions with fewer regions than existing methods, reducing the computational time taken for stability analysis. § INTRODUCTION Piecewise affine (PWA) dynamical systems have gained popularity in robotics <cit.> and the automotive industry <cit.> due to their wide applications. PWA concepts are utilized in advanced controllers, including gain-scheduled flight control systems<cit.> and Takagi-Sugeno fuzzy systems<cit.>. Affine systems with control saturation can be expressed using PWA dynamics, enabling effective synthesis of controllers through explicit model predictive control (MPC)<cit.>. However, obtaining a Lyapunov function for stability guarantees with explicit MPC can be challenging. Alternatively, there is an increasing trend in using supervised machine learning methods for learning dynamics and controllers<cit.>. Neural networks (NN) with the rectified linear unit (ReLU) activation functions have been employed to convert closed-loop dynamics into PWA dynamics<cit.>. The stability of these methods, however, is not guaranteed, emphasizing the need to develop an automated approach to finding Lyapunov functions for learned models, including ReLU networks and explicit MPC. Sampling-based methods <cit.> are prevalent for learning Lyapunov functions. The Lyapunov function is learned from finite samples, and this function must meet the stability conditions at all states; verification is therefore a critical component of the analysis. Verification can be performed in an inexact manner using relaxed convex problems <cit.> or in an exact manner using Satisfiability Modulo Theories (SMT) and Mixed-Integer Programs (MIP)<cit.>. The exact verifier certifies the Lyapunov function or generates counterexamples violating the stability conditions. Counterexamples can be incorporated into training samples for iterative learning. However, the computational complexity of the verifier remains a challenge.
An alternative to the learning approach is to parameterize the Lyapunov function and solve for it via an optimization problem <cit.>. The Sum of Squares (SOS) method is employed to find the Lyapunov function for nonlinear dynamics <cit.>, but it can be computationally complex. A piecewise quadratic (PWQ) parameterization of the candidate Lyapunov function is proposed in <cit.>. However, these methods must deal with the conservatism of the S-procedure, and the results are limited to two-dimensional examples. Instead of relying on the PWQ Lyapunov function, <cit.> parameterized the Lyapunov function as a PWA function. An algorithm has been developed for finding a PWA Lyapunov function using partition refinement in <cit.>. A method for calculating a PWA Lyapunov function for conewise dynamics was proposed by <cit.>. The PWA dynamics and controller have been parameterized as ReLU networks in <cit.>. The Lyapunov function and the controller are found by parameterizing the Lyapunov conditions as quantifier-free constraints for a bilinear quadratic optimization problem<cit.>. Although the Lyapunov conditions for a PWA Lyapunov function can be expressed without conservatism, the PWQ Lyapunov function receives more attention in the literature. The refinement process in the context of Lyapunov stability analysis presents several challenges, such as preserving the continuity of the candidate Lyapunov function and dividing complex polytopes effectively. We propose the following contributions to address the challenges in the refinement and continuity of the Lyapunov function. Contributions The paper introduces two novel methods for dividing cells during the search for valid Lyapunov functions. The first method utilizes the derivative of the Lyapunov function as a criterion to divide a cell, while the second method analyzes the vector field of the dynamics to do so. By examining the behavior of the Lyapunov function derivative or vector field, these methods determine suitable locations for proposing new vertices that will define new cells, since we use the vertex representation for polytopes. Furthermore, the paper proposes using Delaunay triangulation to automate the refinement process for cells. The proposed refinement methods offer the advantage of finding valid Lyapunov functions with fewer refinements compared to existing techniques. The efficacy of the search procedure is demonstrated through non-trivial examples, where valid Lyapunov functions are successfully identified within reasonable computation times. Additionally, the paper evaluates the effectiveness of the proposed approach in determining the region of attraction (ROA) by comparing the results with other methods. The comparison showcases the capability of the proposed approach to identify the ROA using the Lyapunov functions. The contributions of this paper improve the refinement process, addressing challenges in the parameterization of the Lyapunov function. § PRELIMINARIES In this paper, we examine the stability analysis problem for dynamical systems described by piecewise affine functions as follows: ẋ = PWA(x), where x ∈ℝ^n is the state variable and the term PWA(x) denotes a piecewise affine function. We focus on continuous PWA functions with polytopic cells. The rest of this section formally describes PWA functions. *Notation An index for each element in the set S constitutes the index set I(S). The convex hull, the interior, the boundary, and the closure of the set S are denoted by conv(S), int(S), ∂ S, and S̄, respectively. The transpose of matrix A is A^T.
⟨·,·⟩ denotes the inner product, ∠(·,·) is the angle between two vectors, and |·|_2 is the standard L_2 norm. It should be noted that the symbol ≽ is the element-wise version of ≥. §.§ Partitions And Refinements In this paper, we define a partition 𝒫 as a collection of subsets {X_i}_i ∈ I(𝒫), where each X_i is a closed subset of ℝ^n and int(X_i)∩ int(X_j)=∅, ∀ i, j ∈ I(𝒫) and i≠ j. The domain of the partition, Dom(𝒫), is the union of all the cells in 𝒫. Given two partitions 𝒴 = {Y_i}_i ∈ I and 𝒫 = {X_j}_j ∈ J of a set S = Dom(𝒴) = Dom(𝒫), we say that 𝒫 is a refinement of 𝒴 if X_j ∩ Y_i ≠∅ implies that X_j ⊆ Y_i. We denote the set of all refinements of 𝒴 as Ref(𝒴)<cit.>. §.§ Piecewise Affine Functions We explicitly parameterize a piecewise affine function PWA(x) by a partition 𝒫 = {X_i}_i ∈ I(𝒫) and a collection of matrices 𝐀_𝒫 = {A_i}_i ∈ I(𝒫) and vectors 𝐚_𝒫 = {a_i}_i ∈ I(𝒫) such that PWA(x) = A_i x + a_i, if x ∈ X_i, where X_i = {x ∈ℝ^n : E_i x + e_i ≽ 0}. Note that a generic PWA function may not be continuous unless we appropriately constrain the parameters A_i, a_i, E_i, and e_i<cit.>. It is assumed that any PWA function in this paper with this explicit form meets such constraints and is always continuous. Additionally, we consider the origin to be the equilibrium, thus denoting index sets I_0 and I_1 for cells containing and not containing the origin respectively. Also, we assume that all the cells are bounded. Therefore, we can use the vertex representation for all cells. A vertex is a facet of dimension 0 for a cell <cit.>. Each cell of a partition can be represented using its vertices: X_i = conv(𝒱(X_i)), where 𝒱(X_i) represents the set of vertices of the cell X_i. § MAIN ALGORITHM This section presents an overview of the stability analysis algorithm, which aims to construct an optimization problem to discover the Lyapunov function. The algorithm consists of two main components: the formulation of an optimization problem to find a valid Lyapunov function and a refinement process to enhance the flexibility of the Lyapunov function. A detailed description of these components is provided in the subsequent section. For better comprehension, a pseudo-code representation of the algorithm is presented in Algorithm <ref>. The termination condition of the algorithm is determined by two criteria: either a valid Lyapunov function is found, or the optimization process exceeds the predefined timeout threshold of 3600 seconds. It should be emphasized that in the case of unstable systems, the algorithm needs to be manually terminated. § OPTIMIZATION BASED SEARCH FOR LYAPUNOV FUNCTION In this section, we first describe the general idea of the stability analysis and the Lyapunov function. In the next step, we parameterize the Lyapunov function as a PWA function. Then we present the stability condition for PWA dynamics with a candidate PWA Lyapunov function. In <ref>, we convert the stability analysis problem to a linear optimization problem. We construct the optimization problem to be always feasible; however, only a specific solution is accepted as a valid Lyapunov function. Furthermore, we propose new refinement approaches (<ref>) to increase the capacity of the candidate Lyapunov functions, facilitating the search for valid Lyapunov functions. §.§ Lyapunov function The Lyapunov stability theory is well known for its application to the analysis of nonlinear dynamical systems <cit.>. Assume that V:D→ℝ is a continuously differentiable function, and x=0 is the equilibrium point of equation (<ref>).
In this case, equation (<ref>) will be asymptotically stable if and only if V is strictly positive definite and strictly decreasing ∀ x ∈ D-{0}. §.§ Lyapunov function In the paper, we investigate the use of Lyapunov functions on a bounded partition that aligns with the dynamics (<ref>) structure. This assumption can be used to further reduce computation costs by taking advantage of the convexity property. Specifically, if all cells in the partition are bounded, an affine function is considered positive on a particular cell X_i if and only if it is positive on all vertices of X_i<cit.>. This observation allows for simplified analysis and computation of the Lyapunov function. Consider a candidate Lyapunov function such that: V(x)={[ p_i^Tx+q_i for i ∈ I_1; p_i^Tx for i ∈ I_0. ]. In the equation above, the function V(x) is continuous and differentiable in the interior of the cell. It is possible to calculate the derivative of the candidate Lyapunov function, along the dynamic, ẋ=f(x), in the interior of cell X-{0}: ℒ_f V = ⟨∇ V , f(x) ⟩ , where ∇ V is the gradient of V(x), and ℒ is the lie derivative. When V(x) is differentiable at x, let the local affine Lyapunov function be V(x) = p^T x + q, and the dynamics be ẋ = A x + a. The derivative of the Lyapunov function along the trajectories can be calculated as follows: V̇=p^T(Ax+a). Let {X_i}_i∈ I be a partition of a bounded subset of ℝ^n into convex polytopes with vertices v_k. * The Lyapunov function (<ref>) will be positive definite iff: p_i^T v_k+q_i>0 for i ∈ I_1, v_k ∈(X_i) p_i^T v_k>0 for i ∈ I_0, v_k ∈(X_i). * V̇, (<ref>), will be a negative definite function iff: p_i^T (A_i v_k+a_i) <0 for i ∈, v_k ∈(X_i). These results can be derived directly as a result of parameterizing the candidate Lyapunov function in the affine form. The last step is to force the Lyapunov function to be continuous. To achieve this goal, the candidate Lyapunov function (<ref>) must meet the following requirements. V_i(v_k)=V_j(v_k) , i≠ j ∈, v_k ∈(X_i)∩(X_j). A Lyapunov function (<ref>) in the partition is considered valid if there exist p_i and q_i satisfy (<ref>)-(<ref>) for every v_k ≠ 0. In this formulation (<ref>) guarantees that the Lyapunov function will be positive definite. Additionally, (<ref>) guarantees continuity. In this case, the Lyapunov function is Liptchitz continuous, but it is not differentiable at the boundary. As a result of the Lipchitz continuity of the Lyapunov function, we are able to use the Clarke generalized gradient and Clarke generalized derivative<cit.>. According to Clarke generalized gradient ∂ V(x) for the Lyapunov function (<ref>) can be described as: ∂ V(x)=conv({p_i:i∈ I(p),x∈ X_i}) The Clarke generalized derivative along F for the differential inclusion ẋ∈ F(x) is provided by <cit.> V̇_F={p^Tf:p∈∂ V(x),f∈ F(x)}. For points x≠0 if F is a singleton function then (<ref>) guarantees that V̇_F<0, ∀ p ∈∂ V(x). As shown by <cit.>, the maximum of the (<ref>) upper-bounded the decrease of the Lyapunov function along solutions of the dynamical systems. Therefore, we may conclude that the Lyapunov function is decreasing along all the trajectories of the dynamical systems. For more detail, please see <cit.>. The origin is assumed always to be defined as a vertex in _0. The assumption that the origin is always defined as a vertex of a cell ensures that we can always find a positive-definite Lyapunov function. Another assumption is that if a vertex v_k∈(X_i) and v_k∈ X_i∩ X_j, then v_k∈(X_j). 
This assumption is required to preserve the continuity of the Lyapunov function using (<ref>). Details can be found in <ref>. §.§ Optimization problem The constraints (<ref>)-(<ref>) on variables p_i and q_i from (<ref>) may be infeasible due to conditions (<ref>) associated with the decrease of the Lyapunov function along solutions. Slack variables are added to these constraints to ensure feasibility. Consequently, we can formulate the search process for the Lyapunov function as follows: min_ p_i, q_i,τ_i ∑_i=1^Nτ_i Subject to: p_i^T (A_i v_k+a_i)-τ_i <-ϵ_1 ∀ i ∈ I_1, v_k ∈(X_i) p_i^T A_i v_k-τ_i <-ϵ_1 ∀ i ∈ I_0, v_k ∈(X_i) p_i^T v_k+q_i>ϵ_2 ∀ i ∈ I_1, v_k ∈(X_i) p_i^Tv_k>ϵ_2 ∀ i ∈ I_0,v_k ∈(X_i) V_i(v_k)=V_j(v_k) ∀ v_k ∈(X_i)∩(X_j) , i≠ j τ_i≥ 0 ∀ i ∈ where τ_i is the slack variable associated to cell X_i, and ϵ_1,ϵ_2>0. By design, we can state the following result. The optimization problem in (<ref>) is always feasible. This result is by the construction of the optimization problem. The solution to this optimization problem yields a valid Lyapunov function if and only if all the slack variables are zero. If the cost function is non-zero, the Lyapunov function is non-decreasing at some vertices. In fact, no Lyapunov function associated with the current partition exists. It may be possible to refine the partition, meaning to divide regions within it, in order to increase the capacity of the Lyapunov function and then repeat the search using this higher-capacity function. In the following section, the refinement process is described in detail. §.§ Refinement A refinement of the current partition is intended to enhance the flexibility of the Lyapunov function search process. To achieve flexibility, a cell X_i with a nonzero slack variable will be divided into smaller sub-cells. In order to keep things simple, we assume that the refinement of X_i will result in the creation of two new subcells, X_i_1 and X_i_2. For each subcell, we can parameterize the Lyapunov function as V_i_1=p_i_1^Tx+q_i_1 and V_i_2=p_i_2^Tx+q_i_2. As a result, the candidate Lyapunov function for cell X_i has a higher capacity function that is more flexible. Furthermore, the new Lyapunov function has more parameters, p_i and q_i, as well as constraints. Increasing the number of parameters and constraints might increase the computational complexity for solving (<ref>) since the computational complexity of linear optimization with n parameters and accuracy parameter ϵ is O(n^3/4log(n/ϵ)) <cit.>. Therefore, to implement refinement, it is necessary to use an intelligent approach, since otherwise, the complexity of the computation may increase. We utilize a vertex representation for the refinement process to represent cells within a partition, which are convex polytopes. The process of refinement for cells involves adding a new vertex on the cell's boundary and then forming new sub-cells. For this section, we will begin by defining a few concepts and definitions that will be useful for describing these two steps. The first concept is the simplex region, a bounded region with the smallest possible number of vertices in ℝ^d. Polytope's faces of dimension one are called edges<cit.>. In cell X_i, we define the edges by the set (X_i), and each edge is represented by a pair of vertices (v_j,v_k), where v_j,v_k∈(X_i). The edges of convex polytopes can be obtained by using MILP as described in <cit.>. 
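Before turning to the two splitting rules, the linear program in (<ref>) itself can be prototyped directly from the vertex data. The sketch below uses cvxpy with an illustrative data layout and helper names; it is not the paper's Mosek-based implementation, and it assumes that cells in I_0 have a_i = 0 (which follows from continuity of the dynamics at the equilibrium), so the decrease conditions for I_0 and I_1 take the same form.

```python
import numpy as np
import cvxpy as cp

EPS1, EPS2 = 1e-4, 1e-4          # the tolerances used in the experiments

def search_pwa_lyapunov(cells):
    """One pass of the LP.  `cells` is a list of dicts with keys
    'A', 'a'   : local dynamics x_dot = A x + a,
    'verts'    : array whose rows are the cell vertices,
    'origin'   : True if the cell belongs to I_0.
    The candidate Lyapunov function is valid iff the returned optimum is 0."""
    n = cells[0]['A'].shape[0]
    p = [cp.Variable(n) for _ in cells]
    q = [cp.Variable() for _ in cells]
    tau = [cp.Variable(nonneg=True) for _ in cells]
    cons = []
    for i, c in enumerate(cells):
        if c['origin']:
            cons.append(q[i] == 0)                      # V_i = p_i^T x on I_0
        for v in c['verts']:
            if np.allclose(v, 0):
                continue                                # conditions imposed for v_k != 0
            cons.append(p[i] @ v + q[i] >= EPS2)        # positivity
            cons.append(p[i] @ (c['A'] @ v + c['a']) - tau[i] <= -EPS1)  # decrease
    # continuity: equal values on shared vertices
    for i, ci in enumerate(cells):
        for j in range(i + 1, len(cells)):
            for v in ci['verts']:
                if any(np.allclose(v, w) for w in cells[j]['verts']):
                    cons.append(p[i] @ v + q[i] == p[j] @ v + q[j])
    prob = cp.Problem(cp.Minimize(cp.sum(cp.hstack(tau))), cons)
    prob.solve()
    return prob.value, [pi.value for pi in p], [qi.value for qi in q], [ti.value for ti in tau]
```

In practice, the cells whose slack values τ_i remain nonzero after such a pass are exactly the ones handed to the refinement step described next.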
It is worth emphasizing that the edges containing the origin, where v_j=0 or v_k=0, are not taken into account in the set of edges (X_i). By making this assumption, we ensure that refinement will not be applied to edges containing the origin. Therefore, if X_i∈ I_0, its subcells will always contain the origin after refinement. For the cell X_i, with dynamic ẋ=A_ix+a_i and the candidate Lyapunov function V_i=p_i^Tx+q_i we can define the following sets and functions. * We can find the vector field and the derivative of the Lyapunov function at a vertex, v_j, using the following functions. (X_i,v_j)= A_iv_j+a_i, (X_i,v_j)= p_i^T(X_i,v_j), where (X_i,v_j)∈ℝ^n is the vector field of the local dynamic at the vertex v_j, and (X_i,v_j) is the derivative of the Lyapunov function at the specified vertex in X_i. * The vertices of the longest edge of a cell can be obtained using the following function: L_max(i)= _(v_j,v_k)∈(X_i) (| v_j-v_k |_2) * The following function can be used to capture changes in the sign of the derivative of a candidate Lyapunov function: (i)= 1 sgn((X_i,v_j)) ≥ 0, ∀ v_j ∈(X_i), -1 sgn((X_i,v_j)) ≤ 0, ∀ v_j ∈(X_i), 0 otherwise, where the sgn(x) is the standard sign function. This function generates zero whenever the sign of the derivative of the candidate Lyapunov function in the cell X_i changes. Otherwise, this function generates either 1 or -1 depending on the sign of the derivative of the candidate Lyapunov function within the cell X_i. * With (i) being 0, the following set may also be used to provide the vertices of the edges where the sign of the derivative of the Lyapunov function has changed. c_V(i)={(v_j,v_k) (i)=0, ∀ (v_j,v_k) ∈(X_i), (X_i,v_j)(X_i,v_k)<0}. There are multiple edges where the sign of the derivative of the Lyapunov function has changed if (i)=0. * The following equation can be used to determine the vertices for an edge with the largest variation in the derivative of a candidate Lyapunov function. ΔV̇_max(i)= {(v_j,v_k) _(v_j,v_k)∈(X_i)|(X_i,v_j)-(X_i,v_k)|}. * We aim to determine the edge along which the vector fields exhibit the greatest range of angle variations. Therefore, we use the following function to find the edge with the smallest cosine between the vector field at its vertices. cos_min(i)= {(v_j,v_k) (v_j,v_k)∈(X_i)min⟨(X_i,v_j),(X_i,v_k)⟩/|(X_i,v_j)|_2|(X_i,v_k)|_2} Now, we can delve into the refinement process. §.§.§ Finding new vertices The first step in the refinement process is to introduce new vertices on the boundary of cells in I_s={i ∈ I τ_i>0}. The new vertex for the cell X_i can be obtained using the following equation: v_new_i=α v_j+β v_k, which is a linear combination of two vertices of an edge. Based on the splitting approach which will be introduced in this section, α,β, v_j, and v_k in (<ref>) could be different. Here we consider three different approaches for finding the new vertices. Naive refinement The first algorithm is inspired from <cit.>. The original algorithm was described for simplex regions and restricted to 2-D problems. In order to make the comparison possible we generalize the method for all types of regions. The process of refinement for the cell X_i based on <cit.> is described in Algorithm <ref>. The naive algorithm adds a new vertex exclusively to the longest edge, denoted as L_max, of cell X_i that has the largest slack variable, as determined by (<ref>). This method creates sub-cells with the largest possible volume without considering the candidate Lyapunov function or local dynamics. 
Consequently, it may lead to unsatisfactory results. Selecting vertices randomly could increase computational complexity without necessarily improving the refinement process. Thus, it is crucial to choose new vertices intelligently. Lyapunov-based refinement To address the challenge with the naive refinement, a new approach is proposed, leveraging the candidate Lyapunov function to make more informed decisions regarding selecting new vertices. The basic principle behind this method is that for every cell X_i where i∈ I_s finding a set of points P(i)={v_new_i(X_i,v_new_i)=0, v_new_i∈∂ X_i}. Using these points, v_new_i∈ P(i), the cell X_i could be divided into the two sub-cells, X_i_1 and X_i_2, where (i_2)=-(i_1). In the case of (i)=0, we know that P(i)≠∅. Therefore, we can find these points using the following convex problem in cell X_i. max_α,β 0 s.t. α(X_i,v_j)+β(X_i,v_k)=0, α+β=1, 0≤α,β≤1, (v_j,v_k) ∈(X_i). An explanation of how to find i, v_j and v_k in (<ref>) and other details about finding a new vertex using Lyapunov-based refinement can be found in Algorithm <ref>. If (i)=1, then P(i)=∅, so we choose the new vertex at the edge obtained from ΔV̇_max(i). In contrast to the previous method that focused only on the cell with the largest slack variable, the Lyapunov-based refinement is now applied to all cells with nonzero slack variables, denoted as i ∈ I_s. As a result of this broader approach, each relevant cell will be refined based on its individual candidate Lyapunov function. However, it is important to note that the coefficient vector p_i used in the refinement process may change significantly in the next iteration. Therefore, this approach may not be suitable in all cases, as the optimization process in the subsequent steps can alter the candidate Lyapunov function. Vector field refinement To address the problem with the Lyapunov-based refinement, the search for new vertices should be conducted using a method that is not influenced by the optimization process in the subsequent steps. The proposed method leverages the vector field information of the dynamics, which remains unchanged during the optimization process. The underlying heuristic behind this method is that the direction or magnitude of the vector fields along an edge may undergo significant changes within a cell X_i where i ∈ I_s. Consequently, a higher-capacity PWA function may be required to represent the Lyapunov function within X_i accurately. As illustrated in Fig.<ref>, the vector field direction in a cell can exhibit substantial variations, such as a flip from v_1 to v_2. In such cases, a simple function may struggle to approximate the level set accurately. To mitigate this, the method is adding a new vertex, v_new_i, between (v_1,v_2) where ∠((X_i,v_1),(X_i,v_new_i))=∠((X_i,v_2),(X_i,v_new_i)). Consequently, after each refinement process, the greatest angle between the vector fields of an edge in cell X_i is divided in half. The process of finding a new vertex using the vector field refinement is outlined in Algorithm <ref>, which provides a detailed description of the method. Before moving on to the next step, storing the new vertices created by these algorithms in the following buffer is necessary. B={v_new_i∈ℝ^n i ∈ I_s}. Now we can proceed to the next step, which is forming sub-cells. §.§.§ Forming sub-cells In order to form sub-cells, Johannson<cit.> proposed remedies for 2-D systems; however, this method is limited to simplex cells. 
It was suggested that triangulation methods be used for non-simplex regions in <cit.>, but no specific method or implementation is presented. It has also been proposed in <cit.> to apply Delaunay triangulation to all cells; however, the results have been limited to 2-D examples. We apply Delaunay triangulation to overcome the challenges associated with forming sub-cells for non-simplex cells and cells in higher dimensions (n>2), which would be challenging to accomplish manually. The Delaunay triangulation of a set of points in ℝ^d is defined to be the triangulation such that the circumcircle of every triangle in the triangulation contains no point from the set in its interior. Such a unique triangulation exists for every point set in ℝ^d, and it is the dual of the Voronoi diagram. Moreover, the Delaunay triangulation will maximize the minimum angle in each triangle<cit.>. DT((X_i)) is the notation for implementing Delaunay triangulation using the vertices of the cell X_i. The process of implementing Delaunay triangulation for a single cell is illustrated in Fig.<ref>. Delaunay triangulation will also handle the continuity of the Lyapunov function if the partition is composed of multiple cells. To illustrate how continuity is preserved, let us consider the v_new_i as the new vertex obtained using (<ref>) for the cell X_i. If v_new_i∈ X_i∩ X_j, then v_new_i must also be considered as a new vertex for the cell X_j, and DT((X_i)∪ v_new_i) and DT((X_j)∪ v_new_i) should be implemented. Consequently, even after refinement, continuity would be guaranteed by (<ref>). Generally, in order to implement Delaunay triangulation within the current partition, we have to follow the following steps. * First, we must obtain the following set containing cells that required refinement. I_split= {i: X_i ∩ B≠∅, i ∈ I()}. * Then, we need to find the vertices located on the boundary of the cell X_i where i∈ I_split using the following set. 𝒱_new(i)= {v_new_j v_new_j=X_i∩ B, i∈ I_split}. * Then, we can form the new sub-cells using DT((X_i)∪𝒱_new(i)) for i ∈ I_split. The process of refinement based on the naive approach using Delaunay triangulation is shown in Fig. <ref>. As can be seen, the sub-cells are created just in the simplex cells. However, the Lyapunov-based and vector-field methods perform differently as shown in Fig.<ref>. and Fig.<ref>. respectively. § RESULTS The paper presents seven examples to demonstrate the search performance for a Lyapunov function using the algorithm described in Algorithm <ref>. The computations are implemented using the Mosek optimization package <cit.> and Python 3.9 on a computer with a 2.1 GHz processor and 8 GB RAM. During the computations, a tolerance of 10^-8 is used to determine if a number is nonzero. In all the examples, the values of ϵ_1 and ϵ_2 are set to 10^-4. These examples aim to showcase the effectiveness and efficiency of the proposed algorithm in finding valid Lyapunov functions within reasonable computation times. [4-D Example <cit.>] For this example, we will use the 4-D MPC example presented in <cit.> as follows: x_t+1= [ 0.4346 -0.2313 -0.6404 0.3405; -0.6731 0.1045 -0.0613 0.3400; -0.0568 0.7065 -0.086 0.0159; 0.3511 0.1404 0.2980 1.0416 ]x_t+ [ 0.4346,-0.6731,-0.0568,0.3511 ]u_t. It includes the same details as <cit.>, such as a state constraint of ‖ x ‖_∞≤ 4, an input constraint of ‖ u ‖_∞≤ 1, a prediction horizon of T=10, a stage cost of Q=10I and R=1. Explicit MPC produces a dynamic with 193 cells. 
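Splitting a single cell with Delaunay triangulation, as used throughout the refinement step, can be prototyped with scipy.spatial.Delaunay. The sketch below (with illustrative helper names, not the paper's code) splits a square cell so that the origin becomes a vertex, which mirrors the preprocessing described next for this 193-cell partition.

```python
import numpy as np
from scipy.spatial import Delaunay

def split_cell(verts, new_pts):
    """Refine one bounded cell: DT(V(X_i) U {new vertices}) -> simplex sub-cells."""
    pts = np.vstack([verts, np.atleast_2d(new_pts)])
    tri = Delaunay(pts)                      # Delaunay triangulation of the vertex set
    return [pts[s] for s in tri.simplices]   # each sub-cell given by its own vertices

# Example: a square cell whose interior contains the origin.
square = np.array([[-1.0, -1.0], [1.0, -1.0], [1.0, 1.0], [-1.0, 1.0]])
subcells = split_cell(square, np.zeros((1, 2)))
print(len(subcells), "simplex sub-cells share the origin as a vertex")   # -> 4
```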
To ensure that the origin is a vertex, we refined the cell with the origin on its interior first. Our next step is to convert the discrete-time dynamics into continuous-time dynamics with a sampling time t_s=0.01. Finally, We searched for the continuous Lyapunov function using Algorithm <ref> with all refinement techniques. The Algorithm <ref> timed out after 2000 seconds using the naive refinement after 31 iterations. Algorithm <ref> found the Lyapunov function in 1200 seconds using the Lyapunov-based refinement with 5874. With the vector field refinement, the Algorithm <ref> found the solution in 280 seconds by generating 3086 cells. In comparison with <cit.>, the Lyapunov function using vector-field refinement requires a shorter computational time. [4-D controllable canonical dynamic] Following is a simple 4-D example with stable canonical controllable dynamics with condition number 10 to illustrate the meaningful difference between the refinement methods. ẋ= [ 0 1 0 0; 0 0 1 0; 0 0 0 0; -24 -50 -35 -10 ]x, where ‖ x ‖_∞≤ 5 and the initial partition includes 16 simplex cells around the origin with the dynamic (<ref>). The search Algorithm <ref> found the valid Lyapunov function after 43 seconds with 1054 cells created as a result of vector field refinement, whereas Lyapunov-based refinement required 106 seconds with 2743 cells, and naive search required 1546 seconds with 6943 cells. [Path Following Wheeled Vehicle<cit.>] The following kinematic model is used to analyze the stability of a path following wheeled vehicle in <cit.>: ḋ_e=ν sin(θ_e), θ̇_e=ω-νκ(s) cos(θ_e)/1-d_eκ(s). In equation (<ref>), we have the state variables θ_e, which represents the angle error, and d_e, which represents the distance error. The control input is denoted as ω. In this study, we used a single-hidden layer ReLU with 50 neurons as described in <cit.> in order to identify the dynamic (<ref>) with the NN controller <cit.> in the region ‖ x ‖_∞≤ 0.8. Moreover, we used the vertex-based method along with vector field refinement to obtain the Lyapunov function. As can be seen in Fig. <ref>, a comparison was made between the ROA obtained by the proposed method and the ROA obtained using the NN Lyapunov function <cit.>. [Multi-agent consensus] The Hegselmann-Krause model is a widely studied model in the literature, which involves N autonomous agents with state variables ξ_i. Each agent's dynamics are given by the equation: ξ̇_̇i̇ = ∑_j=1^Nϕ(ξ_i,ξ_j)(ξ_j-ξ_i) where i ranges from 1 to N, and ϕ:[0,1]^2→{0,1} represents a weight function as defined in the reference <cit.>. The stability analysis results for this model are presented in Fig. <ref>. We observed that a valid Lyapunov function can be obtained without requiring any refinement. Therefore, the choice of different splitting approaches does not have any impact on this particular example. The details are provided in Table <ref>. [2-D example from <cit.>,<cit.>,<cit.>] This system has been presented in four different regions as follows: Z_1={x∈ℝ^2: -x_1+x_2≥ 0, x_1+x_2≥ 0} Z_2={x∈ℝ^2: -x_1+x_2≥ 0, -x_1-x_2≥ 0} Z_3={x∈ℝ^2: x_1-x_2≥ 0, -x_1-x_2≥ 0} Z_4={x∈ℝ^2: x_1-x_2≥ 0, x_1+x_2≥ 0} and the dynamics are as follows: Ω_p:ẋ={[ [ -0.1 1; -5 -0.1 ]x if x∈ Z_1 or x∈ Z_3; [ -0.1 5; -1 -0.1 ]x if x∈ Z_2 or x∈ Z_4. ]. The level sets and the vector fields are shown in Fig.<ref>. The Lyapunov function was obtained by refining the cells. In this example, all three refinement methods perform similarly in finding the Lyapunov function. 
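As a quick numerical sanity check of these dynamics (not part of the paper's toolchain), the switched vector field in (<ref>) can be rolled out directly from the matrices given above; the initial states, step size and horizon below are arbitrary choices.

```python
import numpy as np

A13 = np.array([[-0.1, 1.0], [-5.0, -0.1]])   # active on Z_1 and Z_3
A24 = np.array([[-0.1, 5.0], [-1.0, -0.1]])   # active on Z_2 and Z_4

def f(x):
    # Z_1 and Z_3 are exactly the cones where (-x1 + x2) and (x1 + x2) share a sign.
    same_sign = (-x[0] + x[1]) * (x[0] + x[1]) >= 0
    return (A13 if same_sign else A24) @ x

dt, steps = 1e-3, 20_000                      # 20 s forward-Euler rollout
for x0 in ([1.0, 1.0], [-2.0, 0.5]):
    x = np.array(x0)
    for _ in range(steps):
        x = x + dt * f(x)
    print(x0, "-> |x(20s)| =", round(float(np.linalg.norm(x)), 4))
```

The state norm of both sampled trajectories decreases, consistent with the Lyapunov function found for this example.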
The details about this example are presented in Table<ref>. The refinement process creates 128 cells in the partition. [Explicit model-predictive controller<cit.>] In this study, the stability of the following discrete time dynamic is investigated using explicit MPC, similar to <cit.>. x_t+1 = [ 1 1; 0 1 ]x_t+[ 1; 0.5 ]u_t As in <cit.>, the MPC problem has the same specification such as stage cost, actuator, and state constraints. We use the MPT3 toolbox <cit.> in Matlab to obtain an explicit controller. A sampling time of t_s=0.01s was used to obtain the continuous form of the dynamic (<ref>) with the explicit MPC controller. The dynamics generated by the explicit MPC have a cell where the origin is not on the vertices. As a result, we refine this cell with the origin as a new vertex, DT((X_i)∪ 0), and then start the Algorithm <ref>. Fig.<ref>. depicts the level sets of the Lyapunov function. The Lyapunov function was found by all three refinement algorithms within one second. The Lyapunov-based refinement and the vector-field refinement, however, produce a greater number of cells than the naive refinement. [Inverted Pendulum<cit.>] It is common in the literature to use an inverted pendulum as an example with the following state-space model: [ ẋ_1; ẋ_2 ] = [ x_2; - c/m x_2 - g l^2 sin(x_1) ]+[ 0; 1/ml^2 ]u where m = 0.15 kg, l = 0.5 m, c = 0.1 N s/rad, and g = 9.81 m/s^2 <cit.>. First, we used a single-hidden layer ReLU neural network consisting of 20 neurons in the region ‖ x ‖_∞≤ 4 to identify the uncontrolled dynamics. Subsequently, we designed a ReLU neural network controller as described in <cit.>. By incorporating the ReLU NN controller into the system, we were able to achieve stability. We searched for the Lyapunov function using Algorithm <ref> with the Vector-field refinement. The results are compared with Linear-quadratic regulator (LQR)<cit.> and NN Lyapunov function<cit.> in Fig.<ref>. The Lyapunov function obtained using the proposed approach has a larger ROA. It is important to note that the valid region for <cit.> and <cit.> is ‖ x ‖_2≤ 4. Moreover, the computational time for each example is presented in the TABLE <ref>. Having run each simulation ten times, the computational time is the average time elapsed. TABLE <ref> provides the number of cells created by each refinement technique. The Vector field refinement performs better in terms of computation time and number of cells than the Lyapunov-based and naive approaches specifically in 4-D examples. § DISCUSSION We have shown the effectiveness of our automated approach for stability verification through various examples. Our proposed refinement methods outperform existing techniques, and although our method does not specifically aim to maximize the region of attraction, our results are comparable to other methods. However, there are challenges to consider when applying this algorithm to a wider range of problems. *Limitations The computational complexity and performance of the proposed algorithm depend on the increase in the number of cells and optimization parameters during the refinement process. In a space ℝ^n, the number of cells should satisfy m ≥ 2^n. The simplest case, where the origin is surrounded by 2^n simplex cells, results in an optimization problem with 2^n × (n+1) parameters and 2^n+1× n inequality constraints. The number of constraints increases with the presence of non-simplex cells. 
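For concreteness, evaluating these expressions in the all-simplex case gives: for n=2, 2^2 = 4 cells, 2^2(2+1) = 12 parameters and 2^3 · 2 = 16 inequality constraints; for n=4, 2^4 = 16 cells, 2^4(4+1) = 80 parameters and 2^5 · 4 = 128 inequality constraints. Non-simplex cells and every refinement step only add to these counts.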
The computational complexity of the optimization process significantly depends on the dimensionality n, leading to longer computation times as cells are further divided. In some cases, the algorithm may encounter challenges and longer computation times due to increased complexity. To compare the results of different examples in terms of computational time and the number of cells, we introduce the following concepts: T_opt_i = ∑_j=1^i t_optj/∑_i=1^N t_opt_i Nr_i = n_r_i/∑_i=1^N n_r_i. T_ opt represents the normalized accumulative optimization time, N_r indicates the normalized number of regions, t_opt represents the time spent finding the solution with MOSEK, n_r represents the number of regions, and N represents the total number of iterations to solve the optimization problem. The subscripts i and j indicate the optimization iteration. The relationship between the normalized accumulative optimization time (T_opt) and the normalized number of cells (N_r) is investigated in three different examples in Fig.<ref>. The graphs demonstrate almost linear behavior for Example <ref> and Example <ref>, while Example <ref> exhibits an almost exponential trend. This indicates that increasing the number of cells could present a significant challenge for our proposed technique. Additionally, the refinement process may result in cells with nearly coplanar vertices, which can introduce numerical difficulties. It is essential to consider these complexities and challenges when applying our algorithm to various systems. § CONCLUSION This paper presents a computational framework for obtaining valid Lyapunov functions. The framework addresses the challenges of formulating the Lyapunov conditions as a linear optimization problem, which does not always guarantee a valid Lyapunov function. To overcome this limitation, two novel refinement methods are proposed, enhancing the flexibility of the candidate Lyapunov function. We used the Delaunay triangulation to automate the refinement process. We demonstrated that the proposed approach is effective based on experiments and comparisons with alternative approaches. The experiments successfully solve a 4-D example in a short time, highlighting the practicality and efficiency of the framework. The proposed framework offers a more effective method for generating valid Lyapunov functions, offering flexibility through refinement methods and automating the process. IEEEtran
http://arxiv.org/abs/2307.05423v1
20230711164102
Let's shake on it: Extracting secure shared keys from Wi-Fi CSI
[ "Tomer Avrahami", "Ofer Amrani", "Avishai Wool" ]
cs.CR
[ "cs.CR" ]
Let's shake on it: Extracting secure shared keys from Wi-Fi CSI Tomer Avrahami School of Electrical Engineering, Tel Aviv University, ISRAEL Email: [email protected] Ofer Amrani School of Electrical Engineering, Tel Aviv University, ISRAEL Email: [email protected] Avishai Wool School of Electrical Engineering, Tel Aviv University, ISRAEL EMail: [email protected] August 12, 2023 ================================================================================================================================================================================================================================================================================================================================ A shared secret key is necessary for encrypted communications. Since Wi-Fi relies on OFDM, we suggest a method to generate such a key by utilizing Wi-Fi's channel state information (CSI). CSI is typically reciprocal but very sensitive to location: While the legitimate Alice and Bob observe the same CSI, an eavesdropper Eve observes an uncorrelated CSI when positioned over 0.5 wavelength away. We show that if endpoint Bob is shaken, sufficient diversity is induced in the CSI so that it can serve as a source of true randomness. Then we show that the CSI among neighboring sub-carriers is correlated, so we select a small set of judiciously-spaced sub-carriers, and use a majority rule around each. We demonstrate that Alice and Bob observe a 5-15% bit mismatch rate (BMR) in the extracted bitstream while Eve observes a BMR of around 50% even when placed within 10cm of Alice. We employ the cryptography-oriented definition of min-entropy to estimate the number of secure bits within the bitstream, and use the Cascade algorithm of quantum-key-distribution to reconcile Alice and Bob's bitstreams, while quantifying the number of bits leaked by the algorithm. Accounting for both the min-entropy and the cascade leakage we quantify the Secured Bit Generation Rate of our method. We conducted extensive tests in an indoor environment. Our system exhibits a secure bit generation rate of 1.2–1.6 bits per packet, at distances ranging from 0.5m–9m, and can generate a secure shared 128-bit key with 20sec of device shaking. § INTRODUCTION §.§ Motivation Secured Wi-Fi networks, whose access-points employ security measures such as WPA2 or WPA3 <cit.> require a Preshared Secret encryption key (PSK) that is shared among the access point and all the mobile devices on the network. Typically the PSK is manually configured into the access point, and shared with all the mobile devices either out-of-band or via WPS <cit.>. Thus the compromise of the key on a single mobile device could lead to a compromise of the whole Wi-Fi network. In this paper we demonstrate the feasibility of a more secure architecture, in which every mobile device has a separate, randomly-generated key, shared only with the access point—without the logistical challenges of manually sharing multiple separate keys. Since Wi-Fi relies on Orthogonal Frequency Division Multiplexing (OFDM), we suggest to generate the PSK based on Wi-Fi's channel state information (CSI). The CSI is reciprocal but very sensitive to location, so while the access point Alice and the mobile device Bob observe the same CSI, an eavesdropper Eve will observe an uncorrelated CSI as long as she is outside the spatial correlation distance—approximately 7cm for Wi-Fi at 2.4GHz band and 3.5cm at 5GHz band. 
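As a quick reference, the half-wavelength rule D_c = 0.5λ = c/(2f) used in Section <ref> evaluates to D_c ≈ 6.25cm at 2.4GHz (λ = 12.5cm) and D_c ≈ 3cm at 5GHz (λ = 6cm); the slightly larger figures quoted above round these up.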
§.§ Method Overview The goals of our system design are as follows: * The CSI source should be treated as a True Random Bit Generator (TRBG) and measured as such, using cryptographically-accepted metrics such as the min-entropy to estimate the resulting key's cryptographic strength. * The extracted key needs to be agreed-upon by Alice and Bob. This precludes using common TRBG practices like using the least-significant measurement bits to enhance randomness: doing so will create an excessive bit-mismatch-rate (BMR) due to channel and measurement noise. * The extracted key must be private from the eavesdropping adversary Eve. To achieve these goals we use the following procedure: * Devices Alice and Bob exchange N packets while the user shakes device Bob, and both extract the CSI from the packets they receive. Shaking the device changes the channel between Alice and Bob and introduces randomness into the CSI, making it a suitable source for a TRBG. * We observed that adjacent sub-carriers have correlated CSI. Therefore Alice and Bob use a carefully-selected set of k well-separated sub-carriers, and quantize their CSI into q bits per sub-carrier, to obtain a raw bitstream (Alice and Bob perform this extraction separately). * Alice and Bob run the Cascade algorithm <cit.> to reconcile the raw bitstream: technically Bob reconciles his bitstream to match Alice's. * Alice and Bob compute a cryptographic hash, e.g., SHA-256 <cit.>, on the reconcilled bitstream to compress all the secure bits into a key that can serve as the PSK between Alice and Bob. When analyzing the cryptographic strength of the resulting key we make the following observations. First, the Cascade algorithm in step (3) leaks information, but it precisely quantifies the number of bits it leaks. Therefore the number of Cascade-leaked bits is subtracted from the total number of bits collected in step (2) as a basic bound B_0 on the number of secure information bits. Moreover, as the k· q-wide bit patterns extracted from packets in step (2) are temporally correlated, we calculate their min-entropy and use it to conservatively reduce B_0, to get a bound B on the number of remaining secure information bits embedded in the bit stream. As long as B is larger than the hash length then the PSK has the cryptographic strength that its length implies. The value B, normalized by the number of packets N, is the Secure Bit Generation Rate (SBGR), measured in bits per packet. As a concrete example, in many experiments we used N=300 packets sent in each direction. Via extensive testing we found that using k=4 sub-carriers and q=2 bits per sub-carrier (i.e., extracting 8 bits per packet) yields a good working point, so the size of the raw bitstream is 2400 bits. In this setting, Cascade leaked ≈ 1000 bits, and we measured a min-entropy of ≈ 2.7 out of 8, leaving B≈ 472 secure bits in the reconciled bitstream, well above the PSK length of 256, and providing SBGR=1.6 secure bits per packet. Note that it is not informative to run empirical tests of randomness such as the NIST test suite <cit.> on the resulting PSK: such tests would only measure the effectiveness of the cryptographic hash function—the output of a good hash function like SHA-256 will always pass all such tests, regardless of its input. §.§ Adversary Model We assume that the adversary Eve is passive, and does not attempt to disrupt the key agreement protocol. Eve is also assumed to be equipped with radio receivers that are as sensitive as, or better than, those of Alice and Bob. 
Thus Eve can eavesdrop on the CSI-bearing packets and try to estimate the CSI characterizing the channel between Alice and Bob. However we assume that Eve is outside the spatial correlation distance D_c from both Alice and Bob: D_c = 0.5λ where λ is the wavelength <cit.>. For Wi-Fi at 2.4GHz (λ=12.5cm) we have D_c=6.25cm and for Wi-Fi at 5GHz (λ=6cm) D_c=3cm. Consequently, as long as Eve is at least 7cm away from both Alice and Bob, the corresponding channels she experiences, in typical settings, are decorrelated from the channel between Alice and Bob. As Bob is continuously moving, we assume it would be impossible for Eve to imitate this movement. So, the best location Eve can be in is as close to Alice as possible. In our experiments, we placed Eve 10cm from Alice, which is the Raspberry Pi case's width. It is assumed that Eve is physically positioned within the Wi-Fi network coverage of both Alice and Bob, i.e., she is in the same room as Alice and Bob. Eve can therefore eavesdrop on all the key reconciliation messages exchanged by the Cascade algorithm <cit.> without any errors. Since these messages are sent before a cryptographic key is agreed-upon, they are sent in cleartext, so any embedded information is considered to be leaked to Eve. §.§ Contributions An overarching contribution of our work is that is it based on extensive field testing in a live indoor Wi-Fi network. We modified the firmware on 3 Raspberry Pi devices to extract the CSI data from the hardware to make it available in the driver software. We then conducted multiple experiments in a variety of configurations and recorded the data in a corpus of size ≈ 60MB. We plan to make both the data and code available to the community at <https://github.com/tomer-avrahami/shake-on-it> . We first show that if Alice and Bob are stationary, then the CSI data is of low diversity. However, if endpoint Bob is shaken, there is enough diversity in the CSI values to serve as a source of true randomness for key generation. Prior research indicated that the CSI is highly correlated among neighboring sub-carriers, which we verified experimentally. To address this challenge we chose to employ the CSI only from a small set of non-adjacent sub-carriers. We conducted an exhaustive search to identify the best number of sub-carriers, k, and which k sub-carriers to use, in order to minimize the correlation among them. Next we investigated how to set the parameter q, the number of quantization bits to use for a given CSI value, and its effect on the bit mismatch rate (BMR). To improve the BMR, we use the fact that adjacent sub-carriers are correlated to our advantage: we use a majority rule over the values extracted from a neighborhood of 2m+1 sub-carriers around the selected sub-carrier, and investigated the effect of the parameter m on the BMR. With this framework we demonstrate that Alice and Bob observe a BMR of 5-15% while Eve observes a BMR of around 50%, even if she is placed within 10cm of Alice. Based on the above-mentioned observations, our next major contribution is the design of our key extraction and agreement method: Shaking device Bob while transmitting and recording N CSI-carrying packets, extracting the bitstreams from them, using the QKD Cascade algorithm <cit.> to reconcile the bitstreams, and computing a PSK as a cryptographic hash of the reconciled bitstream. We evaluate the security of our scheme as follows. While the Cascade algorithm leaks information, it precisely quantifies the number of bits it leaks. 
Thus the number of Cascade-leaked bits is subtracted from the number of information bits collected, giving a basic bound B_0 on the number of secure information bits. Further, we observe that the extracted bits are not uniformly distributed. Therefore we use the cryptography-oriented definition of min-entropy and use it to conservatively reduce B_0, to get a bound B on the number of remaining secure information bits embedded in the bitstream. The value B, normalized by the number of packets N, is the Secured Bit Generation Rate, measured in bits per packet. Finally we evaluate the performance of the system at various distances, and the time it takes to generate a secure shared key. Putting it all together, our system exhibits an SBGR of ≈1.2 to 1.6 bits per packet, at distances of 1m–9m in an indoor environment, and can hence generate a secure shared 128-bit key within 20sec of device shaking. § BACKGROUND §.§ Orthogonal Frequency Division Multiplexing (OFDM) in Wi-Fi At the OFDM transmitter, the incoming data stream is split into multiple narrow and orthogonally overlapped sub-carriers. The data on each sub-carrier is then modulated, and the resulting symbols are converted to the time domain using an inverse Fast Fourier Transform (IFFT). The time-domain signal is then up-converted to radio frequency and transmitted through the channel. At the receiver, after frequency down-conversion, the signal is converted back to the frequency domain via FFT. Orthogonality, viewed in the frequency domain, is achieved in OFDM by choosing the symbol length and the frequency separation between the sub-carriers such that the peak of each sub-carrier falls on the nulls of the others. The number of sub-carriers depends on the selected PHY protocol. For example, IEEE 802.11n in the 2.4GHz band (High-Throughput, 20MHz) uses 64 sub-carriers (pilots and guard-bands included) with a spacing of 312.5kHz, leaving 56 usable sub-carriers. IEEE 802.11n in the 5GHz band (High-Throughput, 40MHz) uses a total of 128 sub-carriers, of which 114 are usable. See Table <ref> for more details. In our experiments we used a 20MHz channel bandwidth in the 2.4GHz band or a 40MHz channel bandwidth in the 5GHz band; see Table <ref> for a description of the experimental setups. §.§ Channel State Information (CSI) A characteristic of Wi-Fi is the usage of Orthogonal Frequency Division Multiplexing (OFDM) as a bandwidth-efficient technology for supporting high data rates. For its proper operation, OFDM technology requires the calculation of Channel State Information (CSI) for each sub-carrier on each antenna. The CSI describes what the transmitted signal has undergone while passing through the channel and reveals the combined effect due to scattering, fading, and power decay. An OFDM system viewed in the frequency domain can be modeled by y=Hx+n, where y and x are the received and transmitted vectors respectively, H is the channel matrix and n is an additive white Gaussian noise (AWGN) vector. To successfully detect the message x from the received signal y, distorted by fading and noise, OFDM receivers first need to estimate the channel. This is achieved by transmitting predetermined symbols, a.k.a. a preamble, or pilots. Thus, the CSI, given for all sub-carriers in the form of the matrix Ĥ, can be estimated in view of Equation (<ref>).
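To illustrate the model y=Hx+n, the following NumPy sketch performs the simplest per-sub-carrier least-squares channel estimate, Ĥ_k = y_k/x_k, from known pilot symbols. This is only a conceptual illustration of how a CSI vector arises, not the estimator running inside the Wi-Fi chip; the variable names are illustrative.

```python
import numpy as np

def estimate_csi(rx_pilots: np.ndarray, tx_pilots: np.ndarray) -> np.ndarray:
    """Per-sub-carrier least-squares channel estimate: H_hat[k] = y[k] / x[k]."""
    return rx_pilots / tx_pilots

# Toy example with 8 sub-carriers and a small amount of AWGN.
rng = np.random.default_rng(seed=1)
true_h = rng.normal(size=8) + 1j * rng.normal(size=8)   # unknown channel H
tx = np.ones(8, dtype=complex)                          # known pilot symbols x
noise = 0.01 * (rng.normal(size=8) + 1j * rng.normal(size=8))
rx = true_h * tx + noise                                # y = Hx + n, per sub-carrier
csi = estimate_csi(rx, tx)
print(np.max(np.abs(csi - true_h)))                     # small estimation error
```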
The Wi-Fi device drivers for the Raspberry Pi we experimented with extract the CSI data computed by the hardware and provide it to the upper layers of software—enabling its use for shared-key generation. §.§ Channel coherence time and reciprocity The channel coherence time T_c is commonly defined as the time in which the channel can be considered constant <cit.>—which for our purposes implies that the channel is reciprocal. We conservatively assume that this is the time it takes a moving antenna to traverse 0.25λ. We can over-estimate the momentary velocity of a hand shaking a mobile device at 3m/s, so for Wi-Fi at 2.4GHz (λ=12.5cm) we get T_c≈ 10.4ms, and for 5GHz (λ=6cm) we get T_c≈ 5ms. So if the round-trip time over Bob and Alice's CSI-carrying messages is below 5ms then both parties observe a reciprocal channel and measure highly correlated CSI. §.§ Min-entropy A crucial requirement of using CSI for cryptographic key generation is that CSI data, viewed as a bitstream, appears as resulting from a true random bit generator (TRBG). Therefore there is a need to estimate the amount of entropy in the extracted bit stream. Following <cit.> we argue that in an adversarial setting, the correct measure of entropy is not the classical Shannon entropy, but rather the min-entropy. Min-entropy is a measure of the worst-case randomness in a random source, and using it is more conservative and more secure. It is calculated by using only the item with the highest probability of any possible outcome. The min-entropy of an independent discrete random variable X that takes values from the set A={x_1,x_2,…,x_k} with probability Pr(X=x_i) = p_i for i=1,…,k is defined as: H= min_i(-log_2p_i)=-log_2(max_i(p_i)) If X has min-entropy H, then the probability of observing any particular value for X is no greater than 2^-H. The maximum possible value for the min-entropy of a random variable with k distinct values is log_2k, which is attained when the random variable has a uniform probability distribution, i.e., p_1 = p_2 = … = p_k =1/k. §.§ Cascade In the context of quantum key distribution (QKD) there is an analogous situation to our setup. Alice and Bob have two channels: a private quantum channel that introduces errors, and a separate public channel that is error free. Alice and Bob each record a raw bitstream from the private channel, and then reconcile the keys by exchanging extra information on the public channel. The information transmitted on public channel is assumed available to the eavesdropper (Eve). So, the amount of information revealed must be considered in the security analysis of the protocol. The first key reconciliation algorithm for QKD (BBBSS) was presented in <cit.>, and subsequently improved by <cit.> and called the Cascade algorithm. This protocol consists of a sequence of rounds. Between every two rounds the bitsreams are permuted randomly. In each round Alice and Bob divide their bitstreams into blocks and exchange the parity of each block. When they find a block with different parity, they perform binary search on this block to eliminate one error within it, and then re-do the error correction of previous rounds using the newly corrected bit. Thus a single corrected bit in round i can lead to the correction of several other bits (in other blocks). Note that during a single round the parties process multiple blocks in parallel and the parity values of all the blocks may be packed into a single message on the public channel. 
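The following Python sketch shows the core of a single Cascade pass in this spirit: block parities are compared over the public channel, and a binary search locates one error inside each mismatching block. It deliberately omits the random permutations between rounds, the cascading back-corrections, and the leakage bookkeeping; the function names are ours and do not correspond to any particular implementation.

```python
def parity(bits):
    return sum(bits) % 2

def correct_one_error(alice_block, bob_block):
    """Binary search for a single error; each step reveals one parity bit of Alice's."""
    lo, hi = 0, len(bob_block)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice_block[lo:mid]) != parity(bob_block[lo:mid]):
            hi = mid                      # error is in the left half
        else:
            lo = mid                      # error is in the right half
    bob_block[lo] ^= 1                    # flip the located bit

def cascade_pass(alice_bits, bob_bits, block_size):
    """One pass: exchange block parities and fix one error per mismatching block."""
    for start in range(0, len(bob_bits), block_size):
        a = alice_bits[start:start + block_size]
        b = bob_bits[start:start + block_size]
        if parity(a) != parity(b):        # one parity bit leaked per block
            correct_one_error(a, b)
            bob_bits[start:start + block_size] = b
```

In the full protocol, a bit corrected in a later round also triggers re-checking the (differently permuted) blocks of earlier rounds that contain it, which is what lets a single parity mismatch cascade into several corrections.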
Cascade code in this work is based on a Cascade-Python implementation forked from <cit.>. § SYSTEM SETUP AND BASIC MEASUREMENTS §.§ System setup Alice and Bob exchange packets wirelessly, while Eve passively eavesdrops on the communication. Alice plays the part of the access point so it is fixed in place, while Bob can be shaken. We found that the most accurate data Eve can hope to eavesdrop is obtained when she statically positions herself a few centimeters away from Alice. §.§.§ Hardware Our setup consists of four components: * Alice, Bob, and Eve are Raspberry Pi 4 devices. * A controller, which is a Python3 program running on a Dell laptop running Windows 10. All the devices are connected to the same wireless LAN, which serves as the public channel among the devices and is also used to control the experiments. We selected the Raspberry Pi since its Broadcom-made Wi-Fi chip supports a patched driver developed by Nexmon <cit.>, which allows exporting the CSI to the software layers. §.§.§ Firmware Based on the Nexmon project <cit.>, we loaded our own firmware version to the Broadcom Wi-Fi chip. This version enables: * Monitor mode: allows sniffing all the Wi-Fi packets. * Injection from host: allows injecting custom Wi-Fi packets triggered by the host operating system. * Injection from driver: allows injecting custom Wi-Fi packets triggered by logic within the Wi-Fi chip firmware. * CSI access: allows reading CSI values from the hardware registers and sending them to the host software. * Wi-Fi channel and bandwidth control. All three devices use the same network driver firmware code, to which we added a “quick-reply” feature. If this feature is enabled, the device replies with a response packet whenever a new legitimate packet arrives. This feature was enabled only in Bob's firmware. As Bob's response is sent directly from the network driver firmware, the time duration between Alice's packet and Bob's packet is kept well below 5ms, which is essential for maintaining channel reciprocity (recall Section <ref>). As can be seen in Figure <ref>, with this feature we measured the time from the end of Alice's packet to Bob's response to be ≈150μs, and each packet's transmission duration is 164μs, yielding an end-to-end round-trip time of ≈480μs. In our experiments we used 139-byte QoS packets whose payload was constant. Note that with CSI access, the firmware driver on the receiver side overwrites the buffer containing the packet payload with the CSI information before delivering it to the calling software. Thus the payload in the packets used to exchange CSI is unavailable to the receiver. §.§ The effect of shaking One of the goals of the shared-key generation is that the key should be based on a true random bit generator (TRBG). Several tests we conducted in static environments have validated, as found in previous works <cit.>, that the RF channel parameters remain near-constant for a long time: practically, a fixed CSI value is measured, perturbed only by uncontrollable environmental noise and interference—see Figure <ref> (top). However, when Bob's position with respect to Alice changes rapidly enough by manually shaking it, much more diversity is included in the CSI data—see Figure <ref> (bottom). §.§ Can Eve eavesdrop? An important goal of the system is to keep the key private from Eve. In several experiments we demonstrated that as long as Eve is outside the spatial correlation distance D_c, the CSI data she can observe is decorrelated from the observations of Alice and Bob.
Figure <ref> shows the CSI amplitude of a single sub-carrier over time as measured by Alice, Bob, and twice by Eve: once from Alice and once from Bob. It is evident from the figure that Alice and Bob measure similar CSI amplitudes, while Eve's measurements are near constant toward Alice and very divergent from the Alice-Bob channel towards Bob. Note also that Bob's movement induces a large variance in the value. Once we apply the quantization on the selected sub-carriers (to be described below) to extract a bitstream, we can calculate the bit mismatch rate (BMR) observed by Bob and compare it to that observed by Eve. Our measurements show that while Bob observes a BMR of 5%-15%, Eve observes a BMR of ≈50% as can be seen in Figure <ref>. §.§ Sub-carriers selection As already known from previous works, the CSI values of neighboring sub-carriers are correlated. To evaluate this phenomenon in our environment, Figure <ref> shows that indeed the correlation between adjacent sub-carriers is strong, and correlation strength is oscillating as function of sub-carriers index. This empirically-generated figure is well aligned with the analytical graphs presented in <cit.>. Therefore, unlike implementations such as <cit.>, we do not extract bits from all available sub-carriers—doing so would produce an unfounded sense of security since the extra bits do not add true randomness to the system. Instead, we conducted an exhaustive search for relatively small subsets of non-neighboring sub-carriers which provide high randomness, using min-entropy (recall Section <ref>) as our randomness measurement score. As can be seen in Figure <ref>, as we increase k, the number of sub-carriers, we do get more entropy—however the ratio between the added entropy and added bits decreases sharply. As a concrete example, when k=4 we found that selecting sub-carriers -25, -7, 11, 27 yields 2.8 bits of entropy. However, we also use the fact that neighboring sub-carriers are correlated to reduce errors: As we shall see in Section <ref> we use a majority-rule decision among 2m+1 neighboring sub-carriers. § THE METHOD §.§ Protocol parameters When Alice and Bob initialize, the following parameters should be defined: * N - number of packets. * Wi-Fi channel - Wi-Fi channel to be used. For example, channel 6. * Wi-Fi channel bandwidth - Wi-Fi channel bandwidth. For example, 20MHz. * Sub-carriers list - list of sub carriers which will be used in this session. * k - number of sub-carriers in use. * q - number of quantization bits * m - majority margin, number of sub-carriers to use for majority rule from each side of the main sub-carrier §.§ SBGR—secured bit generation rate We argue that the raw bit generation rate, based simply on the number of extracted bits per packet, is not an effective performance indicator since it doesn't take security into account. Instead we define a new key performance indicator we denote by SBGR: the Secure Bit Generation Rate. The SBGR starts with the number of raw extracted bits, but takes two aspects into consideration: * During key reconciliation using Cascade (recall Section <ref>) information bits are leaked, and Cascade returns their exact number. Thus we deduct the number of leaked bits from the number of raw bits and normalize by the number of packets N. * Despite the selection of only k sub-carriers, some correlation remains in the extracted bits. To take this effect into account we multiply by the ratio between minimum entropy and maximum possible entropy value. 
Putting it all together, SBGR is calculated by SBGR = (rawBits - leakedBits)/N × minEntropy/(k· q) where N, k, q are as defined in Section <ref>, rawBits is the length of the raw bitstream, leakedBits is the number of bits leaked during the key reconciliation process, and minEntropy is the min-entropy value (out of the maximum, which is k· q). §.§ CSI data collection §.§.§ Packets sync Alice and Bob need to make sure they are using the same packets to build the shared key. Alice knows whether a specific packet exchange succeeded according to Bob's response packet. Bob, on the other hand, needs Alice's approval to make sure the packet exchange succeeded. We achieve this by a simple req-ack mechanism with retrials: * Alice sends packet n. The packet's data includes the value n. * Bob gets the packet, saves it, and responds with an ack packet which also includes the value n. * If Alice got Bob's ack packet she continues to packet n+1, otherwise she resends packet n. * In case of a retrial, Bob overwrites packet n's CSI data and resends an ack for packet n. In our experiments, a separate device was used as a synchronization controller to make sure Eve wirelessly receives all the possible data as well. Thus a packet exchange is successful only if Bob received Alice's packet, Alice received Bob's packet, and Eve received both Alice's and Bob's packets. §.§.§ CSI Calibration The CSI data measured by the devices cannot be directly used since the Automatic Gain Control (AGC) scheme at the receiver modifies the amplitude of the original CSI: the CSI value reported by the device is multiplied by an unknown factor whose value changes as the user moves. However, since the RSS is reported before AGC occurs, we can reconstruct the CSI by re-scaling it using the received RSS value. Following Gao et al. <cit.>, we multiply the extracted CSI of all sub-carriers by: √(RSS / ∑_n=1^numSC |CSI_n|^2 ) §.§ Secured key extraction After collecting N packets, Alice and Bob can initiate the proposed key extraction protocol. The protocol has four steps: (i) Quantization, (ii) Majority rule, (iii) Key reconciliation (Cascade), and (iv) Shared secret key extraction. §.§.§ Quantization The quantization process is carried out for each sub-carrier separately. We use equally-sized percentiles to define quantization bins. Assuming q quantization bits, we set 2^q-1 quantization levels associated with 2^q bins. The quantization levels QL_i are calculated as follows: QL_i = Percentile(i· 100/2^q) where i denotes the quantization level index, i = [1..2^q-1], and Percentile(x) returns the value below which x percent of the samples fall. We encode each CSI measurement that is in quantization bin i by the q-bit symbol representing i using a Gray code. One advantage of using percentiles for quantization is that each symbol has the same frequency, supporting a uniform distribution of bit values. Note that each device sets its own quantization levels using its own data, eliminating the need to calibrate individual device-dependent thresholds. Bit mismatches are typically caused by Alice and Bob's measurements falling into different, yet adjacent, bins: the Gray encoding ensures that such mismatches only flip one out of the q bits, which helps reduce the bit mismatch rate. As can be seen in Figure <ref>, q=2 quantization bits is the optimal selection: a larger q increases the BMR, which causes the SBGR to drop (due to additional leakage in the reconciliation), while choosing q=1 decreases the amount of extracted data.
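The quantization and the SBGR bookkeeping above translate almost directly into code. The NumPy sketch below is a simplified illustration (not our measurement pipeline): it builds the percentile bins for one sub-carrier, Gray-codes the bin indices into q-bit symbols, and evaluates the SBGR formula; the numbers in the usage example are the ones quoted in the method overview (N=300, k=4, q=2, ≈1000 bits leaked by Cascade, min-entropy ≈2.7 out of 8).

```python
import numpy as np

def gray(i: int) -> int:
    """Standard binary-reflected Gray code of a bin index."""
    return i ^ (i >> 1)

def quantize_subcarrier(amplitudes: np.ndarray, q: int) -> list:
    """Percentile-based q-bit quantization of one sub-carrier's CSI amplitudes."""
    levels = np.percentile(amplitudes, [100.0 * i / 2**q for i in range(1, 2**q)])
    bins = np.searchsorted(levels, amplitudes)          # bin index in [0, 2**q - 1]
    return [format(gray(int(b)), f"0{q}b") for b in bins]

def sbgr(raw_bits: int, leaked_bits: int, n_packets: int,
         min_entropy: float, k: int, q: int) -> float:
    """Secure Bit Generation Rate as defined above."""
    return (raw_bits - leaked_bits) / n_packets * (min_entropy / (k * q))

# Working point reported above: 300 packets * 4 sub-carriers * 2 bits = 2400 raw bits.
print(round(sbgr(raw_bits=2400, leaked_bits=1000, n_packets=300,
                 min_entropy=2.7, k=4, q=2), 2))        # ~1.57 secure bits per packet
```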
§.§.§ Majority rule As we discussed in Section <ref>, adjacent sub-carriers are strongly correlated so we selected only k non-neighboring sub-carriers, which we call the main sub-carriers. However, we employ this correlation to reduce errors by making a majority rule decision among (2m+1) sub-carriers centered around each main sub-carrier. Let l be the index of a main sub-carrier, then the extracted symbol y is chosen by: y=argmax_a(sum_j(sc_j==a)) Where j gets 2m+1 values in range [l-m,…,l+m], and sc_j is the symbol extracted from the j^th sub-carrier. Ties are broken arbitrarily. Note that with k main sub-carriers, if the minimal distance between them is d sub-carriers then we should use at most m<d/2 in the majority rule to avoid using the same sub-carrier in the majority rule calculation of two main sub-carriers. E.g., with k=4 and the main sub-carriers from the example in Section <ref> we have d=16 so we need m<8. As evident from Figure <ref>, using the majority rule indeed reduces the bit mismatch rate and thus improves the SBGR, with the best results achieved with 9 sub-carriers (m=4). §.§.§ Key reconciliation The raw bitstream is a concatenation of all the quantized bits collected as described above. Both Alice and Bob generate their own bitstreams. Due to the channel reciprocity their bitstreams are very similar, but not identical. Bit mismatches are due to quantization errors, interference and noise, and also motion: e.g., Bob sent his response from outside of the coherence distance. In order to reconcile all the mismatches, Alice and Bob agree that Alice is the one holding the “correct bitstream”, and Bob corrects his bitstream using the Cascade algorithm <cit.>. In order to reduce the number of Cascade rounds we modified the Cascade implementation of <cit.> as follows: * Early termination: Upon initiating the process, Alice sends Bob a hash of her (“correct”) bitstream. After each Cascade round, Bob compares the hash of his reconciled bitstream equals with the one Alice sent. If the two are identical Bob sends a success indication to Alice and the process stops. This feature saves both run-time and amount of leaked bits. * Increasing the number of rounds: The original Cascade algorithm <cit.> uses 4 rounds. Following Yan et al. <cit.> we increased the number of rounds to 10, but the additional rounds use an initial block size of half of the bitstream length. In most cases this change is immaterial since 4 rounds suffice to reach full reconciliation, but in rare cases this helps ensure that Cascade converges. * After the final round, if Bob observes that the bitstream hashes are not identical then he sends a “failure” indication to Alice. §.§.§ Shared secret Key extraction At the end of a successful key reconciliation process, each device holds a bitstream whose length is L=(N· k· q). As discussed in Section <ref>, the number of secure bits embedded in the bitstream after the Cascade leakage, and accounting for the min-entropy, is only L^* = N· SBGR, which is significantly less than L. So neither the bitstream itself, nor pieces of it, should be used directly as cryptographic keys. Instead, we invoke a strong hash function such as the SHA-256 on the whole bitstream to compress all the available randomness into a 256-bit shared key: as long as L^* > 256 we can claim that the shared key indeed has 256 cryptographically secure bits. § RESULTS AND DISCUSSION All the experiments were carried out in a typical indoor environment—a residential apartment in a crowded city. 
The Experiments were carried out in two different Wi-Fi setups, as detailed in Table <ref>. §.§ Operational range We tested our setup in a variety of realistic indoor ranges and Figure <ref> shows the achieved SBGR. As can be seen, the optimal working range is 2–4m, but the system still performs well even at a distance of 9 meters. §.§ Time to key Fast shared key generation is a critical performance parameter of a system when there is human involvement needing to shake the device. Figure <ref> shows the average time to generate 128- and 256-bit keys at 3 meters, for the two different setups as described in Table <ref>. The time to key consists of several contributors: * Time between adjacent cycles: ≈ 280ms in our setup. * The number of Cascade requests that Bob sends, which depends on the number of bits to reconcile. * The round trip-time for a Cascade packet: ≈200ms. Based on some initial experiments we believe that in “production code”, the round-trip time can be reduced by at least 60%. Figure <ref> shows that the times needed to generate 128 and 256 bits, in the 2.4GHz band (20MHz bandwidth) and 5GHz (40MHz bandwidth) are 20-50s, even in our non-optimized implementation. § RELATED WORK The topic of physical layer secret key generation has been investigated by the research community, using various techniques and focusing on diverse technologies. The surveys of <cit.> provide a taxonomy of approaches. Several works analyzed and simulated the use of CSI for generating a secure key. The closest ones to our approach are CGC <cit.> and KEEP <cit.>, so we also include Table <ref> in which we compare our results with theirs. The CGC work of Liu et al. <cit.> focused on reducing CSI mismatch errors by removing the non-reciprocity component learned from a small number of probe packets. They reported a very high bit generation rate by using CSI data from all sub-carriers to extract 60 bits per packet. However, as <cit.> analyzed, and our work demonstrated in practice (recall Section <ref>), subcarrier data is highly correlated, especially in a static environment, so such high-rate bits cannot be assumed to be secure: hence the indication of “?” in Table <ref>. Nevertheless CGC made some valuable observations on reducing the mismatch rate: it would be interesting to add such a mechanism to our work. This is deferred to future work. The KEEP work of Xi et al. <cit.> addresses the same scenario as our work, of device-to-device shared secret key generation. They addressed the correlation between adjacent sub-carriers by using all the sub-carriers to generate a single bit of data, which eventually resulted in a slow bit generation rate of 0.5 bits per packet. More importantly, though, the KEEP solution severely over-estimates the strength of the resulting keys. KEEP creates a long key by concatenating multiple short CSI-derived blocks, of 16 or 32 bits each. Since the protocol exposes the hash of each of the blocks, the long key's cryptographic strength is in fact determined by the size of the block, since the blocks can be cracked in sequence. So, for example, a key of 256 bits that is generated from 8 32-bits blocks only has a strength of 32 bits. The TDS system of Wei et al. <cit.> also relies on CSI and is based on the coherence distance principle, which states that two devices located less than  0.25λ from each other (3.1cm at 2.4GHz band), will experience the same channel. 
However, unlike in our scenario, their solution relies on a third device generating packets, and is only feasible when both Alice and Bob are practically touching each other so their antennas are within the coherence distance. The work of Zhang et al. <cit.> simulated and analyzed a key generation process using one sub-carrier to achieve a shared key between Alice and Bob in a slow-fading channel. They showed that in order to achieve temporal de-correlation between the CSI of successive packets one must have ≈200ms delay between them. This result does not contradict our belief that our system can operate with ≈100ms cycle time: since Bob is shaken the channel is rapidly changing. Recently more portable CSI extraction tools have become available, such as the ESP CSI tool <cit.>, or the Nexmon CSI tool <cit.>—which is the one we also used. Yuan et al. <cit.> focused on CSI robustness of the esp32 CSI tool and used it to analyze reciprocity on different scenarios, while Cao et al. <cit.> focused on its full system firmware development. Li et al. <cit.> shows channel reciprocity in different static environments using the Nexmon firmware <cit.>, however they did not analyze a full key-generation system performance and didn't relate to the key strength or the performance. Their results strengthen our work and match our observations on channel reciprocity. The idea of shaking WiFi devices for key agreement already appears in the “Shake them up!” work of Castelluccia et al. <cit.>. However their scenario is different than ours: they assume the legitimate user can hold both devices simultaneously while the adversary is several meters away. The devices transmit messages with spoofed source MAC addresses, and the claim of security is based on the (challenging) premise that the adversary cannot identify which of the two devices is transmitting. Several other works used human hand shaking to generate randomness to securely pair two mobile devices. For example,  <cit.> and  <cit.>, used it while extracting the random data from accelerator sensors. Besides CSI, some authors used the basic RSSI (receiver signal state indicator) to extract achieve key agreement <cit.>,<cit.>. CSI data was also used for fingerprinting-based self-localization employing the Wi-Fi infrastructure <cit.>. The basic idea behind these methods is to hold an on-site training phase in which a CSI “fingerprint” is collected for each geographic location. The localization phase includes measurement of the CSI and finding the best match to the fingerprints database. In <cit.> and <cit.> a method for CSI-based indoor range estimation is suggested. CSI was also used for direction-finding of rogue Wi-Fi hotspots <cit.>. § CONCLUSIONS In this work we introduced and implemented a practical end-to-end scheme for generating a symetric key between two WiFi devices based on (existing) CSI measurements. We demonstrated that the randomness in CSI values obtained in a static scenario is insufficient for secure key generation. In order to significantly alleviate this limitation, we proposed shaking the endpoint Bob. Thus, sufficient variation in the CSI values is achieved and a source of true random (bit) information is obtained. For analyzing the cryptographic strength of the resulting key we defined the Secure Bit Generation Rate, SBGR. This measure takes into account the total number of bits collected, number of packets, amount of leaked bits due to Cascade reconciliation, and the cryptographically-accepted metric of min-entropy. 
We conducted over 60 end-to-end experiments under different practical scenarios and verified that the proposed scheme performs well for a variety of distances (between endpoints), under different physical layer (PHY) settings. All-in-all we demonstrate that a strong symmetric key can be obtained in a reasonable time. IEEEtran
http://arxiv.org/abs/2307.04725v1
20230710173416
AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning
[ "Yuwei Guo", "Ceyuan Yang", "Anyi Rao", "Yaohui Wang", "Yu Qiao", "Dahua Lin", "Bo Dai" ]
cs.CV
[ "cs.CV", "cs.GR", "cs.LG" ]
[ AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning Yuwei Guo^1,2 Ceyuan Yang^1* Anyi Rao^3 Yaohui Wang^1 Yu Qiao^1 Dahua Lin^1,2 Bo Dai^1 ^1Shanghai AI Laboratory ^2The Chinese University of Hong Kong ^3Stanford University <https://animatediff.github.io/> =================================================================================================================================================================================================================================================================================== type=figure < g r a p h i c s > figure We present , an effective framework for extending personalized text-to-image (T2I) models into an animation generator without model-specific tuning. Once learned motion priors from large video datasets, can be inserted into personalized T2I models either trained by the user or downloaded directly from platforms like CivitAI <cit.> or Huggingface <cit.> and generate animation clips with proper motions. ] ^*Corresponding Author. With the advance of text-to-image models (e.g., Stable Diffusion <cit.>) and corresponding personalization techniques such as DreamBooth <cit.> and LoRA <cit.>, everyone can manifest their imagination into high-quality images at an affordable cost. Subsequently, there is a great demand for image animation techniques to further combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most of the existing personalized text-to-image models once and for all, saving efforts in model-specific tuning. At the core of the proposed framework is to insert a newly initialized motion modeling module into the frozen text-to-image model and train it on video clips to distill reasonable motion priors. Once trained, by simply injecting this motion modeling module, all personalized versions derived from the same base T2I readily become text-driven models that produce diverse and personalized animated images. We conduct our evaluation on several public representative personalized text-to-image models across anime pictures and realistic photographs, and demonstrate that our proposed framework helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at our https://animatediff.github.io/project page. § INTRODUCTION In recent years, text-to-image (T2I) generative models <cit.> have received unprecedented attention both within and beyond the research community, as they provide high visual quality and the text-driven controllability, i.e., a low-barrier entry point for those non-researcher users such as artists and amateurs to conduct AI-assisted content creation. To further stimulate the creativity of existing T2I generative models, several light-weighted personalization methods, such as DreamBooth <cit.> and LoRA <cit.>, are proposed to enable customized fine-tuning of these models on small datasets with a consumer-grade device such as a laptop with an RTX3080, after which these models can then produce customized content with significantly boosted quality. In this way, users can introduce new concepts or styles to a pre-trained T2I model at a very low cost, resulting in the numerous personalized models contributed by artists and amateurs on model-sharing platforms such as CivitAI <cit.> and Huggingface <cit.>. 
While personalized text-to-image models trained with DreamBooth or LoRA have successfully drawn attention through their extraordinary visual quality, their outputs are static images. Namely, there is a lack of temporal degree of freedom. Considering the broad applications of animation, we want to know whether we can turn most of the existing personalized T2I models into models that produce animated images while preserving the original visual quality. Recent general text-to-video generation approaches <cit.> propose incorporating temporal modeling into the original T2I models and tuning the models on the video datasets. However, it becomes challenging for personalized T2I models since the users usually cannot afford the sensitive hyper-parameter tuning, personalized video collection, and intensive computational resources. In this work, we present a general method, , to enable the ability to generate animated images for any personalized T2I model, requiring no model-specific tuning efforts and achieving appealing content consistency over time. Given that most personalized T2I models are derived from the same base one (e.g. Stable Diffusion <cit.>) and collecting the corresponding videos for every personalized domain is outright infeasible, we turn to design a motion modeling module that could animate most of personalized T2I models once for all. Concretely, a motion modeling module is introduced into a base T2I model and then fine-tuned on large-scale video clips <cit.>, learning the reasonable motion priors. It is worth noting that the parameters of the base model remain untouched. After the fine-tuning, we demonstrate that the derived personalized T2I could also benefit from the well-learned motion priors, producing smooth and appealing animations. That is, the motion modeling module manages to animate all corresponding personalized T2I models without further efforts in additional data collecting or customized training. We evaluate our on several representative DreamBooth <cit.> and LoRA <cit.> models covering anime pictures and realistic photographs. Without specific tuning, most personalized T2I models could be directly animated by inserting the well-trained motion modeling module. In practice, we also figured out that vanilla attention along the temporal dimension is adequate for the motion modeling module to learn the proper motion priors. We also demonstrate that the motion priors can be generalized to domains such as 3D cartoons and 2D anime. To this end, our could lead to a simple yet effective baseline for personalized animation, where users could quickly obtain the personalized animations, merely bearing the cost of personalizing the image models. § RELATED WORKS Text-to-image diffusion models. In recent years, text-to-image (T2I) diffusion models have gained much popularity both in and beyond the research community, benefited by the large-scale text-image paired data <cit.> and the power of diffusion models <cit.>. Among them, GLIDE <cit.> introduced text conditions to the diffusion model and demonstrated that classifier guidance produces more visually pleasing results. DALLE-2 <cit.> improves text-image alignment via CLIP <cit.> joint feature space. Imagen <cit.> incorporates a large language model <cit.> pre-trained on text corpora and a cascade of diffusion model to achieve photorealistic image generation. 
Latent diffusion model <cit.>, i.e., Stable Diffusion, proposed to perform the denoising process in an auto-encoder's latent space, effectively reducing the required computation resources while retaining generated images' quality and flexibility. Unlike the above works that share parameters during the generation process, eDiff-I <cit.> trained an ensemble of diffusion models specialized for different synthesis stages. Our method is built upon a pre-trained text-to-image model and can be adapted to any tuning-based personalized version. Personalize text-to-image model. While there have been many powerful T2I generative algorithms, it's still unacceptable for individual users to train their models due to the requirements for large-scale data and computational resources, which are only accessible to large companies and research organizations. Therefore, several methods have been proposed to enable users to introduce new domains (new concepts or styles, which are represented mainly by a small number of images collected by users) into pre-trained T2I models <cit.>. Textual Inversion <cit.> proposed to optimize a word embedding for each concept and freeze the original networks during training. DreamBooth <cit.> is another approach that fine-tunes the whole network with preservation loss as regulation. Custom Diffusion <cit.> improves fine-tuning efficiency by updating only a small subset of parameters and allowing concept merging through closed-form optimization. At the same time, DreamArtist <cit.> reduces the input to a single image. Recently, LoRA <cit.>, a technique designed for language model adaptation, has been utilized for text-to-image model fine-tuning and achieved good visual quality. While these methods are mainly based on parameter tuning, several works have also tried to learn a more general encoder for concept personalization <cit.>. With all these personalization approaches in the research community, our work only focuses on tuning-based methods, i.e., DreamBooth <cit.> and LoRA <cit.>, since they maintain an unchanged feature space of the base model. Personalized T2I animation. Since the setting in this report is newly proposed, there is currently little work targeting it. Though it is a common practice to extend an existing T2I model with temporal structures for video generation, existing works <cit.> update whole parameters in the networks, hurting the domain knowledge of the original T2I model. Recently, several works have reported their application in animating a personalized T2I model. For instance, Tune-a-Video <cit.> solves the one-shot video generation task via slight architecture modifications and sub-network tuning. Text2Video-Zero <cit.> introduces a training-free method to animate a pre-trained T2I model via latent wrapping given a predefined affine matrix. A recent work close to our method is Align-Your-Latents <cit.>, a text-to-video (T2V) model which trains separate temporal layers in a T2I model. Our method adopts a simplified network design and verifies the effectiveness of this line of approach in animating personalized T2I models via extensive evaluation on many personalized models. § METHOD In this section, <ref> first introduces preliminary knowledge about the general text-to-image model and its personalized variants. Next, <ref> presents the formulation of personalized animation and the motivation of our method. 
Finally, <ref> describes the practical implementation of the motion modeling module in , which animates various personalized models to produce appealing synthesis. §.§ Preliminaries General text-to-image generator. We chose Stable Diffusion (SD), a widely-used text-to-image model, as the general T2I generator in this work. SD is based on the Latent Diffusion Model (LDM) <cit.>, which executes the denoising process in the latent space of an autoencoder, namely ℰ(·) and 𝒟(·), implemented as VQ-GAN <cit.> or VQ-VAE <cit.> pre-trained on large image datasets. This design confers an advantage in reducing computational costs while preserving high visual quality. During the training of the latent diffusion networks, an input image x_0 is initially mapped to the latent space by the frozen encoder, yielding z_0 = ℰ(x_0), then perturbed by a pre-defined Markov process: q(z_t | z_t-1) = 𝒩(z_t; √(1-β_t)z_t-1, β_t𝐼) for t = 1,…, T, with T being the number of steps in the forward diffusion process. The sequence of hyper-parameters β_t determines the noise strength at each step. The above iterative process can be reformulated in a closed-form manner as follows: z_t = √(α̅_̅t̅)z_0 + √(1-α̅_̅t̅)ϵ, ϵ∼𝒩(0, 𝐼) where α̅_̅t̅ = ∏_i=1^tα_t, α_t = 1 - β_t. Stable Diffusion adopts the vanilla training objective as proposed in DDPM <cit.>, which can be expressed as: ℒ = 𝔼_ℰ(x_0), y, ϵ∼𝒩(0, 𝐼), t‖ϵ - ϵ_θ(z_t, t, τ_θ(y)) ‖_2^2 where y is the corresponding textual description, τ_θ(·) is a text encoder mapping the string to a sequence of vectors. In SD, ϵ_θ(·) is implemented with a modified UNet <cit.> that incorporates four downsample/upsample blocks and one middle block, resulting in four resolution levels within the networks' latent space. Each resolution level integrates 2D convolution layers as well as self- and cross-attention mechanisms. Text model τ_θ(·) is implemented using the CLIP <cit.> ViT-L/14 text encoder. Personalized image generation. As general image generation continues to advance, increasing attention has been paid to personalized image generation. DreamBooth <cit.> and LoRA <cit.> are two representative and widely used personalization approaches. To introduce a new domain (new concepts, styles, etc.) to a pre-trained T2I model, a straightforward approach is fine-tuning it on images of that specific domain. However, directly tuning the model without regularization often leads to overfitting or catastrophic forgetting, especially when the dataset is small. To overcome this problem, DreamBooth <cit.> uses a rare string as the indicator to represent the target domain and augments the dataset by adding images generated by the original T2I model. These regularization images are generated without the indicator, thus allowing the model to learn to associate the rare string with the expected domain during fine-tuning. LoRA <cit.>, on the other hand, takes a different approach by attempting to fine-tune the model weights' residual, that is, training Δ W instead of W. The weight after fine-tuning is calculated as W' = W + αΔ W, where α is a hyper-parameter that adjusts the impact of the tuning process, thus providing more freedom for users to control the generated results. To further avoid overfitting and reduce computational costs, Δ W ∈ℝ^m × n is decomposed into two low-rank matrices, namely Δ W = AB^T, where A ∈ℝ^m × r, B ∈ℝ^n × r, r ≪ m, n. In practice, only the projection matrices in the transformer blocks are tuned, further reducing the training and storage costs of a LoRA model. 
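As a concrete illustration of the update W' = W + αΔ W with Δ W = AB^T, the PyTorch-style sketch below wraps a frozen linear projection with a trainable low-rank residual. It is a minimal sketch of the general LoRA idea rather than the exact implementation used for the models evaluated here; the rank, scaling, and initialization are illustrative choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen nn.Linear plus a trainable low-rank residual: W' = W + alpha * A @ B.T."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # pre-trained weights stay frozen
            p.requires_grad_(False)
        m, n = base.out_features, base.in_features
        self.A = nn.Parameter(torch.zeros(m, rank))           # A in R^{m x r}, zero-init
        self.B = nn.Parameter(torch.randn(n, rank) * 1e-3)    # B in R^{n x r}
        self.alpha = alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Apply Delta W = A @ B.T without materializing the full m x n matrix.
        delta = (x @ self.B) @ self.A.T
        return self.base(x) + self.alpha * delta
```

Because A is zero-initialized, the wrapped layer starts out identical to the frozen base layer, and only the r(m+n) residual parameters need to be trained and shared.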
Compared to DreamBooth which stores the whole model parameters once trained, a LoRA model is much more efficient to train and share between users. §.§ Personalized Animation Animating a personalized image model usually requires additional tuning with a corresponding video collection, making it much more challenging. In this section, we target personalized animation, which is formally formulated as: given a personalized T2I moded, e.g., a DreamBooth <cit.> or LoRA <cit.> checkpoint trained by users or downloaded from CivitAI <cit.> or Huggingface <cit.>), the goal is to transform it into an animation generator with little or no training cost while preserving its original domain knowledge and quality. For example, suppose a T2I model is personalized for a specific 2D anime style. In that case, the corresponding animation generator should be capable of generating animation clips of that style with proper motions, such as foreground/background segmentation, character body movements, etc. To achieve this, one naive approach is to inflate a T2I model <cit.> by adding temporal-aware structures and learning reasonable motion priors from large-scale video datasets. However, for the personalized domains, collecting sufficient personalized videos is costly. Meanwhile, limited data would lead to the knowledge loss of the source domain. Therefore, we choose to separately train a generalizable motion modeling module and plug it into the personalized T2I at inference time. By doing so, we avoid specific tuning for each personalized model and retain their knowledge by keeping the pre-trained weights unchanged. Another crucial advantage of such an approach is that once the module is trained, it can be inserted into any personalized T2I upon the same base model with no need for specific tuning, as validated in the following experiments. This is because the personalizing process scarcely modifies the feature space of the base T2I model, which is also demonstrated in ControlNet <cit.>. §.§ Motion Modeling Module Network Inflation. Since the original SD can only process image data batches, model inflation is necessary to make it compatible with our motion modeling module, which takes a 5D video tensor in the shape of batch × channels × frames × height × width as input. To achieve this, we adopt a solution similar to the Video Diffusion Model <cit.>. Specifically, we transform each 2D convolution and attention layer in the original image model into spatial-only pseudo-3D layers by reshaping the frame axis into the batch axis and allowing the network to process each frame independently. Unlike the above, our newly inserted motion module operates across frames in each batch to achieve motion smoothness and content consistency in the animation clips. Details are demonstrated in the <ref>. Module Design. For the network design of our motion modeling module, we aim to enable efficient information exchange across frames. To achieve this, we chose vanilla temporal transformers as the design of our motion module. It is worth noting that we have also experimented with other network designs for the motion module and found that a vanilla temporal transformer is adequate for modeling the motion priors. We leave the search for better motion modules to future works. The vanilla temporal transformer consists of several self-attention blocks operating along the temporal axis (<ref>). 
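In code, the inflation described above amounts to two complementary reshapes of the 5D video tensor: the pre-trained spatial layers see the frame axis folded into the batch axis, whereas the inserted motion module folds the spatial positions into the batch axis and operates along the frame axis (that second reshape is detailed next). The einops-style sketch below is only an illustration of the tensor bookkeeping, not the released implementation.

```python
import torch
from einops import rearrange

def apply_spatial_layer(layer, video):
    """Run a frozen, pre-trained 2D layer on a (b, c, f, h, w) video, frame by frame."""
    x = rearrange(video, "b c f h w -> (b f) c h w")     # frames folded into the batch
    x = layer(x)                                         # unchanged image-domain layer
    return rearrange(x, "(b f) c h w -> b c f h w", f=video.shape[2])

def apply_motion_module(temporal_block, video):
    """Run an inserted temporal block along the frame axis of a (b, c, f, h, w) video."""
    b, c, f, h, w = video.shape
    x = rearrange(video, "b c f h w -> (b h w) f c")     # length-f sequences per position
    x = temporal_block(x)                                # self-attention across frames
    return rearrange(x, "(b h w) f c -> b c f h w", b=b, h=h, w=w)
```

Here temporal_block stands for any sequence model over the frame axis, e.g. a stack of self-attention blocks with sinusoidal position encoding and a zero-initialized output projection, as described below.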
When passing through our motion module, the spatial dimensions height and width of the feature map z will first be reshaped to the batch dimension, resulting in batch × height × width sequences at the length of frames. The reshaped feature map will then be projected and go through several self-attention blocks, i.e., z = Attention(Q,K,V)=Softmax(QK^T/√(d))· V where Q = W^Qz, K = W^Kz, and V=W^Vz are three projections of the reshaped feature map. This operation enables the module to capture the temporal dependencies between features at the same location across the temporal axis. To enlarge the receptive field of our motion module, we insert it at every resolution level of the U-shaped diffusion network. Additionally, we add sinusoidal position encoding <cit.> to the self-attention blocks to let the network be aware of the temporal location of the current frame in the animation clip. To insert our module with no harmful effects during training, we zero initialize the output projection layer of the temporal transformer, which is an effective practice validated by ControlNet <cit.>. Training Objective. The training process of our motion modeling module is similar to Latent Diffusion Model <cit.>. Sampled video data x_0^1:N are first encoded into the latent code z_0^1:N frame by frame via the pre-trained autoencoder. Then, the latent codes are noised using the defined forward diffusion schedule: z_t^1:N = √(α̅_̅t̅)z_0^1:N + √(1-α̅_̅t̅)ϵ. The diffusion network inflated with our motion module takes the noised latent codes and corresponding text prompts as input and predicts the noise strength added to the latent code, encouraged by the L2 loss term. The final training objective of our motion modeling module is: ℒ = 𝔼_ℰ(x_0^1:N), y, ϵ∼𝒩(0, 𝐼), t‖ϵ - ϵ_θ(z_t^1:N, t, τ_θ(y)) ‖_2^2 Note that during optimization, the pre-trained weights of the base T2I model are frozen to keep its feature space unchanged. § EXPERIMENTS §.§ Implementation Details Training. We chose Stable Diffusion v1 as our base model to train the motion modeling module, considering most public personalized models are based on this version. We trained the motion module using the WebVid-10M <cit.>, a text-video pair dataset. The video clips in the dataset are first sampled at the stride of 4, then resized and center-cropped to the resolution of 256 × 256. Our experiments show that the module trained on 256 can be generalized to higher resolutions. Therefore we chose 256 as our training resolution since it maintains the balance of training efficiency and visual quality. The final length of the video clips for training was set to 16 frames. During experiments, we discovered that using a diffusion schedule slightly different from the original schedule where the base T2I model was trained helps achieve better visual quality and avoid artifacts such as low saturability and flickering. We hypothesize that slightly modifying the original schedule can help the model better adapt to new tasks (animation) and new data distribution. Thus, we used a linear beta schedule, where β_start = 0.00085 and β_end = 0.012, which is slightly different from that used to train the original SD. Evaluations. To verify the effectiveness and generalizability of our method, we collect several representative personalized Stable Diffusion models (<ref>) from CivitAI <cit.>, a public platform allowing artists to share their personalized models. 
The domains of these chosen models range from anime and 2D cartoon images to realistic photographs, providing a comprehensive benchmark to evaluate the capability of our method. Once our module is trained, we plug it into the target personalized models and generate animations with designed text prompts. We do not use common text prompts because the personalized models only generate expected content with specific text distribution, meaning the prompts must have certain formats or contain “trigger words". Therefore, we use example prompts provided at the model homepage in the following section to get the models' best performance. §.§ Qualitative Results We present several qualitative results across different models in <ref>. Due to space limitations, we only display four frames of each animation clip. We strongly recommend readers refer to our homepage for better visual quality. The figure shows that our method successfully animates personalized T2I models in diverse domains, from highly stylized anime (1st row) to realistic photographs (4th row), without compromising their domain knowledge. Thanks to the motion priors learned from the video datasets, the motion modeling module can understand the textual prompt and assign appropriate motions to each pixel, such as the motion of sea waves (3rd row) and the leg motion of the Pallas's cat (7th row). We also find that our method can distinguish major subjects from foreground and background in the picture, creating a feeling of vividness and realism. For instance, the character and background blossoms in the first animation move separately, at different speeds, and with different blurring strengths. Our qualitative results demonstrate the generalizability of our motion module for animating personalized T2I models within diverse domains. By inserting our motion module into the personalized model, can generate high-quality animations faithful to the personalized domain while being diverse and visually appealing. §.§ Comparison with Baselines We compare our method with Text2Video-Zero <cit.>, a training-free framework for extending a T2I model for video generation through network inflation and latent warping. Although Tune-a-Video can also be utilized for personalized T2I animation, it requires an additional input video and thus is not considered for comparison. Since T2V-Zero does not rely on any parameter tuning, it is straightforward to adopt it for animating personalized T2I models by replacing the model weights with personalized ones. We generate the animation clips of 16 frames at resolution 512 × 512, using the default hyperparameters provided by the authors. We qualitatively compare the cross-frame content consistency of the baseline and our method on the same personalized model and with the same prompt (“A forbidden castle high up in the mountains, pixel art, intricate details2, hdr, intricate details"). To more accurately demonstrate and compare the fine-grained details of our method and the baseline, we cropped the same subpart of each result and zoomed it in, as illustrated at the left/right bottom of each frame in <ref>. As shown in the figure, both methods retain the domain knowledge of the personalized model, and their frame-level qualities are comparable. However, the result of T2V-Zero, though visually similar, lacks fine-grained cross-frame consistency when compared carefully. For instance, the shape of the foreground rocks (1st row) and the cup on the table (3rd row) changes over time. 
This inconsistency is much more noticeable when the animation is played as a video clip. In contrast, our method generates temporally consistent content and maintains superior smoothness (2nd, 4th row). Moreover, our approach exhibits more appropriate content changes that align better with the underlying camera motion, further highlighting the effectiveness of our method. This result is reasonable since the baseline does not learn motion priors and achieves visual consistency via rule-based latent warping, while our method inherits knowledge from large video datasets and maintains temporal smoothness through efficient temporal attention. §.§ Ablative Study We conduct an ablative study to verify our choice of noise schedule in the forward diffusion process during training. In the previous section, we mentioned that using a slightly modified diffusion schedule helps achieve better visual quality. Here we experiment with three representative diffusion schedules (<ref>) adopted by previous works and visually compare their corresponding results in <ref>. Among the three diffusion schedules used in our experiments, Schedule A is the schedule for pre-training Stable Diffusion; Schedule B is our choice, which differs from the schedule of SD in how the beta sequence is computed; Schedule C is used in DDPM <cit.> and DiT <cit.> and differs more from SD's pre-training schedule. As demonstrated in <ref>, when using the original schedule of SD for training our motion modeling module (Schedule A), the animation results exhibit sallow color artifacts. This phenomenon is unusual since, intuitively, using the diffusion schedule aligned with pre-training should be beneficial for the model to retain its already-learned feature space. As the schedules deviate more from the pre-training schedule (from Schedule A to Schedule C), the color saturation of the generated animations increases while the range of motion decreases. Among these three configurations, our choice achieves a balance of both visual quality and motion smoothness. Based on these observations, we hypothesize that a slightly modified diffusion schedule in the training stage helps the pre-trained model adapt to new tasks and domains. Our framework's new training objective is reconstructing noise sequences from a diffused video sequence. This can be done frame-wise, without considering the temporal structure of the video sequence, which is exactly the image reconstruction task the T2I model was pre-trained on. Using the same diffusion schedule may therefore mislead the model into treating the task as image reconstruction, which lowers the training efficiency of our motion modeling module responsible for cross-frame motion modeling, resulting in more flickering animations and color aliasing. § LIMITATIONS AND FUTURE WORKS In our experiments, we observe that most failure cases appear when the domain of the personalized T2I model is far from realistic, e.g., a 2D Disney cartoon (<ref>). In these cases, the animation results have apparent artifacts and cannot produce proper motion. We hypothesize this is due to the large distribution gap between the training videos (realistic) and the personalized model. A possible solution to this problem is to manually collect several videos in the target domain and slightly fine-tune the motion modeling module; we leave this to future work.
§ CONCLUSION In this report, we present a practical framework for enabling personalized text-to-image model animation, which aims to turn most of the existing personalized T2I models into animation generators once and for all. We demonstrate that our framework, which consists of a simply designed motion modeling module trained on the base T2I model, can distill generalizable motion priors from large video datasets. Once trained, our motion module can be inserted into other personalized models to generate animated images with natural and proper motions while remaining faithful to the corresponding domain. Extensive evaluation on various personalized T2I models also validates the effectiveness and generalizability of our method. As such, our framework provides a simple yet effective baseline for personalized animation, potentially benefiting a wide range of applications. § ADDITIONAL RESULTS §.§ Model Diversity In <ref>, we show results using the same prompt with the same model, demonstrating that our method does not hurt the diversity of the original model. §.§ Qualitative Results In <ref> and <ref>, we show more results of our method on different personalized models.
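As a concrete, deliberately simplified illustration of the plug-in workflow summarized in the conclusion, the PyTorch sketch below shows one way a temporal self-attention layer can be interleaved with the frozen 2D blocks of a personalized T2I U-Net: it attends only along the frame axis and is zero-initialized so that inserting it leaves the base model's outputs unchanged until the module is trained. The module structure, tensor shapes, and names here are illustrative assumptions, not the report's actual implementation.

import torch
import torch.nn as nn

class TemporalSelfAttention(nn.Module):
    """Attention across the frame axis only; zero-initialized output projection."""
    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.proj = nn.Linear(channels, channels)
        nn.init.zeros_(self.proj.weight)  # zero init: the inserted module starts as an identity
        nn.init.zeros_(self.proj.bias)

    def forward(self, x: torch.Tensor, num_frames: int) -> torch.Tensor:
        # x: (batch * frames, channels, height, width), as produced by the frozen 2D blocks
        bf, c, h, w = x.shape
        b = bf // num_frames
        # fold spatial positions into the batch and expose the frame axis as the sequence
        seq = x.reshape(b, num_frames, c, h * w).permute(0, 3, 1, 2).reshape(b * h * w, num_frames, c)
        normed = self.norm(seq)
        attn_out, _ = self.attn(normed, normed, normed)
        seq = seq + self.proj(attn_out)  # residual connection
        return seq.reshape(b, h * w, num_frames, c).permute(0, 2, 3, 1).reshape(bf, c, h, w)

# Usage: intermediate features of a frozen personalized model for a 16-frame clip.
feats = torch.randn(2 * 16, 320, 32, 32)
motion = TemporalSelfAttention(channels=320)
out = motion(feats, num_frames=16)
assert torch.allclose(out, feats)  # identity mapping before any training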
http://arxiv.org/abs/2307.04671v1
20230710161455
Ultrafast demagnetization in bulk nickel induced by X-ray photons tuned to Ni $M_{3}$ and $L_3$ absorption edges
[ "Konrad J. Kapcia", "Victor Tkachenko", "Flavio Capotondi", "Alexander Lichtenstein", "Serguei Molodtsov", "Przemysław Piekarz", "Beata Ziaja" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.other", "physics.atom-ph", "physics.comp-ph", "physics.optics" ]
Studies of light-induced demagnetization started with the experiment performed by Beaurepaire et al. on nickel. Here, we present theoretical predictions for X-ray-induced demagnetization of nickel, with X-ray photon energies tuned to its M_3 and L_3 absorption edges. We show that a specific feature in the density of states of the d-band of Ni, a sharp peak located just above the Fermi level, strongly influences the predicted change of the magnetic signal, making it stronger than in the previously studied case of cobalt. We believe that this finding will inspire future experiments investigating magnetic processes in X-ray irradiated nickel. Ultrafast demagnetization in bulk nickel induced by X-ray photons tuned to Ni M_3 and L_3 absorption edges Beata Ziaja August 12, 2023 =========================================================================================================== § INTRODUCTION Ultrafast control of magnetization with lasers remains a hot topic in the laser and solid-state physics communities. Apart from traditional terahertz and optical lasers, state-of-the-art XUV and X-ray free-electron lasers <cit.> are now also used for demagnetization studies. The main advantage of these lasers is the possibility of resonantly exciting core electrons to the magnetically sensitive d-band. As the electronic occupation in the d-band determines the magnetization of the material, the X-ray-induced electronic excitation changes the population of spin-up and spin-down electrons in the band. This results in a decrease of the total magnetic moment in the material <cit.>. In our previous studies <cit.>, we modeled the experimentally observed ultrafast decrease of the X-ray scattering signal from X-ray-irradiated cobalt, which reflected a transient decrease of the cobalt magnetic moment. The XSPIN simulation tool was developed to follow the progressing demagnetization of cobalt. Our studies have shown that the signal decrease can be explained by ultrafast electron-driven demagnetization. In this paper, we apply our model to another widely used magnetic material, nickel. The magnetic moments of nickel and cobalt are 0.66 μ_B and 1.70 μ_B, respectively <cit.>. As nickel's Curie temperature (627 K) <cit.> differs strongly from that of cobalt (1400 K), such a study can reveal a potential effect of the Curie temperature on the demagnetization dynamics. Laser-triggered demagnetization of nickel has been studied in various papers, see, e.g., <cit.>.
Interestingly, so far we have not found any relevant experimental data on Ni demagnetization recorded at XFEL facilities. Therefore, the actual comparison between Co and Ni demagnetization will be performed with theoretical predictions only. §.§ Simulation scheme As in our previous works <cit.>, we use our recently developed XSPIN code to obtain predictions for the 'magnetic signal' from X-ray-irradiated nickel. The electronic density of states is obtained from density functional theory (DFT) calculations performed with the Vienna Ab initio Simulation Package (VASP) <cit.>. The average absorbed doses considered in the simulations are chosen not to cause structural changes (atomic dislocations) in the irradiated materials. Therefore, the equilibrium density of states (DOS) can be used throughout the whole simulation (the “frozen atom” assumption). The occupations of electronic levels change during the material's exposure to the X-ray pulse, as excited electrons leave the band for the continuum due to photoionization, impact ionization and the Auger process. Later, they relax back to the band. The electrons heated up by the pulse remain hot on the femtosecond timescales considered in this study, as their temperature can only decrease through an exchange with the lattice, which follows on longer ((sub)ps) timescales. Moreover, due to the assumed common thermalization of all electrons, both spin-up and spin-down ones (following a Fermi-Dirac distribution with a common temperature and chemical potential), the numbers of spin-up and spin-down electrons will differ from the corresponding values in the initial state. This thermalization-induced spin-flip process, changing the population of spin-up and spin-down electrons, leads to a change of the magnetic signal. For the simulation, we use a simulation box with N = 512 Ni atoms. We average over 100 000 realizations in the Monte Carlo module. The XFEL pulse is assumed to have a Gaussian temporal profile with a duration of 70 fs FWHM (full width at half maximum) for the M-edge case (M_3 = 66.2 eV) and 50 fs FWHM for the L-edge case (L_3 = 852.7 eV). The pulse durations were chosen so as to enable a comparison of the XSPIN predictions for nickel with our previous results for cobalt presented in <cit.>. For more details on the simulation parameters, see Tab. <ref>. § RESULTS §.§ Spin-polarized electronic density of states from density functional theory calculations In order to obtain the spin-polarized electronic density of states for bulk nickel, we performed first-principles calculations, using the projector augmented wave (PAW) potentials <cit.> and the generalized gradient approximation (GGA) in the Perdew, Burke, and Ernzerhof (PBE) parametrization <cit.>, implemented in the VASP code <cit.>. For the summation over the reciprocal space, we used a 27 × 27 × 27 Monkhorst-Pack k-point grid <cit.>. The spin-polarized density of states for fcc bulk Ni (calculated for the experimental bulk value of the lattice constant, a = 3.524 Å) is presented in Figure <ref>. It is in agreement with other DFT calculations (see also, e.g., <cit.>). For comparison, the density of states for fcc bulk Co (with a = 3.545 Å <cit.>) used in Refs. <cit.> is also presented. The calculated magnetic moments of nickel and cobalt are 0.62 μ_B and 1.61 μ_B, respectively, i.e., in good agreement with those of <cit.>.
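To connect the spin-polarized DOS to the quoted magnetic moments, the short sketch below shows the post-processing step in schematic form: at zero temperature, the moment per atom is the integral of the spin-up minus spin-down DOS up to the Fermi level. The two-Gaussian DOS used here is a toy placeholder, not the actual VASP output, so the numbers it produces are for illustration only.

import numpy as np

E = np.linspace(-10.0, 5.0, 3001)            # energy grid relative to E_F = 0 (eV)
E_F = 0.0

def gaussian_dos(center, width, weight):
    return weight * np.exp(-0.5 * ((E - center) / width) ** 2) / (width * np.sqrt(2 * np.pi))

# toy spin-split d-band: the minority (spin-down) band is shifted up through E_F
dos_up = gaussian_dos(center=-1.9, width=1.2, weight=5.0)
dos_dn = gaussian_dos(center=-1.3, width=1.2, weight=5.0)

occupied = E <= E_F                           # T = 0 occupation
n_up = np.trapz(dos_up[occupied], E[occupied])
n_dn = np.trapz(dos_dn[occupied], E[occupied])
print(f"n_up = {n_up:.3f}, n_dn = {n_dn:.3f}, moment = {n_up - n_dn:.3f} mu_B / atom")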
§.§ Electronic properties of X-ray irradiated nickel Below, we present results on the transient distributions of excited electrons and holes obtained with the XSPIN code for nickel and for cobalt (cf. also <cit.>) irradiated with X-rays tuned to their M absorption edges (∼ 67 eV and ∼ 61 eV, respectively). Figure <ref> shows: (a) the transient number of polarized high-energy electrons (with energies > 15 eV), (b) the number of low-energy electrons (with energies < 15 eV), (c) the transient number of deep-shell holes (with the indicated polarization of the electrons previously occupying the holes), and (d) the electronic temperature. The photoexcitation dynamics in Co and Ni look qualitatively similar, with a stronger excitation in Co (Figure <ref>a-b) than in Ni. Collisional relaxation in Ni is also weaker than in Co (Figure <ref>c), which leads to a higher electronic temperature in Ni when compared to Co (Figure <ref>d). §.§ Generalized transient magnetization In order to follow the changing magnetic properties of irradiated materials, we introduced in <cit.> a generalized transient magnetization which reflects the disparity between the electronic populations of the spin-up and spin-down electronic subsystems in the d-band: M(t) = ∑_E_i = ħω_0 - Δ^ħω_0 + Δ[ N^h_↑ (E_i,↑) - N^h_↓ (E_i,↓)], The probed region in the d-band extends between ħω_0 - Δ and ħω_0 + Δ, where ħω_0 = ħω_γ - E_edge. Here, ħω_γ is the incoming photon energy, and E_edge is the energy of the resonant core p-level. The summation goes over discrete levels. Note that we neglect here the subleading effect of the different coupling of polarized light to spin-up and spin-down electrons (XMCD). Electronic populations are calculated assuming a Fermi-Dirac distribution of electrons. Knowing, at every time step t, the electronic temperature T_e and the electronic chemical potential μ, we have N^h_σ (E) = 1 - N_e,σ^low(E) and N_e,σ^low(E) = { 1 + exp[ (E-μ)/(k_B T_e)] }^-1. The time evolution of the squared generalized magnetization M^2(t) (normalized to its initial value before the pulse at t→-∞; cf. also (<ref>)) for different absorbed doses is presented in Figures <ref> and <ref>. The values of Δ in Ni were taken from experimental measurements. They are: Δ = 0.7 eV for the nickel M-edge <cit.> and Δ=1.0 eV for the nickel L-edge <cit.>. One can see that the decrease of magnetization becomes stronger with increasing absorbed dose, and also changes strongly with the incoming photon energy around the absorption edge. Interestingly, if the probed region in the d-band includes the sharp peak in the DOS of spin-down electrons near the Fermi level (X-ray photon energies of 67 eV and 853 eV for the M- and L-edge, respectively; see Tab. <ref>), the observed magnetization change is much stronger than in the case when this peak is not included (X-ray photon energies of 68 eV and 854 eV for the M- and L-edge, respectively). The reason is that the peak “provides” a large number of unoccupied states for the resonant excitation from the p-level, which leads to a stronger decrease of the transient magnetization. Note that the decrease of magnetization is stronger for Ni than for Co (cf. Figure 4 from Ref. <cit.> and Figure 3 from Ref. <cit.> at the absorbed dose of 0.93 eV/atom). The reason is that the cobalt DOS does not show such a peak close to the Fermi level, and the reduction of magnetization is therefore suppressed. This can also explain the lower Curie temperature of nickel compared to cobalt.
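A minimal numerical sketch of the definition of M(t) above follows: hole occupations of the discrete spin-up and spin-down levels inside the probed window [ħω_0 - Δ, ħω_0 + Δ] are computed from a common Fermi-Dirac distribution and their difference is summed. The level energies, chemical potential, and window centre used here are placeholder assumptions; in XSPIN these quantities come from the DFT density of states and the traced electron dynamics.

import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

def fermi_dirac(E, mu, T_e):
    return 1.0 / (1.0 + np.exp((E - mu) / (k_B * T_e)))

def generalized_magnetization(levels_up, levels_dn, mu, T_e, hw0, delta):
    # keep only the discrete levels inside the probed window [hw0 - delta, hw0 + delta]
    win_up = levels_up[np.abs(levels_up - hw0) <= delta]
    win_dn = levels_dn[np.abs(levels_dn - hw0) <= delta]
    holes_up = np.sum(1.0 - fermi_dirac(win_up, mu, T_e))
    holes_dn = np.sum(1.0 - fermi_dirac(win_dn, mu, T_e))
    return holes_up - holes_dn

# placeholder spin-resolved level energies (eV, relative to E_F) and pulse-heated electrons
rng = np.random.default_rng(0)
levels_up = np.sort(rng.uniform(-3.0, 2.0, 200))
levels_dn = np.sort(rng.uniform(-2.5, 2.5, 200))
for T_e in (300.0, 5000.0, 20000.0):
    M = generalized_magnetization(levels_up, levels_dn, mu=0.0, T_e=T_e, hw0=0.8, delta=0.7)
    print(f"T_e = {T_e:7.0f} K  ->  M = {M:+.3f}")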
§.§ Calculation of the mSAXS signal As in <cit.>, we can calculate the mSAXS signal strength from the generalized magnetization. It is obtained as: S = a ∫ M^2(t) I(t) dt, where I(t) is the X-ray pulse intensity and a is a proportionality coefficient. The pulse fluence is then: F = ∫ I(t) dt. It is proportional to the absorbed dose, D ∝ F, where the proportionality coefficient depends on the material parameters as well as on the photon energy. The dose dependence of the normalized signal strength, S_norm = S(D)[D_0/S(D_0)], for the corresponding experimental Δ values is presented in Figure <ref>. The normalization follows Ref. <cit.>, with the reference dose D_0=10^-4 eV/atom for all considered cases. As observed for the generalized magnetization, the signal strength strongly depends on whether the probed region in the d-band includes the sharp peak in the DOS of spin-down electrons near the Fermi level, being distinctly higher when it does not. This also explains the stronger decrease of S_norm for nickel than for cobalt. § CONCLUSIONS We provided theoretical predictions for the electronic properties of X-ray-irradiated Ni at photon energies close to the M_3 or L_3 absorption edge, as well as for the resulting magnetization change and the mSAXS scattering strength. The results obtained indicate the same ultrafast demagnetization mechanism (caused by electronic excitation and relaxation) as in cobalt, occurring on a similar timescale. However, due to the difference in the DOS structure of the d-band, the degree of demagnetization at an equivalent dose would be higher in Ni than in Co. This finding is also consistent with the lower Curie temperature of nickel compared to cobalt. As in our previous studies on Co, we did not consider atomic motion here and kept the electronic band structure unchanged. This assumption does not hold for high absorbed doses, which may induce ultrafast structural changes in irradiated materials. The model should then be developed further, enabling the inclusion of atomic dynamics and of the transient band structure. Nevertheless, we expect that these theoretical predictions will inspire experimental studies on ultrafast X-ray-induced demagnetization of nickel, a benchmark magnetic material with various applications. The authors thank Leonard Müller and Andre Philippi-Kobs for helpful discussions at the early stages of the XSPIN model development. K.J.K. thanks the Polish National Agency for Academic Exchange for funding in the frame of the Bekker program (PPN/BEK/2020/1/00184). V.T., A.L., S.M., B.Z. acknowledge the funding received from the Collaboration Grant of the European XFEL and the Institute of Nuclear Physics, Polish Academy of Sciences.
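To make the definitions of S and F above concrete, here is a small numerical sketch that integrates a toy squared-magnetization trace against a Gaussian pulse profile; the shape of M^2(t) and all parameter values are assumptions for illustration, not XSPIN output.

import numpy as np

t = np.linspace(-150.0, 150.0, 2001)                  # time grid, fs
fwhm = 70.0                                           # fs, M-edge case
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
I = np.exp(-0.5 * (t / sigma) ** 2)                   # pulse intensity, arbitrary units

def m2_toy(t, tau=60.0, floor=0.4):
    # toy squared magnetization: drops from 1 towards `floor` as the pulse arrives
    drop = 0.5 * (1.0 + np.tanh(t / tau))
    return 1.0 - (1.0 - floor) * drop

S = np.trapz(m2_toy(t) * I, t)                        # proportional to the mSAXS signal strength
F = np.trapz(I, t)                                    # proportional to the fluence / absorbed dose
print(f"S = {S:.1f}, F = {F:.1f}, S/F = {S / F:.3f}")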
http://arxiv.org/abs/2307.04113v1
20230709080545
Mitosis Detection from Partial Annotation by Dataset Generation via Frame-Order Flipping
[ "Kazuya Nishimura", "Ami Katanaya", "Shinichiro Chuma", "Ryoma Bise" ]
cs.CV
[ "cs.CV" ]
Mitosis Detection from Partial Annotation K. Nishimura et al. Kyushu University, Fukuoka, Japan [email protected] Kyoto University, Kyoto, Japan Mitosis Detection from Partial Annotation by Dataset Generation via Frame-Order Flipping Kazuya Nishimura1 Ami Katanaya2 Shinichiro Chuma2 Ryoma Bise1 Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023 ========================================================================================== Detection of mitosis events plays an important role in biomedical research. Deep-learning-based mitosis detection methods have achieved outstanding performance with a certain amount of labeled data. However, these methods require annotations for each imaging condition. Collecting labeled data involves time-consuming human labor. In this paper, we propose a mitosis detection method that can be trained with partially annotated sequences. The basic idea is to generate a fully labeled dataset from the partial labels and train a mitosis detection model with the generated dataset. First, we generate an image pair not containing mitosis events by frame-order flipping. Then, we paste mitosis events onto the image pair by alpha-blending pasting and generate a fully labeled dataset. We demonstrate the performance of our method on four datasets, and we confirm that our method outperforms the compared methods which use partially labeled sequences. Code is available at <https://github.com/naivete5656/MDPAFOF>. § INTRODUCTION Fluorescent microscopy is widely used to capture cell nuclei behavior. Mitosis detection is the task of detecting the moment of cell division from time-lapse images (the dotted circles in Fig. <ref>). Mitosis detection from fluorescent sequences is important in biological research, medical diagnosis, and drug development. Conventionally, tracking-based methods <cit.> and tracking-free methods <cit.> have been proposed for mitosis detection. Recently, deep-learning-based mitosis-detection methods have achieved outstanding performance <cit.>. However, training deep-learning methods requires a certain amount of annotation for each imaging condition, such as the type of cells, the type of microscopy, and the density of cells. Collecting a sufficient amount of labeled data covering the variability of cell type and cell density is time-consuming and labor-intensive. Unlike cell detection and segmentation, which aim to recognize objects in a single image, mitosis detection aims to identify events from a time series of images. Thus, it is necessary to observe differences across multiple frames to annotate mitosis events. Comprehensively annotating mitosis events is time-consuming, and annotators may miss mitosis events. Thus, the annotations must be carefully reviewed to ensure that they are comprehensive. Partial annotation has been used as a way to reduce the annotation costs of cell and object detection <cit.>. Fig. <ref> shows an example of partially annotated frames. Some mitosis events are annotated (a red-dotted circle), and others are not (light-blue-dotted circles). The annotation costs are low because the annotator only needs to plot a few mitotic positions. In addition, this style of annotation allows for missing annotations. Therefore, it would be effective for mitosis detection. Unlike full annotation, however, partial annotation cannot treat unannotated areas as regions not containing mitosis events, since these regions may contain unlabeled mitosis events (Fig. <ref>). These regions naturally affect training in the partial-annotation setting.
To avoid the effect of unlabeled objects in unlabeled regions, Qu et al. <cit.> proposed to use a Gaussian masked mean squared loss, which calculates the loss only around the annotated regions. The loss function works in tasks in which foreground and background features have clearly different appearances, such as cell detection. However, it does not work for mitosis detection since the appearance of several non-mitotic cells is similar to that of mitotic cells; it produces many false positives. In this paper, we propose a cell-mitosis detection method for fluorescent time-lapse images that generates a fully labeled dataset from partially annotated sequences. We then train a mitosis detection model with the generated dataset. To generate the fully labeled dataset, we must address two problems: (1) there is no label indicating regions that do not contain mitotic cells, and (2) only a few mitosis annotations are available. We could easily generate regions not containing mitotic cells by using one image twice. However, such regions do not contribute to distinguishing mitotic cells from non-mitotic cells since the data do not show natural cell motions. For the training to be effective, the regions not containing mitotic cells should show the natural movements of cells. To generate such regions, we propose frame-order flipping, which simply flips the frame order of a consecutive frame pair. As shown in the white rectangles in Fig. <ref>, we can convert a mitosis event into a cell fusion by the flipping operation. Hence, the flipped pair is a region not containing mitosis events. Even though the frame order is flipped, the non-mitotic cells still exhibit natural time-series motion, as shown in the yellow rectangles in Fig. <ref>. In addition, we can make the most of a few partial annotations by using copy-and-paste-based techniques. Unlike regular copy-and-paste augmentation <cit.>, which is used for supervised augmentation of instance segmentation and relies on object mask annotations, we only have point-level annotations. Thus, we propose to use an alpha-blending pasting technique which naturally blends two images. Experiments conducted on four types of fluorescent sequences demonstrate that the proposed method outperforms other methods which use partial labels. Related work. Some methods have used partially labeled data to train models <cit.>. Qu <cit.> proposed a Gaussian masked mean squared loss, which calculates the loss around the annotated areas. To more accurately identify negative and positive samples, positive unlabeled learning has been used for object detection <cit.>. These methods apply positive unlabeled learning to candidates detected by using partial annotation, in order to identify whether the candidates are labeled objects or background. However, positive unlabeled learning requires a positive prior, and since the candidates detected with partial annotation include many false positives and the appearances of mitosis events and background are similar in the mitosis detection task, it is difficult to estimate this prior. Consequently, these methods do not work well for mitosis detection. § METHOD: MITOSIS DETECTION WITH PARTIAL LABELS Our method aims to detect the coordinates and timing (t, x, y) of mitosis events from fluorescent sequences. For training, we use time-lapse images ℐ = {I_t}_t=1^T and partial labels (a set of annotated mitosis cells). Here, I_t denotes an image at frame t, and T is the total number of frames.
Our method generates a fully labeled dataset 𝒟_p= { (I'_t-1, I'_t), 𝒫_t' }^T-1_t=1 from the time-lapse images ℐ and the partial labels and then trains a mitosis detection model f_θ with the generated dataset. Here, I'_t is a generated image, and 𝒫_t' is the set of mitotic coordinates contained in (I'_t-1, I'_t). Since our method trains the network with partial labels, it can eliminate the cost of checking for missed annotations. §.§ Labeled dataset generation Fig. <ref> shows an overview of our dataset generation. We randomly pick a pair of consecutive frames (I_t-1, I_t) from the time-lapse images ℐ. Since the pair may contain unannotated mitosis events, we forcibly convert it into a negative pair (i.e., a pair which does not contain mitosis events) by using frame-order flipping. Next, we paste mitosis events onto the flipped pair using alpha-blending pasting and obtain a generated pair (I'_t-1, I'_t). Since we know the pasted locations, we can obtain the mitosis locations 𝒫'_t of the generated pair. Negative pair generation with frame-order flipping: In this step, we generate a pair not containing mitotic cells by using a simple augmentation-based frame-order flipping. Fig. <ref> shows an example of a pair of images (I_t-1, I_t). The pair may contain mitosis events. If we simply assumed that the pair contains no mitotic cells, the unannotated events would affect the training of the mitosis detection model f_θ. To prevent the pair from containing mitosis events, we flip the frame order and treat the flipped pair (I_t, I_t-1) as a negative pair. Since mitosis is the event in which a cell divides into two daughter cells, flipping the order transforms a mitosis event into an event in which two cells fuse into one (Fig. <ref>). The flipped event can be treated as a non-mitotic event. Note that the motivation behind using frame flipping is to be able to utilize pixels showing the motions of non-mitotic cells as negatives by transforming mitosis events into other events. Even if the order is flipped, the movements of non-mitotic cells remain characteristic of non-mitotic cells, and we consider these pixels effective for training on the negative label. Mitosis label utilization with alpha-blending pasting: Next, we paste mitosis events onto the flipped pair by using copy-and-paste techniques in order to utilize the positive labels effectively. Copy-and-paste augmentation has been used for supervised augmentation of instance segmentation <cit.>. Unlike instance segmentation with object masks, we only have locations (t, x, y). A simple solution is to crop images around the mitosis position and copy and paste them onto the target image, as in CutMix <cit.>. However, the cropped image naturally contains surrounding objects, and the generated image appears unnatural. Unnatural images cause the detection network to make biased predictions and reduce generalization performance. To avoid this problem, we propose alpha-blending pasting with a Gaussian blending mask. We blend two images by preserving the pixel values at the center of the pasted patch and blurring them toward its edges. First, we crop the image around the positive annotations and obtain a set of cropped pairs 𝒞 = {(C_t-1^i, C_t^i )}^N_i=0, and we initialize (I'_t-1, I'_t)=(I_t, I_t-1) and 𝒫_t'= {}. Here, N is the total number of partial annotations, while C_t-1^i and C_t^i are the images before and after the mitosis of the i-th annotation (Fig. <ref>). Define I_t'(l⃗^j) and I_t-1'(l⃗^j) as the cropped pair of patches at the j-th random spatial location l⃗^j.
We crop each image centered at l⃗^j to the same size as C_t^i. We update the randomly selected patches I_t'(l⃗^j), I_t-1'(l⃗^j) by blending a randomly selected cropped pair (C_t-1^i, C_t^i) with the following formula: I_t'(l⃗^j) = (1-α) ⊙I_t'(l⃗^j) + α⊙C_t^i, where α is a Gaussian blending mask (Fig. <ref>); I_t-1'(l⃗^j) is updated analogously with C_t-1^i. We generate the blending mask by blurring a binary mask around the annotation with a Gaussian filter. We use a random sigma value for the Gaussian filter. Then, we add the paste location l⃗^j to the set 𝒫_t'. We repeat this process k times, where k is chosen randomly. §.§ Mitosis detection with generated dataset We modified a heatmap-based cell detection method <cit.> to work as a mitosis detection method. Fig. <ref> is an illustration of our mitosis detection model. Given two consecutive frames (I'_t-1, I'_t), the network outputs a heatmap Ĥ_t. We treat the channel axis as the time axis for the input. The first channel is I'_t-1, and the second is I'_t. First, we generate individual heatmaps H_t^j for each pasted coordinate l⃗^j = (l^j_x, l^j_y). H_t^j is defined as H_t^j(p_x, p_y) = exp( -((l_x^j - p_x)^2 + (l_y^j - p_y)^2)/σ^2), where p_x and p_y are the coordinates of H_t^j and σ is a hyperparameter that controls the spread of the peak. The ground-truth heatmap at t is generated by taking the pixel-wise maximum over the individual heatmaps, H_t = max_j (H^j_t) (H_t in Fig. <ref>). The network is trained with the mean squared error loss between the ground truth H_t and the output of the network Ĥ_t. We can find the mitosis positions by finding the local maxima of the heatmap. § EXPERIMENTS Dataset: We evaluated our method on four datasets. The first set is HeLa <cit.>, in which live cell images of HeLa cells expressing H2B-GFP were captured at 1100 × 700 resolution <cit.> [We used the publicly available CTC dataset <http://celltrackingchallenge.net/>. We only use HeLa since the number of mitosis events for the other cell types is small.]. Each sequence contains 92 fluorescent images with 141 mitosis events on average. The second set is ES, in which live cell images of mouse embryonic stem cells expressing H2B-mCherry were captured at 1024 × 1024 resolution. Each sequence contains 41 fluorescent images with 33 mitosis events on average. The third set is ES-D, in which mouse embryonic stem cells expressing H2B-mCherry were induced to differentiate and live cell images were captured. Each sequence contains 61 fluorescent images with 18 mitosis events on average. The fourth set is Fib, in which live cell images of mouse fibroblast cells expressing H2B-mCherry were captured at 1024 × 1024 resolution. Each sequence contains 42 fluorescent images with 11 mitosis events on average. Each dataset consists of four sequences of images. We performed four-fold cross-validation in which two sequences were used as training data, one as validation data, and one as test data. As shown in Fig. <ref>, the appearance and density differ depending on the dataset. Implementation details: We implemented our method within the PyTorch framework <cit.> and used a UNet-based architecture <cit.> for the mitosis-detection network. The model was trained with the Adam optimizer with a learning rate of 1e-3. σ, which controls the spread of the heatmap, was set to 6. The crop size around the positive annotations was 40 pixels. We randomly chose the number of pasting operations k between 1 and 10. We used random flipping, random cropping, and brightness change for augmentation.
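The sketch below walks through the three generation steps just described (frame-order flipping, alpha-blending pasting with a Gaussian mask of random sigma, and ground-truth heatmap construction) in NumPy/SciPy. Array shapes, helper names, and the toy random images are assumptions for illustration; the authors' released code should be consulted for the exact implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def flip_pair(frame_prev, frame_next):
    # frame-order flipping: a mitosis (one cell -> two) becomes a fusion (two -> one)
    return frame_next.copy(), frame_prev.copy()

def gaussian_alpha(patch_size, sigma):
    # blur a central delta to obtain a mask that is 1 at the centre and fades towards the edges
    mask = np.zeros((patch_size, patch_size), dtype=np.float32)
    mask[patch_size // 2, patch_size // 2] = 1.0
    mask = gaussian_filter(mask, sigma)
    return mask / mask.max()

def paste(target, patch, cy, cx, alpha):
    # alpha-blending pasting of a cropped mitosis patch at centre (cy, cx)
    h, w = patch.shape
    y0, x0 = cy - h // 2, cx - w // 2
    roi = target[y0:y0 + h, x0:x0 + w]
    target[y0:y0 + h, x0:x0 + w] = (1.0 - alpha) * roi + alpha * patch
    return target

def heatmap(shape, centres, sigma=6.0):
    # ground-truth heatmap: pixel-wise maximum over per-annotation Gaussians
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    maps = [np.exp(-((x - xs) ** 2 + (y - ys) ** 2) / sigma ** 2) for y, x in centres]
    return np.max(maps, axis=0) if maps else np.zeros(shape, dtype=np.float32)

# toy example: a 256x256 pair, one 40x40 mitosis patch pasted at a random location
rng = np.random.default_rng(0)
I_prev, I_next = rng.random((256, 256)), rng.random((256, 256))
C_prev, C_next = rng.random((40, 40)), rng.random((40, 40))
I1, I2 = flip_pair(I_prev, I_next)
alpha = gaussian_alpha(40, sigma=rng.uniform(5, 15))
cy, cx = rng.integers(40, 216, size=2)
I1, I2 = paste(I1, C_prev, cy, cx, alpha), paste(I2, C_next, cy, cx, alpha)
H = heatmap(I2.shape, [(cy, cx)], sigma=6.0)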
Evaluation metrics: We evaluated our method using the F1 score <cit.>, which is widely used in mitosis detection. Given ground-truth coordinates and detected coordinates, we performed one-by-one matching. If the distance of the matched pair was within spatially 15 pixels and temporally 6, we associated the closest coordinate pairs. We treated the matched pair as true positives (TP), unassociated coordinates as false positives (FP), and unassociated ground-truth coordinates as false negatives (FN). Comparisons: We conducted four comparisons that involved training the model with partially labeled data. For the first method, we trained the model by treating unlabeled pixels as non-mitosis ones (Baseline <cit.>). The second method used the Gaussian masked loss (GM <cit.>). The masked loss was calculated on the masked pixels around the positive-label pixels. Thus, the method ignored unlabeled pixels. The third method used positive unlabeled learning to identify mitosis from candidates obtained by the detection model trained with the masked loss (PU <cit.>). The fourth method generated pseudo-labels from the results of positive unlabeled learning and retrained the detection model with the pseudo-label (PU-I <cit.>). In Table <ref>, we compared our method with previous methods in one and five-shot settings. We used N samples per sequence in the N-shot settings. For a robust comparison, we sampled one or five mitosis annotations under five seed conditions and took the average. Overall, our method outperformed all compared methods in F1 metric. GM <cit.>, PU <cit.>, and PU-I <cit.> are designed for detecting objects against simple backgrounds. Therefore, these methods are not suited to a mitosis detection task and are inferior to the baseline. The baseline <cit.> treats unlabeled pixels as non-mitosis cell pixels. In the partially labeled setting, unlabeled pixels contain unannotated mitosis events, and unannotated mitosis affects performance. Unlike cell detection, mitosis detection requires identifying mitosis events from various non-mitotic cell motions, including motions that appear mitotic appearances. Although GM <cit.> can ignore unlabeled mitosis pixels with the masked loss, it is difficult to identify such non-mitosis motions. Therefore, GM estimates produce many false positives. PU <cit.> uses positive unlabeled learning to eliminate false positives from candidates obtained from the detection results with partial labels. However, positive unlabeled learning requires a positive prior in the candidates and a certain amount of randomly sampled positive samples. Since the candidates contain many false positives, the positive prior is difficult to estimate. In addition, there is no guarantee that positive unlabeled learning can work correctly with the selected N-shot annotations. Moreover, since positive unlabeled learning does not work in the mitosis detection task, PU-I <cit.> can not select accurate pseudo labels. Unlike these methods, our method can estimate mitosis events accurately. Since our method generates a fully labeled dataset from a partial label, it effectively uses a few partial annotations. Effectiveness of each module: We performed an ablation study on the HeLa dataset to investigate the effectiveness of the proposed module. We used random augmentation (i.e., random elastic transformation <cit.>, brightness change, and gaussian noise) instead of using frame-order flipping (FOF). We generated I_t^aug by augmenting I_t and input the pair (I_t, I_t^aug) to the network. 
In the w/o ABP setting, we directly pasted cropped images on the target image as in CutMix <cit.>. Table <ref> demonstrates that the proposed modules improve mitosis detection performance. Fig. <ref> shows examples of the estimation results for each condition. Without the FOF setting, the detection model estimates a high value for all moving cells, leading to over-detection. Without the ABP setting, the detection model overfits the directly pasted image. The directly pasted image tends to include unnatural boundaries on the edge, leading to missed detections in real images. Robustness against missing annotations: We confirmed the robustness of the proposed method against missing annotations on the ES dataset. We changed the missing annotation rate from 0% to 30%. A comparison with the supervised method in terms of F1-score is shown in Fig. <ref>. The performance of the supervised method deteriorates as the percentage of missing labels increases, whereas the performance of the proposed method remains steady. Since our method flips the frame order, we can avoid the effects of missing annotations. Appearance of generated dataset: Fig. <ref> shows an example of the generated image pair. The cropped mitosis image pairs were pasted on the red-dotted circle. It can be seen that the borders of the original image and the pasted image have been synthesized very naturally. § CONCLUSION We proposed a mitosis detection method using partially labeled sequences with frame-order flipping and alpha-blending pasting. Our frame-order flipping transforms unlabeled data into non-mitosis labeled data through a simple flipping operation. Moreover, we generate various positive labels with a few positive labels by using alpha-blending pasting. Unlike directly using copy-and-paste, our method generates a natural image. Experiments demonstrated that our method outperforms other methods that use partially annotated sequences on four fluorescent microscopy images. Acknowledgements: This work was supported by JSPS KAKENHI Grant Number JP21J21810 and JST ACT-X Grant Number JPMJAX21AK, Japan. splncs04
http://arxiv.org/abs/2307.06299v1
20230712165332
Towards a Certified Proof Checker for Deep Neural Network Verification
[ "Remi Desmartin", "Omri Isac", "Grant Passmore", "Kathrin Stark", "Guy Katz", "Ekaterina Komendantskaya" ]
cs.LO
[ "cs.LO", "cs.LG", "cs.PL" ]
R. Desmartin, O. Isac et al. Heriot-Watt University, Edinburgh The Hebrew University of Jerusalem Imandra Inc., Austin, Texas, USA Towards a Certified Proof Checker for Deep Neural Network Verification Remi Desmartin1Both authors contributed equallyFunded by Imandra Inc. Omri Isac2^*Grant Passmore3 Kathrin Stark1Guy Katz2 Ekaterina Komendantskaya2Funded by EPSRC grant AISEC (EP/T026952/1) and NCSC grant “Neural Network Verification: in search of the missing spec.” August 12, 2023 ================================================================================================================================================================================================================================================================================ Recent developments in deep neural networks (DNNs) have led to their adoption in safety-critical systems, which in turn has heightened the need for guaranteeing their safety. These safety properties of DNNs can be proven using tools developed by the verification community. However, these tools are themselves prone to implementation bugs and numerical stability problems, which make their reliability questionable. To overcome this, some verifiers produce proofs of their results which can be checked by a trusted checker. In this work, we present a novel implementation of a proof checker for DNN verification. It improves on existing implementations by offering numerical stability and greater verifiability. To achieve this, we leverage two key capabilities of Imandra, an industrial theorem prover: its support of infinite precision real arithmetic and its formal verification infrastructure. So far, we have implemented a proof checker in Imandra, specified its correctness properties and started to verify the checker's compliance with them. Our ongoing work focuses on completing the formal verification of the checker and further optimizing its performance. § INTRODUCTION Applications of deep neural networks (DNNs) have grown rapidly in recent years, as they are able to solve computationally hard problems. This has led to their wide use in safety-critical applications like medical imaging <cit.> or autonomous aircraft <cit.>. However, DNNs are hard to trust for safety-critical tasks, notably because small perturbations in their inputs – whether from faulty sensors or malicious adversarial attacks – may cause large variations of their outputs, leading to potentially catastrophic system failures <cit.>. To circumvent this issue, the verification community has developed techniques to guarantee DNN correctness using formal verification, employing mathematically rigorous techniques to analyze DNNs' possible behaviours in order to prove it safe and compliant e.g. <cit.>. Along with these DNN verifiers, the community created an annual competition <cit.> and a standardisation of an ad-hoc format <cit.>. Usually, DNN verifiers consider an optimized DNN and prove input-output properties, e.g., that for inputs within a delimited region of the input space, the network's output will be in a safe set. Besides verifying DNNs at a component level, verification has the power to verify larger systems integrating DNNs. Integration of DNN verifiers in larger verification frameworks has been studied as well <cit.>, and it requires the DNN verifiers to provide results that can be checked by the system-level verifier. Unfortunately, DNN verifiers are susceptible to errors as any other program. 
One source of problems is floating-point arithmetic used for their internal calculations. While crucial for performance, floating-point arithmetic also leads to numerical instability and is known to compromise soundness <cit.>. As the reliability of DNN verifiers becomes questionable, it is necessary to check that their results are not erroneous. When a DNN verifier concludes there exists a counterexample for a given property, this result can be easily checked by evaluating the counterexample over the network and ensuring the property's violation. However, when a verifier concludes that no counterexample exists, ensuring the correctness of this result becomes more complicated. To overcome this, DNN verifiers may produce proofs for their results, allowing an external program to check their soundness. Producing proofs is a common practice <cit.>, and was recently implemented on top of the Marabou DNN verifier <cit.>. Typically, proof checkers are simpler programs than the DNN verifiers, and hence much easier to inspect and verify. Moreover, while verifiers are usually implemented in performance-oriented languages such as C++, trusted proof checkers could be implemented in languages suitable for verification. Functional programming languages (FPL), such as Haskell, OCaml and Lisp are well-suited for this task, thanks to their deep relationship with logics employed by theorem provers. In fact, some FPLs, such as Agda <cit.>, Coq <cit.>, ACL2 <cit.>, Isabelle <cit.> and Imandra <cit.> are also theorem provers in their own right. Implementing and then verifying a program in such a theorem prover allows to bridge the verification gap, i.e. minimise the discrepancies that can exist between the original (executable) program and its verified (abstract) model <cit.>. In this paper, we describe our ongoing work to design, implement and verify a formally-verifiable and infinitely-precise proof checker for DNN verifiers. We have implemented an adaptation of a checker of proofs produced by the Marabou DNN verifier <cit.> to Imandra <cit.>, a functional programming language coupled with its own industrial-strength theorem prover. Three key features make Imandra a suitable tool: infinite precision real arithmetic, efficient code extraction and the first-class integration of formal verification. Support for infinite precision real arithmetic prevents numerical instability. The ability to extract verified Imandra code to native OCaml improves scalability as it can then benefit from the standard OCaml compiler's optimizations. Finally, with Imandra's integrated formal verification, we can directly analyze the correctness of the proof checker we implement. Note that Imandra as a DNN verifier has already been researched <cit.>. Contributions. Contrary to previous implementations prioritising scalability, our checker can be formally verified by Imandra's prover and its precision is infinite. This increases the checker's reliability and overcomes a main barrier in integrating DNN verifiers in system-level checkers. Since reliability usually compromises scalability, our proof checker supports several checking modes, with different approaches to balance the two. This is done along two orthogonal axes, by optionally: [(i)] * using verified data structures at the expense of computation speed; * accepting some parts of the proof without checking. Our ongoing work is currently focused on formally verifying the proof checker. 
So far, we have managed to verify that our checker complies with linear algebra theorems, and we attempt to leverage these results to verify the proof checker as a whole in the future. Paper organisation. The rest of this paper is organized as follows. In Section <ref> we provide relevant background on DNN verification and proof production. In Section <ref> and Section <ref> we respectively describe our proof checker, and our ongoing work towards formally verifying it using Imandra. In Section <ref> we conclude our work, and describe our plans for completing our work and for the future. § BACKGROUND §.§ DNN Verification Throughout the paper, we focus on DNNs with ReLU activation functions, though all our work can be extended to DNNs using any piecewise-linear activation functions (e.g., max pooling). We refer the reader to Appendix <ref> for a formal definition of DNNs and activation functions. An example of a DNN appears in Fig. <ref>. The DNN verification problem is the decision problem of deciding whether for a given DNN 𝒩:ℝ^m→ℝ^k and a property P⊆ℝ^m+k, there exists an input x∈ℝ^m such that 𝒩(x)=y ∧ P(x,y). If such x exists, the verification query is satisfiable (); otherwise it is unsatisfiable (). Typically, P represents an erroneous behaviour, thus an input x satisfying the query serves as a counterexample and indicates the network acts as expected. Due to its linear and piecewise-linear structure, a DNN verification query can be reduced to an instance of Linear Programming (LP) <cit.>, representing the affine functions of the DNN, and piecewise-linear constraints that represent the activation functions and the property. This reduction makes algorithms for solving LP instances, coupled with a case-splitting approach for handling the piecewise-linear constraints <cit.>, a prime scheme for DNN verification, which we call LP-based DNN verifiers. The widely used Simplex algorithm <cit.>, is typically used by such verifiers. Based on the problem constraints, the algorithm initiates a matrix A called the tableau, a variable vector x and two bound vectors u,l such that l≤ x ≤ u. The Simplex algorithm then attempts to find a solution to the system: Ax = 0 ∧ l ≤ x ≤ u or concludes that none exists. For clarity, we denote u(x_i),l(x_i) as the upper and lower bounds of the variable x_i, instead of u_i,l_i. Consider the DNN in Fig. <ref> and the property P that holds if and only if (x_1, x_2)∈ [-1,1]^2 ∧ y∈[2,3]. We later show a proof of for this query. We assign variables x_1, x_2, y to the input and output neurons. For all i∈1,2,3 we assign a couple of variables f_i,b_i for the inputs and outputs of the neurons v_i, where f_i=(b_i). We then get the linear constraints and bounds (where some bounds were arbitrarily fixed for simplicity): b_1 = 2x_1, b_2 = x_2, b_3 = f_2 - f_1, y = f_3 -1≤ x_1, x_2, b_2 ≤ 1, 0 ≤ f_2 ≤ 1, -2≤ b_1,b_3 ≤ 2, 0 ≤ f_1, f_3 ≤ 2, 2 ≤ y ≤ 3 and the piecewise linear constraints: ∀ i ∈1,2,3: f_i = (b_i) Then, an LP-based DNN verifier initiates the input for the Simplex algorithm: A = [ 2 0 -1 0 0 0 0 0 0; 0 1 0 -1 0 0 0 0 0; 0 0 0 0 -1 -1 1 0 0; 0 0 0 0 0 0 0 -1 1; ] u = [ 1 1 2 1 2 2 1 3 3 ]^⊺ x = [ x_1 x_2 b_1 b_2 b_3 f_1 f_2 f_3 y ]^⊺ l = [ -1 -1 -2 -1 -2 0 0 0 2 ] ^⊺ In addition to the piecewise-linear constraints ∀ i ∈1,2,3: f_i = (b_i). One of the key tools used by the Simplex algorithm, and consequently by DNN verifiers, is dynamic bound tightening. This procedure allows deducing tighter bounds for each variable and is crucial for the solver's performance. 
For example, using the above equation f_3=y and the bound u(y) = 2, we can deduce u(f_3)= 2, and further use this bound to deduce other bounds as well. The piecewise-linear constraints introduce rules for tightening bounds as well, which we call Theory-lemmas. For instance, the output variable f_3 of the constraint of the above example is upper bounded by the input variable b_3, whose upper bound is 2. The list of supported lemmas appears in Appendix <ref>. The case-splitting approach is used over the linear pieces of some piecewise-linear constraints, creating several sub-queries with each adding new information to the Simplex algorithm. For example, when performing a split over a constraint of the form y=(x), two sub-queries are created. One is enhanced with y=x∧ x≥0, and the other with y=0∧ x≤0. The use of case-splitting also induces a tree structure for the verification algorithm, with nodes corresponding to the splits applied. On every node, the verifier attempts to conclude the satisfiability of the query based on its linear constraints. If it concludes an answer, then this node represents a leaf. In particular, a tree with all leaves corresponding to an result of Simplex is a search tree of an verification query. §.§ Proof Production for DNN Verification Proof production for is straightforward using a satisfying assignment. On the other hand, when a query is , the verification algorithm induces a search tree, where each leaf corresponds to an result of the Simplex algorithm for that particular leaf. Thus, a proof of is comprised of a matching proof tree where each leaf contains a proof of the matching Simplex result. Proving results of Simplex is based on a constructive version of the Farkas Lemma <cit.>, which identifies the proof for LP instances. Formally, it was proven <cit.> that: Let A ∈ M_m × n (ℝ) and l,x,u ∈ℝ^n, such that A· x = 0 and l ≤ x ≤ u, exactly one of these two options holds: * The case: ∃ x ∈ℝ^n such that A · x = 0 and l ≤ x ≤ u. * The case: ∃ w ∈ℝ^m such that for all l ≤ x ≤ u, w^⊺· A · x < 0, whereas 0 · w = 0. Thus, w is a proof of the constraints' unsatisfiability. Moreover, these vectors can be constructed while executing the Simplex algorithm. To construct the proof vectors, two column vectors are assigned to each variable x_i, denoted x_i, x_i, which are updated during bound tightening. These vectors are used to prove the tightest upper and lower bounds of x_i deduced during the bound tightenings performed by Simplex, based on u,l and A. This mechanism was designed and implemented <cit.>, on top of the Marabou DNN verifier <cit.>. Supporting the complete tree structure of the verification algorithm is done by constructing the proof tree in a similar manner to the search tree — every split performed in the search directly creates a similar split in the proof tree, with updates to the equations and bounds introduced by the split. Proving theory lemmas is done by keeping details about the bound that invoked the lemma together with a Farkas vector proving its deduction and the new learned bound, and adding them to the corresponding proof tree node. § THE IMANDRA PROOF CHECKER Our proof checker is designed to check proofs produced by the Marabou DNN verifier <cit.>, to the best of our knowledge the only proof producing DNN verifier. When given a Marabou proof of as a JSON <cit.> file, the proof checker reconstructs the proof tree using datatypes encoded in Imandra. The proof tree consists of two different node types — a proof node and a proof leaf. 
Both node types contain a list of lemmas and a corresponding split. In addition, a node contains a list of its children, and a leaf contains a contradiction vector, as constructed by Theorem <ref>. This enables the checker to check the proof tree structure at the type-level. The proof checker also initiates a matrix A called a tableau, vectors of upper and lower bounds u,l and a list of piecewise-linear constraints (see Section <ref>). The checking process consists of traversing the proof tree. For each node, the checker begins by locally updating u,l and A according to the split, and optionally checking the correctness of all lemmas. Lemma checking is similar to checking contradictions, as shown in Example <ref> below (see Appendix <ref> for details). If the node checked is not a leaf, then the checker will check that all its childrens' splits correspond to some piecewise-linear constraint of the problem i.e. one child has a split of the form y=x∧ x≥0 and the other of the form y=0∧ x≤0 for some constraint y=(x). If the checker certifies the node, it will recursively check all its children, passing changes to u,l and A to them. When checking a leaf, the checker checks that the contradiction vector w implies , as stated in Theorem <ref>. As implied from the theorem, the checker will first create the row vector w^⊺· A, and will compute the upper bound of its underlying linear combination of variables w^⊺· A · x. The checker concludes by asserting this upper bound is negative. The checker then concludes that the proof tree represents a correct proof if and only if all nodes passed the checking process. Consider the simple proof in Fig. <ref>. The root contains a single lemma and each leaf contains a contradiction vector, which means the verifier performed a single split. In addition, the proof object contains the tableau A, the bound vectors u,l, and the constraints as presented in Example <ref>. The proof checker begins by checking the lemma of the root. It does so by creating the linear combination [ 0 0 1 0 ]^⊺· A · x = -b_3-f_1+f_2. As the lemma is invoked by the upper bound of b_3, the checker uses the equivalent equation b_3 = f_2 - f_1, which gives the upper bound u(b_3) = u(f_2)-l(f_1) = 1. We can indeed deduce the bound u(f_3) = 1 based on the constraint f_3=(b_3), so the lemma proof is correct. Then, the checker certifies that the splits f_3 = 0 ∧ b_3 ≤ 0 and f_3 = b_3 ∧ b_3 ≥ 0 correspond to the two splits of f_3=(b_3). The checker then begins checking the left leaf. It starts by updating l(b_3)=0 and adding the equation f_3 = b_3 as the row [ 0 0 0 0 1 0 0 -1 0 ] to A. Then, the checker checks the contradiction vector by computing [ 0 0 1 -1 1 ]^⊺· A · x = -f_1+f_2-y. The upper bound of this combination is -l(f_1)+u(f_2)-l(y)=-1 which is negative, thus proving for the leaf according to Theorem <ref>. Checking the right leaf is done similarly. After checking all nodes, the checker asserts the proof tree indeed proves for the whole query. §.§.§ Implementation in Imandra, OCaml Extraction and Evaluation. Porting the proof checker from C++ to Imandra necessitates taking into account the trade-off between scalability and computation. The choice of data structures for common objects – like vectors – is essential in the balance between scalability and efficiency <cit.>. In this work, we experiment with two different implementations for vectors: native OCaml lists, and sparse vectors using Imandra's built-in Map data type, based on binary search trees. 
The latter has better performance but the former makes it easier to verify, so for now our verification efforts focus on the native list implementation (see Appendix <ref> for more details). Imandra's logic includes theories for arbitrary precision integer and real arithmetic, which are implemented using OCaml's built-in Zarith library <cit.>. As a result, the Imandra implementation of the checker supports arbitrary precision real arithmetic with low overhead. Executing code within Imandra's reasoning environment is helpful during the implementation and verification process, but is not optimized for performance. To that end, imandra-extract is a facility to extract native OCaml code that can be compiled – and optimized – with standard OCaml compilers. The extracted code retains Imandra's semantics, meaning that it still uses infinite precision real arithmetic. An initial comparison of the execution time for checking the same proofs from the ACAS-Xu benchmark <cit.> in the C++ implementation and in the extracted OCaml code with native lists shows that our implementation is about 150 times slower than the original implementation but stays within a reasonable time, i.e. less than 40 minutes for all the examples ran (see Appendix <ref>). Further optimizations and a comprehensive benchmark are ongoing work. § SPECIFICATION OF THE PROOF CHECKER'S CORRECTNESS We aim to verify the two main checks performed by the proof checker when traversing the proof tree (see Section <ref>): contradictions and theory lemmas. Contradictions checking We want to verify that our proof checker identifies correctly when a contradiction vector is a valid proof of , thus satisfying Theorem <ref> (case 2.). Formally, the specification can be given as: For all contradiction vector w, tableau A, bounds u,l, and bounded input l ≤ x ≤ u, if the upper bound of w^T · A · x is negative, then x cannot satisfy the constraints A · x = 0 ∧ l ≤ x≤ u. The Imandra implementation of this specification is given in Listing <ref>. [ frame=single, caption=High-level theorem formalising correctness of contradiction checking. The function check_contradiction is a key component of the proof checker which should return true iff the linear combination of the tableau and contradiction vectors has a negative upper bound., label=lst:contradiction_thm] theorem contra_correct x contra tableau u_bounds l_bounds = is_bounded x u_bounds l_bounds check_contradiction contra tableau u_bounds l_bounds ==> not (null_product tableau x) 0.9 Theory lemmas The goal is to prove that each theory lemma within the proof, indeed corresponds to one of the theory lemmas (Appendix <ref>). Proving the specification necessitates guiding Imandra by providing supporting lemmas, in our case properties of linear algebra. After proving these intermediary lemmas, Imandra's proof automation can apply them automatically, or we can manually specify which lemma to apply. So far we have defined and proved that our checker is coherent with known properties of linear algebra (e.g. Listing <ref>). Our current work focuses on building on top of these lemmas to fully prove the checker's correctness. [frame=single, caption=Definition of lemmas proved in Imandra; the first lemma dot_product_coeff defines the homogeneity of the dot-product operation; it is used to prove the second lemma by using the apply annotation, label=lst:verification] lemma dot_product_coeff x y c = dot_product x (list_mult y c) = c *. dot_product x y [@@auto] lemma dot_product_coeff_eq x y c = dot_product x y = 0. 
==> dot_product x (list_mult y c) = 0. [@@auto][@@apply dot_product_coeff x y c] 0.9 § CONCLUSION AND FUTURE WORK We have implemented a checking algorithm for proofs generated by a DNN verifier in the functional programming language of Imandra, enabling the checking algorithm to be infinitely precise and formally verifiable by Imandra's prover. Compared to previous work, our implementation presents two new guarantees: it avoids numerical instability by using arbitrary-precision real numbers instead of floating-point numbers; and its correctness can be formally verified as it is implemented in a theorem prover. As expected, adding safety guarantees comes at a cost of performance, but the extraction of native OCaml minimises the overhead compared to the unverified C++ implementation. Furthermore, using an FPL checker to check proofs produced by a DNN verifier is a first step towards integrating component-level DNN verification into the system-level. Our immediate future work is to continue the verification of the proof checker. In addition, we intend to identify cases where the existing checker implementation fails (e.g. due to numerical instability) and ours correctly checks the proof. Investigating further optimizations is also a promising direction by implementing better performance data structures, such as AVL trees. Appendix § DEEP NEURAL NETWORKS Formally, a DNN is a function 𝒩:ℝ^m→ℝ^k which is a composition of n layers L_0,...,L_n-1. Each layer L_i consists of s_i ∈ℕ nodes, denoted v^1_i,...,v^s_i_i. The assignment for the j^th node in the 1 ≤ i < n-1 layer is computed as v^j_i =f( l=1s_i-1∑ w_i,j,l· v^l_i-1 + b^j_i ) for some non-linear function f:ℝ→ℝ, called the activation function. The neurons in the last, output layer are computed in a similar manner, without using f. The parameters w_i,j,l and b^j_i are predetermined and are called the weights and biases of 𝒩, respectively. One of the most common activation functions is the rectified linear unit (), defined as (x) = max(x,0). § CHECKING THEORY LEMMAS. Empirically, the majority of the proof size and the proof checking process is used for storing the theory lemmas and checking them. Thus, we decided to enable two checking modes, allowing different balances of scalability and reliability. The modes are [(i)] * a complete checking mode in which lemmas correctness are checked, thus prioritising reliability; and * a partial checking mode in which lemmas claims are used without checking, thus prioritising scalability. In both modes, the checker iterates through the list of lemmas and updates u,l locally. If the complete checking mode is enabled, the checker needs to check in each iteration, that the Farkas vector w corresponds to the details of the bound invoked the lemma, and that it can be used to by a theory lemma to update the learned bound. To do so, the checker creates the row vector w^⊺· A, which is equivalent some linear combination w^⊺· A · x := 0=j∑c_j· x_j. Suppose the lemma claims to prove some (say, upper) bound of variable x_i to be of value z. Then, as shown in <cit.> the checker checks that for the equation x_i = j≠ i∑c_j· x_j + (c_i+1)· x_i, the upper bound of x_i is indeed z. If so, the checker continues to pattern match the lemma to any of the theory lemmas for some piecewise-linear constraints. In order to support a lemma, it is required to be hard coded in the checker. 
Note that the bound computed by the checking process uses the bound vectors u,l only, whereas the bound value z can be deduced using bound tightening performed by the DNN verifier, which can be much tighter than the bounds in u,l. The theory lemmas currently supported by our proof checker are all those that can be learned using a constraint of the form f = ReLU(b) = max(0,b). In some cases, these lemmas can also be derived using a linear combination, if the corresponding equation has been introduced (i.e., the equation f = b). The lemmas are: * For a positive l(f), l(b) := l(f). * For a positive l(b), l(f) := l(b). * For any u(f), u(b) := u(f). * For non-positive u(b), u(f) := 0. * For a positive u(b), u(f) := u(b). § NATIVE LIST V. SPARSE VECTOR IMPLEMENTATION This appendix details the pros and cons of the two data structures used for implementing vectors: native lists and sparse vectors (with an underlying map/BST). Performance. Access to random elements, an operation that is often used in the proof checker, is faster for sparse vectors: the complexity of accessing a random element in a BST is O(h) (where h is the length of the longest path from the root) against O(n) for the inductive native list definition. This performance advantage can be seen in the benchmark in Table <ref>, Appendix <ref>. Verification. Native lists have the benefit of having verified functions and properties available out of the box. However, operations on native lists often have to consider the case of lists of different lengths. This can be done in the code with error handling, or in the verification phase by adding preconditions on list lengths to all verification goals. As a result, code complexity and reasoning difficulty are increased. Sparse vectors are total functions, so there is no need to verify dimensions of vectors and matrices; however, we need to prove basic properties before we can start reasoning about them. For now we have focused our verification effort on native lists. Listings <ref> and <ref> show the implementation of the same function using native lists and sparse vectors. Notice that the native list implementation uses explicit induction (in the function comput_row_upper_bound') and has to include the case where lists have different lengths in its pattern matching, whereas the sparse vector implementation has more succinct (and more efficient) random element access using M.get, and the size of vectors is irrelevant. § EVALUATION ON ACAS-XU PROOFS This table shows the performance for checking proofs generated by Marabou on several verification tasks from the ACAS-Xu benchmark <cit.>. Each task is identified by a network identifier (e.g. N(2,9)) and a property number (e.g. p3). The performance of the existing proof checker for Marabou proofs written in C++ <cit.> is compared with our implementation in Imandra. Our implementation is evaluated with different vector implementations (native lists and sparse vectors, as discussed in Section <ref>) and in both verification modes, with and without checking the theory lemmas' correctness (as discussed in Section <ref>). The best mode of our implementation uses sparse vectors with no theory lemma checking; it is about twice as slow as the original implementation. The best-performing full check of the proof, using native lists, is about 150 times slower than the C++ checker. This performance loss does not come as a surprise, as arithmetic over arbitrary-precision reals is computationally more expensive than fixed-precision floating-point arithmetic. 
One useful insight for our ongoing optimization work is that the sparse vector mode is faster than the native list mode when theory lemmas are checked, but slower when they are not. This guides us to find inefficient code in the latter configuration.
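To illustrate the trade-off behind this observation, here is a minimal OCaml sketch of the two access patterns (our own illustration; the names nth_native and get_sparse are hypothetical and are not taken from the checker's code):

module IntMap = Map.Make (Int)

(* Native-list access: walks the list, so it is O(n) in the index. *)
let nth_native (v : float list) (i : int) : float =
  match List.nth_opt v i with Some x -> x | None -> 0.

(* Map-backed sparse vector: absent indices read as 0., access is O(log n). *)
type sparse = float IntMap.t

let get_sparse (v : sparse) (i : int) : float =
  match IntMap.find_opt i v with Some x -> x | None -> 0.

Random access favours the map-backed representation, while simple structural recursion (and the list lemmas available out of the box) favours native lists, which is consistent with the measurements discussed above.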
http://arxiv.org/abs/2307.04079v1
20230709011140
Projective Rectangles
[ "Rigoberto Florez", "Thomas Zaslavsky" ]
math.CO
[ "math.CO", "Primary 51E26, Secondary 05B15, 05B35, 05C22, 51A30, 51E20" ]
Dept. of Mathematical Sciences, The Citadel, Charleston, South Carolina 29409 [email protected] Dept. of Mathematical Sciences, Binghamton University, Binghamton, New York 13902-6000 [email protected] A projective rectangle is like a projective plane that has different lengths in two directions. We develop the basic theory of projective rectangles including incidence properties, projective subplanes, configuration counts, a partial Desargues's theorem, a construction from projective planes, and alternative formulations. In sequels we study harmonic conjugation and the graphs of lines and subplanes. 2010 Mathematics Subject Classification: Primary 51E26; Secondary 05B15, 05B35, 05C22, 51A30, 51E20. Projective Rectangles Rigoberto Flórez and Thomas Zaslavsky August 12, 2023 § INTRODUCTION A projective rectangle is like a projective plane, but narrower than it is tall. More precisely, it is like the set of points on a certain kind of family of lines in a projective plane, with their induced lines. Very precisely, it is an axiomatic incidence structure based on adapting axioms of projective geometry. Projective rectangles are found in all known harmonic matroids, such as full algebraic matroids. Harmonic matroids are matroids within which there is harmonic conjugation <cit.>; their definition was inspired by Lindström's article <cit.> about abstract harmonic conjugation. Harmonic conjugation applied to complete lift matroids of group expansions <cit.> of a triangle (for instance, L_2^k, Example <ref>) led us to structures that looked like vertical strips in projective planes—whence the name “projective rectangle” and the impulse to find a general theory of this idea in terms of incidence geometry. Projective rectangles themselves are almost examples of harmonic matroids, seemingly falling short only in special lines, as we prove in the sequel <cit.>. An indication of what we accomplish in this article: First, the axioms (Section <ref>) and basic consequences for incidence geometry (Section <ref>) and counting (Section <ref>). Especially, we see that a projective rectangle, if it is not a projective plane, contains a multitude of maximal projective planes; we call them its “planes”. Section <ref> develops partial Desarguesian properties of projective rectangles, which satisfy limited versions of the two halves of Desargues's Theorem. In Section <ref> we show that the construction based on a subplane and a special point, alluded to above, actually works to produce projective rectangles in planes that are Pappian, i.e., coordinatized by a field; we do not know how far that subplane construction generalizes. The following section treats the narrowest projective rectangles, which are the simplest and best understood. Next are two sections that give alternative viewpoints: in Section <ref> we see that a projective rectangle is essentially a Paschian transversal design and thus is equivalent to a special kind of orthogonal array, and in Section <ref> we take the approach of projective duality by interchanging points and lines, which may suggest new properties but which we have not studied deeply. We have only an elementary understanding of projective rectangles in general, as is shown by the list of significant open problems in Section <ref>. In sequels we treat adjacency graphs and harmonic conjugation. One concerns the graphs of adjacency of lines and of planes <cit.>. 
Notably, in projective rectangles that are not projective planes the graph of planes, where adjacency means having an ordinary line in common, has striking internal structure that presents a tantalizing vision of higher dimensionality. The other sequel <cit.> explores abstract harmonic conjugation as a theme linking harmonic matroids and projective rectangles. In one direction, a projective rectangle is almost a harmonic matroid. In the other direction, a harmonic matroid contains a projective rectangle if it contains a matroid of a finite-field expansion of a triangle, in particular if it contains a Reid cycle matroid. Our personal interest is mainly in finite systems, but many results apply to infinite projective rectangles. For instance, Section <ref> encompasses infinite systems, while Section <ref> requires finiteness. Our viewpoint is influenced by matroid theory but is largely that of incidence geometry; matroid theory is not needed to read this paper. We wish to acknowledge the inspiration of the elegant and deep short papers <cit.> of Bernt Lindström. Lindström's ideas, as further developed by the first author in his doctoral dissertation and <cit.>, led to this study of projective rectangles. § PROJECTIVE RECTANGLES An incidence structure is a triple (,ℒ,ℐ) of sets with ℐ⊆×ℒ. The elements of are points, the elements of ℒ are lines. A point p and a line l are incident if (p,l) ∈ℐ. A set P of points is said to be collinear if all points in P are in the same line. We say that two distinct lines intersect in a point if they are incident with the same point. A projective rectangle is an incidence structure (,ℒ,ℐ) that satisfies the following axioms: * Every two distinct points are incident with exactly one line. * There exist four points with no three of them collinear. * Every line is incident with at least three distinct points. * There is a special point D. A line incident with D is called special. A line that is not incident with D is called ordinary, and a point that is not D is called ordinary. * Each special line intersects every other line in exactly one point. * If two ordinary lines l_1 and l_2 intersect in a point, then every two lines that intersect both l_1 and l_2 in four distinct points, intersect in a point. A complete quadrilateral is an incidence structure that consists of four lines, no three concurrent, and their six points of intersection. A nearly complete quadrilateral is like a complete quadrilateral but with only five of the intersection points; the sixth intersection point may or may not exist. Axiom (A<ref>) states that almost every nearly complete quadrilateral in a projective rectangle is complete. This is a partial Pasch axiom (e.g., see <cit.>), not the full Pasch axiom because it has an exception when either of the first two lines is special; then the remaining two lines may or may not be concurrent. This exception is what admits projective rectangles that are not projective planes. Section <ref> has more discussion of the significance of Axiom (A<ref>). Notation: We write pq for the unique line that contains two points p and q. After we establish the existence of projective planes in , we use the notation abc… to mean the unique line (if abc… are collinear) or plane (if they are coplanar but not collinear) that contains the points abc…. The projective planes are some familiar examples of projective rectangles. A projective plane is called a trivial projective rectangle. 
In particular the Fano plane F_7 is the smallest projective rectangle (see Theorem <ref> Part (<ref>)). The non-Fano configuration is not a projective rectangle; it fails Axiom (A<ref>). The matroid L_2^k is another example of a projective rectangle (see Figure <ref>). It has m=3 special lines. Let A:= { a_g | g ∈_2^k }∪{D }, B:= { b_g | g ∈_2^k }∪{D } and C:= { c_g | g ∈_2^k }∪{D }, where we think of _2^k as a multiplicative group, writing gh for the group operation. Let L_2^k be the simple matroid of rank 3 defined on the ground set E:= A∪ B∪ C by its rank-2 flats. The non-trivial rank-2 flats are A, B, C, which are the special lines, and the sets {a_g, b_g h, c_h } with g and h in _2^k, which are the ordinary lines. We note that L_2^k is the complete lift matroid of the group expansion of a triangle, i.e., L_0(_2^k) in the language of <cit.>. We say more about projective rectangles with m=3 and matroids similar to L_2^k in Section <ref>. § PROPERTIES OF PROJECTIVE RECTANGLES In this section we study essential properties of projective rectangles. We begin with basic facts; then we prove that the projective rectangle contains projective planes and we conclude with a section of counting formulas for later use. §.§ Fundamental properties If a projective rectangle with exactly m special lines has one of them with n points, then we say that the order of is (m,n). We do not assume m or n is finite unless we so state. In Theorem <ref> we prove m≤ n; we also prove that every special line has the same number of points, that every ordinary line has the same number of points, and many other elementary facts about points and lines. (If we define ν := n-1 and μ := m-1, then when the projective rectangle is a projective plane, ν=μ= the order of the plane as customarily defined; that is, one less than the number of points in a line.) The following result states basic properties of a projective rectangle. If is a projective rectangle of order (m,n), then the following hold in : * The point set of ∖ D is partitioned by all special lines deleting D. * There are at least three special lines and four ordinary lines. Moreover, there are at least seven points. * If l is a line and p is a point not in l, then the number of distinct lines incident with p intersecting l equals the number of points on l. * Through each ordinary point there passes exactly one special line. * All ordinary lines have the same number of points. The number of points in an ordinary line is equal to the number of special lines, that is, m. * All special lines have the same number of points, i.e., n points, and the same number of ordinary points, i.e., n-1. * There are exactly m(n-1) ordinary points. * The number of lines incident with an ordinary point is equal to the number of points in a special line, that is, n. The number of ordinary lines that contain each ordinary point is n-1. * The number of points in a special line is at least the number of points in an ordinary line; that is, n ≥ m. * There are exactly (n-1)^2 ordinary lines. * For a given point p in an ordinary line l, there are n-2 ordinary lines intersecting l at p. Proof of Part (<ref>). By Axiom (A<ref>), every point p ∈∖ D belongs to the unique special line pD. Proof of Part (<ref>). From Axiom (A<ref>) we know that in there are four points, no three of them collinear. If one is D, each other one with D generates a special line, all of which are distinct by noncollinearity. 
If none of them is D, the points generate six distinct lines, of which at most two can contain D because no three of the four points are collinear. Thus, the four remaining lines are ordinary lines. Since in one of the ordinary lines there are at least three points, these points form with D three special lines. We have proved that in there are at least three special lines and three ordinary lines. By Axiom (A<ref>), each special line contains at least two ordinary points, so there are at least seven points. Now consider two special lines s, s' and two ordinary points p_1,p_1' on s and p_1',p_2' on s'. The lines p_ip'_j are four distinct ordinary lines. We prove Part (<ref>). From Part (<ref>) we can deduce that in there are a non-incident ordinary point and ordinary line, also that there are a non-incident ordinary point and special line. Let q ∈ l and p∉ l. From (A<ref>) there is exactly one line incident with p that intersects l at q, and all such lines are distinct. We prove Parts (<ref>) and (<ref>). Given an arbitrary ordinary line l, we know by (A<ref>) that each point in l together with D determines a unique special line. Every special line is generated in this way, by (A<ref>). Thus, there is a bijection between the special lines and the points in l. This implies the number of points in any ordinary line equals the number of special lines. We prove Parts (<ref>) and (<ref>). We suppose that l_1 and l_2 are special lines in with n_1 and n_2 points, respectively. Let p be a point non-incident with either of those lines. Part (<ref>) implies that there are n_1 distinct lines intersecting l_1 that are incident with p. Those n_1 lines also intersect l_2. Indeed, one of those lines is special and the remaining (n_1-1) lines intersects l_2 because they are ordinary. Therefore, n_1 ≤ n_2. Similarly, n_2 ≤ n_1. This proves that all special lines have the same number of points. Deducting 1 for the special point D gives the number of ordinary points on a special line. Proof of Part (<ref>). The number of special lines is m, Part (<ref>) says the number of ordinary points in each special line equals n-1 and Part (<ref>) says the special lines partition the ordinary points. Proof of Part (<ref>). We suppose that p is an ordinary point with exactly k incident lines. Let l be a special line with n points and p∉l. From Part (<ref>) we know that there are exactly n distinct lines intersecting l that are incident with p. This implies that k≥ n. We want to prove that k = n. Suppose by contradiction that there is another line l_1 incident with p and not intersecting l. It is clear the l_1 must be an ordinary line. That is a contradiction, because an ordinary always intersect special lines. By Part (<ref>) every special line has n-1 ordinary points, and by definition there are m special lines. Proof of Part (<ref>). Let p be a point in an ordinary line. Two ordinary points in two special lines give rise to a unique ordinary line. Since every special line has n points and one of them is D, it is easy to see that the two special lines give rise to (n-1)^2 ordinary lines. Those are all the ordinary lines that intersect the two special lines. Since every ordinary line intersects every special line, we conclude that there are no more ordinary lines in . Proof of Part (<ref>). Since p is a point in an ordinary line l, from Part (<ref>) there are n lines incident with p. Only one of those n lines is special; the other n-1 are not. This implies that there are n-2 ordinary lines intersecting l at p. 
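As an illustrative sanity check (ours, not part of the original text), the counts above can be instantiated for the matroid L_2^2 of Example <ref>, which has m = 3 and n = 2^2 + 1 = 5:

m(n-1) = 3 · 4 = 12 ordinary points (the points a_g, b_g, c_g),
(n-1)^2 = 4^2 = 16 ordinary lines (the triples {a_g, b_gh, c_h}),
m = 3 points on each ordinary line, and
n - 1 = 4 ordinary lines (plus one special line) through each ordinary point.

All four values agree with a direct count from the definition of L_2^k with k = 2.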
§.§ Projective subplanes We show that a projective rectangle is a combination of projective planes, in the strong sense that every two intersecting ordinary lines are lines of a substructure that is a projective plane. Before our results, though, we have to clarify the notion of substructure of an incidence structure (,,). An incidence substructure of (,,) is an incidence structure (',',') in which ' ⊆, ' ⊆, and ' = |'×', i.e., the incidence relation is the same as in the superstructure but restricted to the elements of the substructure. In particular, if (',',') is a projective plane, we call it a subplane of (,,). In a projective rectangle a subplane may contain an ordinary line and all its points; we call that kind full. A full subplane necessarily has order m-1. A subplane need not be full; it also need not be a maximal subplane, for instance if it is a proper subplane of a full subplane. In fact, that is the only way a subplane can fail to be maximal, as we will see in Theorem <ref>. The special point D is very special, as are the special lines. In a projective rectangle , the special point D is a point of every full subplane. Also, for every special line s and every full subplane π, s∩π is a line of π. A full subplane π contains at least two lines, l and l', which intersect at a point p ∈π, and at least one is ordinary, say l. If l' is ordinary, then every special line s intersects both l and l' at different points, unless s is the special line s_p on p. These two points of s determine a line of π, which is the intersection of s with π. Thus, for every special line except possibly s_p, s ∩π is a line of π. If l' is special, or rather if l'=s'∩π for some special line s', then there is at least one point p' on l' that is neither p nor D. Let q be a point in l ∖ p; then π has a line m determined by p' and q, which is ordinary since it contains not only p ∈ s_p but also q ∉ s_p. Then we can replace l' by m and have the case of two ordinary lines, so we may as well assume l' is ordinary. Let s_1 and s_2 be two special lines that are not s_p. Their intersection is in π, but their intersection is D. Therefore, D ∈π. Let p_1 be the intersection of l with s_1 and let p_2 be the intersection of l' with s_2. Since p_1 ∉ l' and p_2 ∉ l, the line m of π determined by p_1 and p_2 does not contain p. Since the points p_1,p_2 are not D and are not in the same special line, m is ordinary, hence it is contained in π. Therefore, m intersects s_p in a point p_12, which cannot be p, so p and p_12 determine a line of π, which must be s_p∩π. That is, s_p∩π is a line of π. Now we present the fundamental result about subplanes. Let be a projective rectangle. If two ordinary lines in intersect in a point, then both lines are lines of a unique full projective plane in . First we state the construction that gives the projective plane. Let l_0 and l_1 be ordinary lines in with exactly one point q in common. (See Figure <ref>.) Let a_0s= l_0∩ s and a_1s= l_1∩ s, where s ranges over the set of special lines in , and pick three special lines to be called x, y, and z such that q ∈ x. Thus, q=a_0x=a_1x. (We know there are three special lines by Theorem <ref> Part (<ref>).) Let b_1s= n_1∩ s, where n_1 is the ordinary line that passes through a_0y and a_1z. Suppose that s and t denote two special lines. We denote by l_st the ordinary line passing through a_0s and a_1t with s,t x and we denote by n_st the ordinary line passing through a_0s and b_1t with s,t y. 
Let L={l_st: s,t ∈, s,t x and s t } and N={n_st: s,t ∈, s,t y and s t }. Note that n_1 = l_yz∈ L and l_1 = n_xz∈ N. We set :=(_,ℒ_,ℐ_), where ℐ_ is the incidence relation defined in and [ _ := (⋃_l∈ N l) ∪ (⋃_l∈ L l) ∪ l_0 ∪{ D },; _1 := { s∩_ : s ∈},; _2 := L ∪ N ∪{ l_0},; _ := _1 ∪_2. ] We begin with the incidence structure given by Construction <ref>. With the notation there, we prove that is a projective plane. First of all, we note that one of the defining properties of a projective plane, that there are four points in _ with no three of them collinear, is satisfied by a_0y, a_1z, q, and D. We next prove that given two lines in , they intersect. Suppose that the two given lines are in L (they are ordinary). If they intersect in a point in l_0 or in a point in l_1, there is nothing to prove. Suppose that neither of those two cases holds. So, they are two ordinary lines that intersect l_0 and l_1 in four different points. Therefore, by Axiom (A<ref>) the two given lines intersect. By a similar argument we conclude that if the two given lines are in N, then they intersect. It is clear that any two lines in _1 intersect in D and that a line in _2 intersects every line in _1. Suppose the two given lines are λ and η with λ∈ L and η∈ N. If a_0y∈λ and q∈η, then λ and η intersect both l_0 and n_1 in four distinct points. Since l_0 and n_1 intersect in a_0y, by (A<ref>) we conclude that λ and η intersect. Now suppose that a_0y∉λ. Since λ intersects both l_0 and l_1 in distinct points, and n_1 intersects l_0 and l_1 in distinct points, by (A<ref>) we know that λ intersects n_1. Then λ intersects l_0 and n_1 in distinct points (because n_1 intersects l_0 at a_0y∉λ). The fact that λ and η both intersect l_0 and n_1 in distinct points, with (A<ref>), implies that λ and η intersect in a point. Supposing q∉η, the proof is similar. Since λ meets l_0 at a_0y∉ l_1, and q = l_0 ∩ l_1 ∉η, each of λ and η intersects l_0 and l_1 in distinct points; thus, λ and η intersect in a point. This completes the proof that any two lines in _ intersect. We now prove that given two points p_0, p_1 ∈_, they are in a line in . (If they are in one line, they cannot be in two, because the lines of are ordinary lines or restrictions of special lines of , and every line in is determined by two of its points.) This proof requires cases depending on the locations of the two points. The proofs (if not trivial) depend on repeated application of Axiom (A<ref>). For economy of notation we employ a shorthand: p_34 = A6(l_1,l_2;l_3,l_4| p_12;p_13,p_23,p_14,p_24) means that each pair {l_i,l_j} intersects at p_ij for ij = 12, 13, 14, 23, 24. Axiom (A<ref>) then implies that l_3 and l_4 intersect at a point p_34, provided that l_1 and l_2 are ordinary. In this proof all four lines are always ordinary. Case 1. If both points are in a special line s, the line in is s∩_∈_1. This includes the case in which one of those points is D. Henceforth we assume the points are not in the same special line. Case 2. If both points are in l_0 or l_1, there is nothing to prove. Case 3. Suppose both points are not in x ∪ l_0 ∪ l_1. Then p_0 is in a line l_st = a_0sa_1t for some two special lines s and t, not equal, and p_1 is in a line l_uv = a_0ua_1v for some two special lines u and v, not equal (but s,t may not be distinct from u,v). Form the point p_3 = A6(l_0,l_1;l_st,l_uv| q; q_0u,a_1v,a_0s,a_1t), then the point p_4 = A6(l_st,l_uv;l_1,| p_3;a_1t,a_0s,p_1,p_0), and finally the point p_5 = A6(l_st,l_uv;l_0,| p_3;a_0u,a_1v,p_1,p_0). 
Now p_3 and p_4 are the intersections of l_0 and l_1, respectively, with . Since p_3 ≠ p_4, is a line generated by a point on l_0 ∖ q and a point on l_1 ∖ q (as p_0, p_1 ≠ q). Since that line is not a special line, it is in L. Therefore, p_0 and p_1 are collinear. Case 4. In this case p_0 ∈ l_0 but p_1 ∉ x ∪ l_0 ∪ l_1. We choose names so p_0 = a_0s and p_1 ∈ l_uv as in Case 3. Choose a_1t∈ l_0 ∖ (∪{q}) and form p_2 = A6(l_0,l_1;l_uv,l_st| q;a_0u,a_1v,a_0s, a_1b); then let p_3 = A6(l_uv,l_st;,l_1 | p_2;a_1t,a_1v,p_0,p_1). Now p_3 is the intersection of with l_1, which implies that is generated by p_0 ∈ l_0 ∖ q and p_3 ∈ l_1 ∖ q. Since is not special, it is a line in L. Case 5. In this most complicated case we assume p_0 ∈ x ∖ q and p_1 ∉ x ∪ l_0 ∪ l_1. As in the preceding cases we take p_1 ∈ l_uv. Step 1: Choose p_2 = A6(n_1,l_0;n_st,l_1 | a_0s;b_1t,a_0s,a_0u,a_1v). Step 2: p_3 = A6(l_0,l_1;n_st,l_uv| q;a_0s,p_2,a_0u,a_1v). Step 3: p_4 = A6(n_st,l_uv;l_1,| p_3;p_2,a_1v,p_0,p_1). Step 4: p_5 = A6(n_st,l_uv;l_0,| p_3;a_0s,a_0u,p_0,p_1). The result is that is generated by p_5 ∈ l_0 ∖ q and p_4 ∈ l_1 ∖ q so it is in L. Case 6. Here we assume p_0 ∈ x ∖ q and p_1 ∈ l_1 ∖ q. In this case we take p_0 ∈ n_su. We first find p_2 = A6(l_1,n_1;n_su,l_1 | a_0s;a_0s,b_1u,q,a_1z). Then we find p_3 = A6(n_su,l_1;,n_1 | p_2;p_0,p_1,b_1u,a_1z) and last p_4 = A6(l_1,n_1;l_0,| a_1t;q,a_0s,p_1,p_3). Then is generated by p_4 ∈ l_0 ∖ q and p_1 ∈ l_1 ∖ q, therefore it is in L. Case 7. Now p_0=q and p_1 ∉ x ∪ l_0 ∪ l_1. As usual we take p_1 ∈ l_uv. The first step is to define p_2 = A6(l_0,l_1;l_uv,n_1 | q;a_0u,a_1v,a_0s,a_1t), and then p_3 = A6(l_1,l_uv;,n_1 | a_0u;p_0,p_1,a_1t,p_2). Since p_3 lies on n_1 it is a point b_1w for a special line w ≠ x. Thus, is generated by p_0 = q = a_0x and p_3 = b_1w; this line is n_xw so it is in N. Case 8. The last case is where p_0=q and p_1 ∈ l_1. Both are in the line l_1. In all cases there is a line in _ that contains both p_0 and p_1, so they are collinear in . We have proved collinearity of all pairs of points in _, so is indeed a projective planes. An interpretation of Theorem <ref> is the following corollary. Given three noncollinear ordinary points in a projective rectangle , there is a unique full projective plane in that contains all three points. Given an ordinary line l and an ordinary point p not in l, there is a unique full projective plane in that contains both. For the first part, let the three points be p,q,r. No special line contains all three, so there is one, say p, that is not in a special line through the others. The lines pq and pr are ordinary lines, they are distinct by noncollinearity of the three points, and they intersect, so by Theorem <ref> there is a unique full projective plane that contains them and the three points. The second part follows by taking q,r ∈ l. In a projective rectangle, every maximal subplane is full. The line set of an incidence subplane π contains two ordinary lines l_1,l_2 and its point set contains their intersection point. It follows from Theorem <ref> that π is a subplane of the full subplane determined by l_1 and l_2. Thus, maximality and fullness are equivalent for projective subplanes of a projective rectangle. From now on, when we refer to a plane in a projective rectangle, we mean a full projective subplane. Also, when we say several lines are coplanar, we mean there is a plane π such that each of the lines that is ordinary is a line of π and for each line s that is special, s ∩π is a line of π. 
We can now characterize a nontrivial projective rectangle as a projective rectangle that contains more than one maximal projective subplane. Such projective rectangles have properties not common to all projective planes; e.g., they satisfy the dual half of Desargues's Theorem (see Theorem <ref>) and they are harmonic matroids (see <cit.>). Let be a projective rectangle. Every ordinary line in is a line of a plane in . If is nontrivial, then every ordinary line l is a line of at least three planes that contain l. Let l be an ordinary line in . From Theorem <ref> Part (<ref>) we know that there is another ordinary line l' that intersects l at exactly one point. This and Theorem <ref> imply that l is in a plane π. If is nontrivial, there is a point q not in π. Let p_1,p_2 ∈π be points in l that are not in the special line that contains q. Then the plane p_1p_2q that contains both ordinary lines p_1q and p_2q, which exists and is unique by Theorem <ref>, is a plane containing l that is different from π. To find a third plane, let p_1 ∈π_1 and p_2 ∈π_2 be ordinary points not in l. There is an ordinary line p_1p_2 that must contain a third point p_3 since m≥3 by Theorem <ref>. By Corollary <ref> there is a unique plane π_3 that contains l and p_3. If s is a special line in the projective rectangle and π is a plane in , then s ∩π is a line of π. Let p_1 and p_2 be points in distinct special lines that are not s. Then by Axiom (A<ref>) there is an ordinary line l that contains both p_1 and p_2, and by Corollary <ref> there is a plane π that contains l. In π there is another line l' that intersects l at p_1; then q=l∩ s and q'=l' ∩ s are two points in s ∩π, which determine a line in π that is contained in the unique line s of that contains q and q'. Thus, s ∩π is a line of π. Now we prove a generalization of Theorem <ref> to all lines, although we lose uniqueness of the containing plane. Let be a projective rectangle. If two lines l_1 and l_2 intersect in a point p, then they are coplanar. Suppose l_1 is a special line. There are points p_1 in l_1 ∖ l_2 ∖ D and p_2 in l_2 ∖ l_1. By Axiom (A<ref>) there is an ordinary line l_3 determined by p_1 and p_2. If l_2 is ordinary, by Theorem <ref> there is a unique plane π that contains l_2 and l_3. By Proposition <ref> the restriction of l_1 to π is a line of π, so l_1 and l_2 are coplanar. If l_2 is special, then l_3 is ordinary. By Proposition <ref> there is a plane π that contains l_3, and by Proposition <ref> both l_1∩π and l_2∩π are lines of π. Thus, l_1 and l_2 are coplanar. Next is an intersection property of lines that has a consequence for the matroid structure of a projective rectangle. Suppose three lines in a projective rectangle intersect pairwise in three different points. Then they are a coplanar triple. Equivalently, if three lines intersect pairwise (i.e., are pairwise coplanar) but are not a coplanar triple, then they all intersect in the same point. Suppose two ordinary lines l_1, l_2 intersect in a point p and lie in a common plane π, and suppose a third line l_3, possibly special, intersects l_1 and l_2 in points different from p. Choosing any points q_1 ∈ l_1 ∖ p and q_2 ∈ l_2 ∖ p determines a line of π through q_1 and q_2. By Construction <ref> and Theorem <ref>, this line is either an ordinary line of or the restriction to π of a special line of . In particular, this applies to l_3, hence l_1, l_2 and l_3 are a coplanar triple of lines of . 
In case l_1 is ordinary while l_2 and l_3 are special, by Corollary <ref> l_1 and l_2 are coplanar in a plane π and by Proposition <ref> l_3∩π is a line of π, so the three lines are coplanar. The second statement, which is the contrapositive of the first (and see Corollary <ref>), is a useful restatement. If a finite projective rectangle has order (n,n), then it is a projective plane. Because n=m, the projective plane of Corollary <ref> is the whole projective rectangle. This proposition does not apply to the infinite case; see Example <ref>. §.§ No Vamos configuration The Vamos matroid is the matroid of eight points in Figure <ref>. It is one of the smallest matroids that cannot be represented in a projective geometry; for that reason it is one of the fundamental matroid examples. However, we shall not think of it as a matroid but as an incidence structure with eight points as well as lines and planes. The lines are the solid lines in Figure <ref> and the planes are the ones composed of pairs of lines as described in the caption. (As a matroid a projective rectangle has rank 3 while the Vamos matroid has rank 4 and therefore it is trivial that it cannot be a submatroid of a projective rectangle. That is why it is important to think of the Vamos incidence structure instead of the Vamos matroid, even though they look the same in a diagram.) The Vamos incidence structure is not a substructure of any projective rectangle. Suppose a configuration of this kind exists in a projective rectangle. By Proposition <ref> the lines l_1,l_2,l_3 are concurrent in a point and the lines l_2,l_3,l_4 are also concurrent in a point. Clearly, these points are one point, so l_1 and l_3 contain a common point and hence are coplanar, contrary to the structure of the Vamos matroid. That proves the corollary. § FINITE PROJECTIVE RECTANGLES In finite projective rectangles there are many possibilities for counting elements and configurations. They are the topic of this section. §.§ Counts We extend the counts of points, lines, etc. in Section <ref> to planes and various kinds of incidence. Let be a projective rectangle of order (m,n). * The number of ordinary lines that are concurrent with each ordinary line is m(n-2). * There are m(m-1) ordinary points and (m-1)^2 ordinary lines in each plane. * The number of pairs (p,l) that consist of an ordinary point p and an ordinary line l that contains p is m(n-1)^2. * The number of planes that contain each ordinary line is (n-2)/(m-2). * The number of pairs (l,π) such that l is an ordinary line and π is a plane that contains l is (n-1)^2(n-2)/(m-2). * The number of planes in is (n-1)^2(n-2)/(m-1)^2(m-2). * For a fixed ordinary point p, the number of triples (p,l,π) such that l is an ordinary line incident with p and π is a plane that contains l is (n-1)(n-2)/(m-2). * The number of triples (p,l,π) such that p is an ordinary point, l is an ordinary line, and π is a plane that contains l is m(n-1)^2(n-2)/(m-2). * The number of pairs (p,π) such that p is an ordinary point and π is a plane that is incident with p is m(n-1)^2(n-2)/(m-1)(m-2). * The number of planes that are incident with each ordinary point is (n-1)(n-2)/(m-1)(m-2). Proof of (<ref>). Let l be an ordinary line. From Part (<ref>) there are m points on l. From Theorem <ref> Part (<ref>) we know there are n-2 ordinary lines that intersect l at each point. All those lines are distinct. Proof of (<ref>). This follows from the fact that the plane is projective of order m-1. We exclude the one special point D and the m special lines in the plane. 
Proof of (<ref>). Each of the (n-1)^2 ordinary lines (Theorem <ref> Part (<ref>)) contains m ordinary points (Part (<ref>)). Proof of (<ref>). Let l be an ordinary line. From Part (<ref>) there are m(n-2) ordinary lines l' that intersect l at exactly one point. Theorem <ref> guarantees the existence of a unique plane π that contains both l and l'. By Part (<ref>) the number of ordinary lines in π that intersect l is (m-1)^2-1 = m(m-2). Thus, the number of planes on l is the quotient, m(n-2)/m(m-2)=(n-2)/(m-2). Proof of (<ref>). The number of ordinary lines should be multiplied by the number of planes on each line. Proof of (<ref>). The number of incident line-plane pairs should be divided by the number of ordinary lines in a plane. Proof of (<ref>). The number of incident line-plane pairs should be multiplied by the number of points in an ordinary line. Proof of (<ref>). The number of triples in Part (<ref>) should be multiplied by the number of ordinary points from Part (<ref>). Proof of (<ref>). The number of triples in Part (<ref>) should be divided by the number of ordinary lines in pi that contain p, which is m-1. Proof of (<ref>). Either divide the number of triples in Part (<ref>) by m-1, the number of ordinary lines on p in π, or divide the number in Part (<ref>) by m(n-1), the whole number of ordinary lines on p. Two lines are skew if they have no point in common. A skew class of lines is a maximal set of lines, in which every pair is skew. If a line has no skew mate, it is a skew class of one. A line may belong to more than one skew class. Two lines that are skew to the same line may intersect. If is a finite projective rectangle of order (m,n), then the following hold in : * Given an ordinary point p and given any ordinary line l that does not contain p, there are exactly n-m ordinary lines containing p that are skew to l. * If l is an ordinary line, then there are (n-2)(n-m) lines that are skew to l. * If l_1 is skew to l, there are m(n-m) lines skew to l that are concurrent with l_1. Proof of Part (<ref>). From Theorem <ref> Part (<ref>) we know that there are exactly n lines passing through p (including a special line). From Theorem <ref> Part (<ref>) we also know that there are exactly m lines passing through p that intersect l (including a special line). Therefore, there are exactly (n-1)-(m-1) ordinary lines passing through p and skew to l. Part (<ref>) follows by subtracting from the number of ordinary lines, (n-1)^2 (Theorem <ref> Part (<ref>)), the number that are concurrent with l, which is m(n-2) (Theorem <ref> Part (<ref>)), and the number that are l, which is 1. Part (<ref>) follows from Part (<ref>). Suppose that is a nontrivial projective rectangle of order (m,n). Let l be an ordinary line l∈. Tthere is a skew line class containing l that has at least m lines in it. I.e., there are m-1 ordinary lines skew to l and skew to one other. Let M = ⌈ (n-m)/(m-1) ⌉ - m, the largest integer such that (n-1)/(m-1)>m+M. Then there is a skew class containing l that has at least m+M lines in it. I.e., there are m+M-1 ordinary lines skew to l and skew to one other. Let l be an ordinary line and let l_1 l be an ordinary line passing though q∈ l. Let p q be a second point in l. By Theorem <ref> Part (<ref>), since n>m there is an ordinary line l_2 passing through p skew to l_1. Let a_i and b_i' be the points in l_1 and l_2 for i=1,2, …, m, labeled so that the line a_ib_i' is special. 
Lines a_ib_i and a_jb_j for i,j∈{1,2, …, m} with i ≠ j, b_i ≠ b_j, b_i ≠ b_i', and b_j ≠ b_j' are ordinary and are skew to each other, because if they intersect, then by Axiom (A<ref>), l_1 intersects l_2, which is a contradiction. Note that it is easy to choose all b_i ≠ b_i' since m>1. Also, we can suppose that l is the line a_1b_1. Now we suppose that (n-1)/(m-1)-m>0 and M is the largest integer such that (n-1)/(m-1)>m+M. (Thus, n>m+M.) Let s be a special line with points s_1, s_2, …, s_m, …, s_n-1,D. Suppose that s∩ a_ib_i=s_i for i=1, …, m. We prove by induction that there are lines h_1, h_2, …, h_M, skew to one another and to all lines of the form a_ib_i. Assume we have k lines h_1, h_2, …, h_k that are skew to one another and to all lines of the form a_ib_i for some k∈{0,1, …, M-1}, where s_m+t∈ h_t for t=1, 2, …, k. First note that neither h_t nor a_ib_i contains the point s_m+k+1 and that (m-1)(m+k) is the number of points in (⋃_t=1^k h_t∪⋃_i=1^m a_ib_i)∖ s. Thus, the maximum number of ordinary lines passing through s_m+k+1 intersecting a line of the form a_ib_i and the lines h_1, …, h_k is (m-1)(m+k). Since s_m+k+1 is an ordinary point, by Theorem <ref> Part (<ref>) we know there are n-1 ordinary lines passing through this point. Since (n-1)>(m-1)(m+k) there must be at least one ordinary line h_k+1 passing through s_m+k+1 that is skew to all lines of the form a_ib_i and the lines h_1, …, h_k. This proves the induction, completing the proof. In the notation of Theorem <ref>, M = (τ-1)m - 2τ. This is negative or zero if τ = 1, or if τ=2 and m≤4, and positive otherwise, so in the “otherwise” case the second bound on the maximum size of the skew class is the better one. §.§ Constraints on the parameters We have found some integers in Theorem <ref>, namely, ρ=(n-2)/(m-2), (n-1)(n-2)/(m-1)(m-2), and (n-1)^2(n-2)/(m-1)^2(m-2). These integral fractions imply relationships between m and n. Theorem <ref> is a constraint on n, given a value of m. By Section <ref> m-1 must be the order of a projective plane; that is the only constraint we know on m. Let p,p' be two ordinary points in a special line s. Let s' be any other special line. The planes π that contain both p and p' partition s'∖ D into sets π∩(s'∖ D) of size m-1, and each such set is in a unique plane that contains p and p', so there are (n-1)/(m-1) such planes. For an ordinary point q∈ s' let π(q) denote the plane that contains p,p',q. This plane is unique, by Theorem <ref>, because it is determined by the intersecting ordinary lines pq and p'q. Choose another ordinary point q' ∈ s' ∖π(q) and suppose π(q) and π(q') contain a common point r. Then both planes contain the intersecting ordinary lines pr and p'r, so they must be the same plane. It follows that the distinct planes π(q) for q ∈ s' ∖ D partition the points of s' ∖ D. The intersection π(q) ∩ s' is a line of π(q) that contains D, so the number of ordinary points in it is m-1. The number of sets into which s' ∖ D is partitioned is therefore equal to (n-1)/(m-1), and this is the number of planes that contain both p and p'. For a projective rectangle of order (m,n), there is an integer τ≥ 0 such that n = m + τ (m-1)(m-2). If is nontrivial, then τ≥ 1. We simplify the notation by writing ν=n-1 and μ=m-1. Integrality of (n-2)/(m-2) implies that there is an integer ρ≥ 1 such that ν = 1 + ρ(μ-1). Proposition <ref> implies that ν = σμ for some positive integer σ. Therefore, ν = ρ(μ-1)+1 = σμ. It follows that (ρ-σ)μ = ρ-1, so ρ-1 is a multiple of μ, say ρ = τμ+1 where τ≥0. 
Then substituting for ρ gives (τμ+1-σ)μ = τμ, and upon division by μ we find that σ = τ(μ-1) + 1. This implies ν = τμ(μ-1) + μ, so n-m = ν-μ = τμ(μ-1). We infer the expressions (n-2)/(m-2) = τ(m-1)+1, (n-1)/(m-1) = τ(m-2)+1, (n-1)(n-2)/(m-1)(m-2) = [τ(m-2)+1] [τ(m-1)+1], and (n-1)^2(n-2)/(m-1)^2(m-2) = [τ(m-2)+1]^2 [τ(m-1)+1]. If the projective rectangle is nontrivial, n ≥ (m-1)^2 + 1 and ρ≥ m. If the projective rectangle has m=3, then n= 3 + 2τ, where τ≥0. The value τ=0 gives the Fano plane and τ=1 gives n=5 as with the L_2^2 projective rectangle of Example <ref>. However, not all those values of τ admit a projective rectangle with m=3; there are examples only for n = 2^k+1, that is, for τ = 2^{k-1}-1 (see Section <ref>). Our numerical constraints need strengthening. § AXIAL AND CENTRAL DESARGUES'S THEOREMS Consider two triangles in a projective rectangle, A = a_1a_2a_3 and B = b_1b_2b_3. (A triangle consists of three points, not all collinear, and the three lines joining the points in pairs.) There are three lines l_i = a_ib_i; if they concur in a point p we say the triangles are centrally perspective from center p. If each of the three pairs of lines a_ia_j and b_ib_j meets in a point p_ij and the points p_12, p_13, p_23 are collinear in a line l, we say A and B are axially perspective from axis l. The Central Desargues's Theorem says that, if two triangles are centrally perspective, then they are axially perspective. The converse is the Axial Desargues's Theorem. The two together are generally known as Desargues's Theorem. In a projective plane the points p_ij always exist. However, neither half of Desargues's Theorem is valid in every projective plane; in fact the validity of Desargues's Theorem is equivalent to the existence of plane coordinates in a division ring. Thus, for any plane, knowing whether Desargues's Theorem holds true is a fundamental question. Every projective plane is a projective rectangle, so we cannot say that Desargues's Theorem holds true in every projective rectangle; but eliminating projective planes from consideration changes the situation. We first establish that each triangle in the axial configuration is necessarily coplanar. If A= a_1a_2a_3 is a triangle and l is a line that intersects the three lines a_ia_j in three points p_ij, then all six points and the four lines are contained in a unique plane. There are four lines in the configuration of six points: l and the lines l_ij = a_ia_j. At most two can be special, so two are ordinary, say l' and l''. Any two of the four lines intersect, so l' and l'' intersect; this implies they are in a unique plane π (by Theorem <ref>). The other two lines of the four are each determined by one point in l and one in l', so each is a line of π, or if special the intersection with π is a line of π. Let be a nontrivial projective rectangle. Every plane in satisfies the Axial Desargues's Theorem when the axis is an ordinary line. We begin by assuming triangles A and B are in planes π_A and π_B, respectively, and are axially perspective from an ordinary line l with intersection points p_ij, as in Figure <ref>. The two planes may be the same or different; if they are different, l is their intersection. We may assume a_i ≠ b_i for i=1,2,3 because otherwise the conclusion is trivial. If a_1b_1, a_2b_2, a_3b_3 are not all coplanar, they are coplanar in pairs, since a_i,b_i,a_j,b_j ∈p_ija_ia_j. Hence, by Proposition <ref> there is a point q at which all three lines are concurrent; therefore, q is a center of perspectivity for A and B. 
Thus, we assume henceforth that a_1b_1, a_2b_2, a_3b_3 are all in one plane, so that π_A = π_B. There is another plane π_ on l because is nontrivial and l is ordinary (by Corollary <ref>), and in this plane we can find a triangle = _1_2_3 that is axially perspective from l with the same intersection points p_ij = l ∩_i_j. The lines b_i_i and b_j_j are coplanar in a plane p_ijb_i_j = b_i_ib_j_j. Therefore, they intersect in a point s_ij. The pairwise coplanar lines b_1_1, b_2_2, and b_3_3 are not all coplanar because _1_2_3 = π_∌b_1,b_2,b_3. By Proposition <ref>, those three lines have a common point s = s_12 = s_13 = s_23. See Figure <ref>. Similarly, there is a point r = a_1_1∩a_2_2∩a_3_3. We prove that r ≠ s and r,s ∉π_A. If r=s, then a_i_i = ra_i_i = r_i and b_i_i = sb_i_i = r_i, so ra_i_i and rb_i_i are the same line; that is, a_i,b_i,_i are collinear; but this is impossible. Similarly, a_i,b_i,_i are collinear, which is impossible, if r or s ∈π_A. Each plane a_ib_i_i contains r and s so the lines a_ib_i and rs are coplanar. We know that r,s ∉a_ib_i⊂π_A. Hence, we have three triples a_ib_i, a_jb_j, rs of lines that are coplanar in pairs but not all coplanar. By Proposition <ref> there is a point q_ij at which each triple is concurrent. Then taking i=1 and j=2,3, we have q_12 = rs∩a_1b_1 = q_13, so q_12=q_13 is a point on all three lines a_1b_1, a_2b_2, a_3b_3 and a center of perspectivity for A and B. That completes the proof. The case in which A and B are not coplanar is reminiscent of the higher-dimensional Desargues's Theorem for projective geometries. That suggests a central Desargues's Theorem for noncoplanar triangles. Let be a nontrivial projective rectangle. Then satisfies the Central Desargues's Theorem for triangles that are not coplanar. We begin by assuming triangles A and B are in two different planes, π_A and π_B respectively, and are centrally perspective from a point p. We show that we may assume a_i ≠ b_i for i=1,2,3. Since the triangles are not coplanar, they cannot be equal; in particular, say, a_3 ≠ b_3. The conclusion is trivial if a_1=b_1 and a_2=b_2; the axis is then a_1a_2=b_1b_2. Suppose henceforth that a_2 ≠ b_2 and a_3 ≠ b_3. Assume first that a_1 ≠ b_1. Let l_i := a_ib_i (which exists and contains p by central perspectivity), p_ij := l_i ∩ l_j (which exists because a_i,b_i,a_j,b_j,p are coplanar and any distinct three of them, excluding D if one of them is not ordinary, determine the plane), and λ_ij := p_ikp_jk where {i,j,k} = {1,2,3}. The lines λ_ij exist if a_1 ≠ b_1 because if p_ij=p_ik (i,j,k all different), then this point is the intersection of a_ia_j and a_ia_k but that intersection is a_i, and it is also the intersection of b_ib_j and b_ib_k but that intersection is b_i, from which it follows that a_i=b_i, contrary to our assumption. Now we observe that all points p_ij∈π_A ∩π_B, so all lines λ_i ⊆π_A ∩π_B. But as we assumed π_A ≠π_B, their intersection cannot consist of more than one line. It follows that λ_12 = λ_13 = λ_23 and this is the required axis of perspectivity. If a_1=b_1, in the previous discussion the line l_1 degenerates to a point and the rest of the proof is similar but simpler, with a_1p_23 as the axis of perspectivity. We note that any of the lines in the proof might be special, but because we only argue within planes, the proof is not affected. Theorem <ref> reinforces our belief that a nontrivial projective rectangle should be regarded as, in a strange way, nonplanar. Unfortunately, we were not able to make this intuition precise. 
§ THE SUBPLANE CONSTRUCTION OF PROJECTIVE RECTANGLES Given a projective plane π and a subplane π', we wish to get a projective rectangle by taking a point D, all the lines joining it to points of π', all the points on those lines, and all the restrictions to our point set of the lines in π that are generated by our points (i.e., contain at least two of our points). D must be taken in the subplane. Suppose D is not in π'. Take a point P ∈π' and the line PD. This is supposed to be a special line so it must be a line of any plane in the projective rectangle; the proof is that every line of a projective rectangle, thus every line of π', intersects every special line (Axiom (A<ref>)), so L ∩π' cannot be one point. Therefore L ∩π' must be a line of π'. Now consider a second point P' ∈π' ∖ L. Then L and L'=P'D are both extensions of lines of π' so they intersect in π', but they intersect in D; this means D ∈π'. We could simplify the construction: Take a subplane π' and one line l in it, and any point D in π' ∖ l. For the projective rectangle, take all lines that join D to l and for ' take all points of π on those lines. This gives precisely the subplane construction, because already it gives all the points of π' and then only the points generated from D and π' in that construction. A plane is Pappian if it is coordinatized by a (commutative) field. The subplane construction in a Pappian projective plane produces a projective rectangle. Let our point set be ' and the incidence structure induced on it by π be '. There are two kinds of line in ': a long line is a line of π and a short line l is the restriction to ' of a line L of π that is not contained in ', so if l is any short line, L denotes its extension into π. If ' turns out to be a projective rectangle, the long lines will be the special lines of ' and the short lines will be the ordinary lines. Axiom (A<ref>): By definition, since we took every line generated by two points of '. Axiom (A<ref>): Four such points exist in the subplane π'. Axiom (A<ref>): By definition. Axiom (A<ref>): Every point of ' is in a long line, every short line of ' is a restriction of a line L of π, and any two lines of π intersect in a point P. Thus, for each short line l of ', its extension L intersects each long line s in a point which, by definition of ', is in the long line s. Axiom (A<ref>): Follows from (A<ref>) because there are at least 3 special lines. Axiom (A<ref>): Let the other two lines be l_1' and l_2'. If either of them is long, the conclusion follows from Axiom (A<ref>). Therefore, assume l_1' and l_2' are short lines. If two or more of them are in π', then all four are and the property follows from that of a projective plane. This leaves two cases: One of the lines is in π', or none is. We give an analytic proof, using coordinates, when π=π() for a field , so we can take π' to be a subplane generated by a subfield '. We write P := l_1 ∩ l_2, Q_ij := l_i ∩ l_j', R := L_1' ∩ L_2'. We need to prove that R ∈'. We give an analytic proof. Write I_m for the point on the ideal line L_∞ that is on all lines of slope m. We choose D to be the point I_∞ on all vertical lines of π; thus, the point set of our supposed projective rectangle is ' = {[z:x:y] : z=0, or z=1 and x ∈'}. We consider two cases, depending on whether or not one of the short lines is within π'=π('). Case 1. One of the short lines is in π', say l_1 ⊆π'. 
Since we can assign noncollinear coordinates arbitrarily to any three noncollinear points in π', we may choose the coordinate system so that l_1 has the equation y=0, P = (0,0), l_2 has the equation y=m_2x, and l_2' has the equation y = b_2' (where b_2' ∉' since l_2' ⊈π'). Then Q_12 = I_0. The equation of l_1' has the form y = m_1'x+b_1'. Note that m_2, m_1' ∉' since l_2, l_1' are not in π'. From this information we can find the coordinates of the other intersection points. They are Q_11 = (-b_1'/m_1', 0 ), Q_21 = (b_1'/m_2-m_1', y_21), Q_22 = (b_2'/m_2, b_2'), R = (b_2'-b_1'/m_1' , b_2'). Because Q_11, Q_21, Q_22∈', their x-coordinates are in '. None equals 0. Therefore, m_1'/b_1', m_2-m_1'/b_1', b_2'/m_2∈', so also m_2/b_1'∈'. The x-coordinate of R is b_2'-b_1'/m_1' = b_2'/m_1' - b_1'/m_1' = b_2'/m_2m_2/b_1'b_1'/m_1' - b_1'/m_1'∈', proving that R ∈'. Case 2. None of the four short lines is in π'. We choose coordinates so that P ∈ L_∞; that is, P = I_m for some m ∈, so l_1 has equation y=mx+b_1 and l_2 has equation y=mx+b_2 with b_1,b_2 ∈ and b_1 ≠ b_2. The other lines l_j' have equations y = m_j'x+b_j', where m_j' ≠ m. The special case m_1'=m_2' is not excluded, but then R ∈ L_∞⊆', so we may assume m_1' ≠ m_2'. The special case b_1' = b_2' is also not excluded; then R is in the line x=0; this case will be dealt with in the course of the proof. We can exclude m_1'=m and m_2'=m since then P ∈ l_1' or l_2', respectively, which violates the assumption of Axiom (A<ref>). The intersection points (other than P), which cannot be in L_∞, have coordinates Q_11 = (b_1-b_1'/m_1'-m, y_11), Q_12 = (b_1-b_2'/m_2'-m, y_12), Q_21 = (b_2-b_1'/m_1'-m, y_21), Q_22 = (b_2-b_2'/m_2'-m, y_22), R = (b_1'-b_2'/m_2'-m_1', y_R). The x-coordinates of the Q_ij are in '; we want to show that of R is also in '. Write ρ_ij for the x-coordinate of Q_ij. That is, b_i-b_j' = ρ_ij(m_j'-m). These are four equations E_ij. By combining E_11 with E_21 and E_12 with E_22 we infer that b_2-b_1 = (ρ_21-ρ_11)(m_1'-m) = (ρ_22-ρ_12)(m_2'-m). Thus, m_1'-m/m_2'-m = ρ_22-ρ_12/ρ_21-ρ_11 =: α∈'. (This last step would be forbidden if ρ_21=ρ_11, but that implies l_1' contains D, contrary to assumption.) Now combining E_11 with E_12 and E_21 with E_22 we infer that b_2'-b_1' = ρ_11(m_1'-m) - ρ_12(m_2'-m) = (ρ_12-αρ_11)(m_2'-m) with α∈' and similarly b_2'-b_1' = (ρ_22-βρ_21)(m_1'-m) with β∈'. Rewriting, m_1'-m = b_2'-b_1'/ρ_22-βρ_21, m_2'-m = b_2'-b_1'/ρ_12-αρ_11, which combine to give m_2'-m_1' = (b_2'-b_1') ( 1/ρ_12-αρ_11 - 1/ρ_22-βρ_21), or in a different form, m_2'-m_1'/b_2'-b_1'∈'. This is the reciprocal of the x-coordinate of R; consequently, R ∈'. The one caveat is that, if b_1'=b_2', we cannot proceed from Equation (<ref>); but then that equation implies m_1'=m_2', which was excluded at the beginning of the proof. So this difficulty will not occur. That concludes the proof of Theorem <ref>. If π is Pappian and not prime, it has a prime subplane so there are proper subplanes to carry out this construction. All Desarguesian planes and many others have proper subplanes (e.g., planes over near fields; cf. the book of Hughes and Piper <cit.>). However, we do not know whether the subplane construction works in a non-Pappian plane. We did not try to construct an algebraic proof for Desarguesian planes; we chose to study only Pappian planes to keep the algebra simple. We fear that generalization may require finding a synthetic proof. There are nontrivial projective rectangles in which n=m, but n,m must be infinite. 
Suppose is a field that has a proper subfield ' of the same infinite cardinality. The subplane construction generates a nontrivial projective rectangle with n=|| and m = |'| = n, within which π(') is one of the (full) planes. This contrasts with the case of finite m=n in Proposition <ref>. § NARROW RECTANGLES The smallest allowed value of m is 3. We call a projective rectangle narrow if it has m=3. The matroid L_2^k of Example <ref> is defined for any group 𝔊 (except the trivial group), simply replacing _2^k by 𝔊. In fact, all we need for 𝔊 is a (nontrivial) quasigroup; this matroid is the complete lift matroid L_0(𝔊K_3) from <cit.> or <cit.>). We define L_0(𝔊K_3) in a way compatible with Example <ref>. The ground set is E:= A∪ B∪ C where A:= { a_g | g ∈𝔊}∪{D }, B:= { b_g | g ∈𝔊}∪{D } and C:= { c_g | g ∈𝔊}∪{D }. The lines (rank-2 flats of the matroid) are A, B, and C and the sets {a_g, b_g h, c_h } with g, h ∈𝔊. If this is a projective rectangle, A, B, and C are the special lines and the other lines are the ordinary lines. But L_0(𝔊K_3) is not always a projective rectangle. Every narrow projective rectangle has the form L_0(𝔊K_3) where 𝔊 is a nontrivial group with exponent 2, and conversely. If is finite the group is ℤ_2^k with k≥1 and its parameters are (m,n)=(3,2^k+1) with k≥1. This proposition includes infinite groups. First we note that every narrow projective rectangle is an L_0(𝔊K_3) where 𝔊 is a quasigroup of order greater than 1. There are three special lines, which we call A, B, and C. We label the elements of each line, except D, by a set G of labels and we define an operation on G by gh=k such that a_gc_hb_k is an ordinary line of . It is clear that this is well defined and that any two of g,h,k determine the third, so G is a quasigroup. Then is the same as L_0(𝔊K_3) except that in the projective rectangle we ignore the trivial lines of the matroid. Now let us assume that a matroid L_0(𝔊K_3) is a projective rectangle. We prove that 𝔊 satisfies the following fundamental property: gh=ef gf=eh. Consider the lines l_1={a_g,b_gh,c_h} and l_2={a_e,b_ef,c_f in Axiom (A<ref>), and two other lines, l={a_g,b_gf,c_f} and l'={a_e,b_eh,c_h}. According to Axiom (A<ref>) the lines l and l' should have a common point, so b_gf=b_eh, which means gf=eh. Any quasigroup is isotopic to a loop (a quasigroup with identity element, 1), so we may assume 𝔊 is a loop. Suppose h=e=1 in Equation (<ref>). Then g=f gf=1; in other words, gg=1 for every element of 𝔊. Suppose g=h and e=f. Then 1=1 ge=eg; that is, 𝔊 is commutative. A property that characterizes a quasigroup that is isotopic to a group is the Quadrangle Criterion <cit.>, which is .[ a_1c_1=a_2c_2; a_1d_1=a_2d_2; b_1c_1=b_2c_2 ]} b_1d_1=b_2d_2. We prove the Quadrangle Criterion for 𝔊 by means of Equation (<ref>). a_1c_1=a_2c_2 a_1a_2=c_1c_2, a_1d_1=a_2d_2 a_1a_2=d_1d_2, b_1c_1=b_2c_2 b_1b_2=c_1c_2. The first two lines imply that c_1c_2=d_1d_2 and combined with the third line we deduce that b_1b_2=d_1d_2, proving the Quadrangle Criterion. Hence, 𝔊 is isotopic to a group. By isotopy we may assume 𝔊 is a group, and we have seen that it is abelian and has exponent 2. If 𝔊 is finite, it is _2^k for some positive integer k as in Example <ref>. These necessary properties of 𝔊 are sufficient for L_0(𝔊K_3) to be a projective rectangle, because exponent 2 implies Axiom (A<ref>), as is easy to verify. The geometry of a narrow projective rectangle is determined by the isotopy type of its quasigroup. 
Thus, the finite such rectangles are obtained from a finite Pappian projective plane of 2-power order by the subplane construction of Section <ref> using a Fano subplane.

§ ORTHOGONAL ARRAYS FROM PROJECTIVE RECTANGLES

A transversal design is a partition of a set of m(n-1) points into m special sets of size n-1, together with a family of m-subsets of the point set such that each such m-set intersects each special set exactly once and each pair of points not contained in a special set lies in exactly one m-set. A projective rectangle with D deleted is exactly a transversal design with the extra partial Pasch property Axiom (A<ref>). A dual concept to transversal designs is that of orthogonal arrays; the corresponding dual to projective rectangles is orthogonal arrays with a dual property to (A<ref>). We explore that dual concept in this section.[We thank Douglas Stinson for drawing our attention to transversal designs.]

An orthogonal array (OA) is a generalization of orthogonal latin squares. We adopt the notation for orthogonal arrays used in <cit.>. An N× k array A with entries from S (a set of size s) is said to be an orthogonal array, OA_λ(N,k,s,t), with s symbols, strength 0≤ t ≤ k, and index λ if every N× t subarray of A contains each t-tuple based on S exactly λ times as a row. We write a(r,c) for the label that appears in row r and column c.

§.§ An orthogonal array from points and lines

We represent a projective rectangle as an orthogonal array of points and lines. In the projective rectangle with D deleted we have m special lines partitioning all the points, and (n-1)^2 ordinary lines. By Theorem <ref>, every ordinary line intersects every special line exactly once and every pair of points in different special lines lies in exactly one ordinary line. Each ordinary line will give a row of the orthogonal array and each special line will give a column. We label the points in each special line by the numbers 1,…,n-1 and we write a(p) for the label of the point p. The entries in a row are the labels of the points that appear in that ordinary line, arranged in the column of the special line that contains the point. Thus, each pair of labels appears once in each pair of columns. That is a 2-(n-1,m,1) orthogonal array in standard notation. In the notation used in <cit.>, it is an OA_1((n-1)^2, m, n-1,2).

We formulate a special property for an orthogonal array of type OA_1((n-1)^2, m, n-1,2).

(OA6) If four rows in the orthogonal array appear like the first five columns c_ij in this table,

          c_12   c_13   c_24   c_14   c_23   c_34
    r_1   a_12   a_13          a_14
    r_2   a_12          a_24          a_23
    r_3          a_13                 a_23   a_34
    r_4                 a_24   a_14          a_34

where it is possible that c_13=c_24 or c_14=c_23, then there is a sixth column that appears like c_34. (The empty cells are arbitrary.)

The property (OA6) does not follow from the definition of an orthogonal array. We are not aware that it has been considered in the theory of orthogonal arrays or dually in transversal designs. Its contrary, that the sixth column of (OA6) never appears, arises (in the language of transversal designs) as the “anti-Pasch configuration” in <cit.> (whose “Pasch configuration” is slightly stricter than ours).[We are very grateful to Charles Colbourn for hunting in the literature and communicating these facts.]

Let n≥ m ≥ 3.
* A projective rectangle of order (m,n) gives rise to an orthogonal array OA_1((n-1)^2, m, n-1,2) with property (OA6).
* An orthogonal array OA_1((n-1)^2, m, n-1,2) gives rise to a projective rectangle of order (m,n) if, and only if, it satisfies the additional property (OA6).

Proof of Part (i).
We have shown that a projective rectangle gives rise to an orthogonal array with the stated parameters. Conversely, suppose we have an OA_1((n-1)^2, m, n-1,2). Let C be the set of m columns, let R be the set of rows, let L be the set of n-1 labels in the array, and write a(r,c) for the entry in row r, column c. We form an incidence structure whose point set is (C× L) ∪{D}. The lines of this structure are special lines, of the form s_c = {(c,a) : a ∈ L }∪{D}, for each c∈ C, and ordinary lines, of the form l_r = {(c,a) : c ∈ C and a= a(r,c) }, for each r∈ R. We prove this incidence structure satisfies Axioms (A<ref>)–(A<ref>) of a projective rectangle. We assumed n-1≥ m-1≥2 so in the orthogonal array there are at least two distinct labels, which we call a_1 and a_2, and at least 3 columns, of which three are c_1,c_2,c_3. There are also at least 2^3 rows.

Proof of Axiom (A<ref>). We consider two points p_1=(c_1,a_1) and p_2=(c_2,a_2) where a_1=a(r_1,c_1) and a_2=a(r_2,c_2) for some rows r_1, r_2. The points belong to the same special line if and only if c_1=c_2. The special line is s_c_1. Otherwise, there is exactly one row r where the entry in column c_1 is a_1 and the entry in column c_2 is a_2. Then p_1 and p_2 belong to the ordinary line l_r.

Proof of Axiom (A<ref>). Among the three pairs a(r_1,c_j), a(r_2,c_j) for j=1,2,3, only one can be the same label, a(r_1,c_j) = a(r_2,c_j), because each ordered pair of labels appears only once in the same two columns. Say a(r_1,c_1) ≠ a(r_2,c_1) and a(r_1,c_2) ≠ a(r_2,c_2). Then (c_1,a(r_1,c_1)), (c_1,a(r_2,c_1)), (c_2,a(r_1,c_2)), (c_2,a(r_2,c_2)) are four points, no three collinear.

Proof of Axiom (A<ref>). The special line s_c contains at least the three points D, (c,a_1), (c,a_2). The ordinary line l_r contains the points (c_1,a(r,c_1)), (c_2,a(r,c_2)), (c_3,a(r,c_3)).

Proof of Axiom (A<ref>). This follows by the definition of the incidence structure.

Proof of Axiom (A<ref>). Two special lines intersect only in D. A special line s_c and an ordinary line l_r intersect only in the point (c,a(r,c)).

Proof of Part (ii). We assume an orthogonal array is constructed from a projective rectangle. Property (OA6) is the interpretation of Axiom (A<ref>) for an OA_1((n-1)^2, m, n-1,2). In Axiom (A<ref>) let l_3 and l_4 be the two lines besides l_1 and l_2. The assumption in the axiom is that points p_ij = l_i ∩ l_j exist for (i,j) = (1,2),(1,3),(2,4),(1,4),(2,3). Let s_ij be the special line that contains p_ij; we note that the special lines are distinct except that s_13 may be the same as s_24 and s_14 may be the same as s_23. In the orthogonal array derived from the projective rectangle, the row of line l_i is r_i, the column of line s_ij is c_ij, and the label of p_ij is a(r_i,c_ij)=a(r_j,c_ij). Therefore, the array looks as in Property (OA6), except for the last column. The conclusion of Axiom (A<ref>) is that there is a point p_34 that is incident with both lines l_3 and l_4. That translates to the existence of a final column as in (OA6) with a_34 = a(p_34). Hence, Property (OA6) is satisfied by the array derived from the projective rectangle.

Conversely, we prove Axiom (A<ref>) from Property (OA6). Let r_1, r_2 be the rows of the array that correspond to the lines l_1, l_2 in this axiom and let l_3,l_4 be the two other lines with corresponding rows r_3,r_4. The hypotheses of intersection imply that the diagram in Property (OA6) is satisfied, possibly except for the last column. By the assumption of Property (OA6), the final column does exist.
This implies that l_3∩ l_4 is the point p_34 in the special line s_34 that corresponds to column c_34 and has the label a(p_34) = a_34. Therefore, the conclusion of Axiom (A<ref>) is satisfied.

§.§ An orthogonal array from points and planes

Ryser gives a nice construction of an orthogonal array from a projective plane <cit.>. We extend Ryser's ideas to construct an orthogonal array from points and planes of a projective rectangle by partitioning the ordinary points outside a given ordinary line by means of the separate planes that contain that line. The proof is based on the proof that Ryser gives for projective planes, adapted to the existence of multiple planes.

Let l be an ordinary line in a finite projective rectangle. The family of sets π∖ (l∪ D) for all planes π that contain l is a partition of the points outside l ∪ D into (n-2)/(m-2) parts of m(m-2) points each.

We observe that every plane containing l also contains the special point D. If p∉l ∪ D, then by Corollary <ref> there is a unique plane on l that contains p; thus, the planes on l partition the points outside l ∪ D. The number of such planes is given by Theorem <ref> Part (<ref>). The number of parts of the resulting partition equals the number of planes that contain the line l.

Suppose that (m,n) is the order of the projective rectangle. Let l be an ordinary line and let π_1, π_2, …, π_w be all the planes that contain l, where w=(n-2)/(m-2). Then the projective rectangle gives rise to an orthogonal array of the form OA_w(w(m-1)^2, m, m-1,2).

Let p_1, p_2, …, p_m be the points of l. We label the points in π_i∖ l by q_1^i, q_2^i, …, q_k^i where k=(m-1)^2 (D is one of these points) and label the lines on p_r in π_i∖ l with 1, 2, …, m-1 for each r=1,2, …, m. We write a_st^i to record the label of the line q_s^ip_t∈π_i. We claim that the matrix A_i=[a_st^i]_s,t is an orthogonal array of the form OA_1((m-1)^2,m,m-1,2). We prove this by contradiction. Suppose that there are two ordered pairs in the rows of A_i that are equal; that is, (a_s_1t_1^i,a_s_1t_2^i) =(a_s_2t_1^i,a_s_2t_2^i) with s_1 ≠ s_2. Therefore, a_s_1t_1^i=a_s_2t_1^i and a_s_1t_2^i =a_s_2t_2^i. The equality of these labels implies that the points q_s_1^i, q_s_2^i, and p_t_1 are collinear and that q_s_1^i, q_s_2^i, and p_t_2 are also collinear. Thus, each p_t_j is the unique point of l on the line q_s_1^i q_s_2^i. Therefore, p_t_1 = p_t_2, but that is impossible because t_1 ≠ t_2. Now let B=[ A_1; A_2; ⋮; A_w ], the arrays A_i stacked one above another. This matrix is an orthogonal array of the form OA_λ(w(m-1)^2,m,m-1,2) where λ = ∑_i=1^w 1 = w. That completes the proof.

We give an example for Theorem <ref> using the projective rectangle L_2^2 depicted in Figure <ref>. For the sake of simplicity we pick the line l={a_1, b_1,c_1}. We recall that for an ordinary line in L_2^2, there are exactly λ=3 planes having that line in common. Figure <ref> shows the three planes embedded in L_2^2 with l as common line. For the first plane, let's say π_1, we distinguish the points a_1, a_g, b_1, b_g, c_1, c_g and D_1:=D. For a fixed point in l there are two lines in π_1∖ l passing through the fixed point; from the set {1,2} we assign labels to these lines. For the lines {a_1,a_g,D_1} and {a_1,b_g,c_g}, which intersect l at a_1, we assign 1 and 2 to them, respectively. We arbitrarily assign 1 and 2 to {b_1,b_g,D_1} and {a_g,b_1,c_g}, respectively, and also to {a_g,b_g,c_1} and {c_g,c_1,D_1}. With these labels we construct the first four rows of the rectangular array in Table <ref>.
The columns of the array are labeled on top with the points in the line l and the rows are labeled on the left with the points in each plane that are not in l. In this case the first four rows are labeled with the points in π_1∖ l. The entries of the rectangular array are the labels of the lines passing through the point in the column label and the point in the row label. For instance, the first entry of the first row in Table <ref> is 1, because the line passing through a_1 and a_g has label 1. The first entry of the fourth row is 1, because the line passing through a_1 and D has label 1. The second plane in Figure <ref>, π_2, has the points a_1, a_h, b_1, b_h, c_1, c_h and D_2:=D. As in π_1, we assign arbitrary labels from {1,2}. We choose 1 to be the label of {a_1,b_h,c_h}, {a_h,b_1,c_h}, and {c_1,c_h,D_2} and 2 as the label of {a_1,a_h,D_2}, {b_1,b_h,D_2}, and {a_h,b_h,c_1}. For the third plane in Figure <ref>, π_3 with points a_1, a_gh, b_1, b_gh, c_1, c_gh and D_3:=D, we also assign arbitrary labels from {1,2}. So, for example, 1 will be the label of {a_1,a_gh,D_3}, {a_gh,b_1,c_gh}, and {a_gh,b_gh,c_1} and 2 will be the label of {a_1,b_gh,c_gh}, {b_1,b_gh,D_3}, and {c_1,c_gh,D_3}. These give the orthogonal array OA_3(12,3,2,2). This is a 12 × 3 array filled with 2 symbols, such that in any 2 columns there are 4 different ordered pairs, each repeated λ=3 times.

§ THE DUAL INCIDENCE STRUCTURE

The dual structure is obtained by interchanging the roles of points and lines. It is interesting in its own right, as it connects projective rectangles with incidence geometry in a different way. The dual is essentially a net with a complete quadrangle property. Being a dual projective rectangle, it contains all the dual projective planes of the planes of the original projective rectangle.

A net is an incidence structure (𝒫,ℒ,ℐ) which consists of a set 𝒫 of points and a set of parallel classes ℒ_i (i ∈ an index set) of lines, such that each line is a set of points, every point belongs to exactly one line of each parallel class, and any two lines of different parallel classes have exactly one point in common. The theory of nets is extensive. It is easy to prove that every parallel class has the same number of lines and that the number of points on every line is the same. We call these points and lines ordinary. By adding a special point for each parallel class, which is defined to belong to all lines of that class and no other ordinary lines, and adding one special line that contains all the special points, we get a projectively extended net. (“Projectively” refers to the existence of the special line.) Two points might not be in any common line. They are called collinear if they are in a line. They cannot be in more than one common line. A complete quadrangle in a net consists of 4 points, no three collinear, and 6 lines determined by them. A nearly complete quadrangle consists of the same 4 points and 5 of the 6 lines, the 6th line possibly existing or not existing. The dual of Axiom (A<ref>) is

(A<ref>*) (Complete Quadrangle Property) Every nearly complete quadrangle is complete.

A projective extension of a net has the complete quadrangle property if and only if the unextended net has it.

Assume a net has the complete quadrangle property and consider the cases in its extension that do not occur in the net itself. If P_1' and P_2' are special points, they are already collinear. Suppose only P_1' is special: then it is in every line of some parallel class, and that class includes a line that contains P_2'.
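Before turning to the dual characterization, here is a small computational companion to the point-line construction of the preceding section: a Python sketch (the encoding of points and labels is an illustrative choice, not notation from the paper) that builds the orthogonal array of the narrow projective rectangle L_2^2 = L_0(ℤ_2^2K_3), for which m = 3 and n = 5, and checks that it is an OA_1(16, 3, 4, 2).

    from itertools import product
    from collections import Counter

    group = [(a, b) for a in (0, 1) for b in (0, 1)]        # Z_2^2
    op = lambda x, y: (x[0] ^ y[0], x[1] ^ y[1])            # group operation

    # Ordinary lines {a_g, b_gh, c_h}; the special lines A, B, C give the columns.
    # Each ordinary line yields one row whose entries are the labels (here, simply
    # the group elements) of its points on A, B, and C.
    rows = [(g, op(g, h), h) for g, h in product(group, repeat=2)]    # 16 rows

    # Strength 2, index 1: every ordered pair appears exactly once in each pair of columns.
    for c1, c2 in [(0, 1), (0, 2), (1, 2)]:
        counts = Counter((row[c1], row[c2]) for row in rows)
        assert len(counts) == 16 and set(counts.values()) == {1}
    print("OA_1(16, 3, 4, 2) verified")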
The dual of a projective rectangle is a projective extension of a net that has the complete quadrangle property, at least three parallel classes, and at least 2 lines in each parallel class, and vice versa.

We dualize the rectangle axioms and consider how they apply to the net.
* Every two distinct lines have exactly one point in common. This is true by definition if one of the lines is the special line. It is valid in the net except when the lines are parallel. Parallel lines have a common point in the extension.
* There exist four lines in the extended net with no three of them concurrent. Take the special line, three special points, and one ordinary line on each of the special points. If the three ordinary lines are concurrent, replace one of them by a parallel line. Or, take two lines from each of two parallel classes.
* Every point is in at least three distinct lines. This is equivalent for an ordinary point to the existence of at least 3 parallel classes and for a special point to the existence of a parallel to each ordinary line.
* There is a special line D. (A point incident with D is called special. A point that is not in D and a line that is not D are called ordinary.) This is part of the definition of a projectively extended net.
* Each special point belongs to exactly one line with each other point. This is part of the definition of a projectively extended net.
* If two ordinary points P_1 and P_2 are collinear, then any two other points that are collinear with P_1 and P_2 through four distinct lines (i.e., there are four distinct lines P_iP_j' for i,j=1,2), are themselves collinear. It is clear that Axiom (A*<ref>) is the complete quadrangle property for the extended net, excluding the case where P_1 or P_2 is special. Lemma <ref> says that the two formulations are actually equivalent.

§ OPEN PROBLEMS

Our work on nontrivial projective rectangles leaves many unanswered questions. Here are some to add to those in the body of the paper.
* All our examples of projective rectangles are substructures of Pappian projective planes that can be obtained by the subplane construction. Are there other examples?
* We are ignorant of how a special line compares in its intersections with two planes π and π'. Two questions stand out.
* If a plane π has an ordinary line l, there are many other planes in which l is a line. However, if l is special, i.e., l = s ∩π for a special line s, we have no idea whether even one other plane has l as a line.
* We do not know whether there may be another plane π' such that s ∩π∩π' has a specific cardinality (not greater than m), what the possible values of |s ∩π∩π'| may be, whether 0 is a possible value in every nontrivial projective rectangle (aside from L_2^2, where it is not), or in the infinite case whether it is even possible that s ∩π' may properly contain s ∩π.
* We proved the subplane construction of Section <ref> only for Pappian planes, coordinatizable by a field.
* Is there an analytic proof for skew fields?
* Does an analytic proof using alternative algebras succeed in planes with weaker coordinate algebras such as near fields and alternative algebras?
* Is there a synthetic proof for Pappian or Desarguesian or other projective planes?
* Does the construction exist in non-Desarguesian, or non-Moufang, planes?
* Are all planes in a projective rectangle isomorphic? We were unable to find a proof or a counterexample.
* What do the partial Desargues's theorems in Section <ref> imply about automorphisms and coordinatizations?
* Is there a rigorous sense in which a projective rectangle is higher-dimensional, as suggested in Section <ref> and <cit.>?
* If every plane in a projective rectangle is Moufang, it has coordinates in an alternative ring. If all such rings are isomorphic, does the projective rectangle extend to a Moufang plane with an alternative ring that extends that of the planes in it?
* Given a projective rectangle, in what projective planes can it be embedded? In particular, our constructions by subplanes and harmonic extension give projective rectangles embedded in a Pappian plane but the same rectangles may possibly be isomorphically embeddable in planes that are not Pappian, not Desarguesian, maybe not even Moufang, in a nontrivial way, i.e., not by finding the Pappian plane as a subplane of a non-Pappian plane.

dk J. Dénes and A. D. Keedwell, Latin Squares and Their Applications. Academic Press, New York–London, 1974.
dls Jeff H. Dinitz, Alan C. H. Ling, and Douglas R. Stinson, Perfect hash families from transversal designs. Australas. J. Combin. 37 (2007), 233–242.
rfhc Rigoberto Flórez, Harmonic conjugation in harmonic matroids. Discrete Math. 309 (2009), 2365–2372.
bgpp Rigoberto Flórez and Thomas Zaslavsky, Projective planarity of matroids of 3-nets and biased graphs. Australasian J. Combin. 77(2) (2020), 299–338.
pr2 Rigoberto Flórez and Thomas Zaslavsky, Projective rectangles: Incidence graphs and higher structure. In preparation.
pr3 Rigoberto Flórez and Thomas Zaslavsky, Projective rectangles: Harmonic conjugation. In preparation.
Hedayat A. S. Hedayat, N. J. A. Sloane, and J. Stufken, Orthogonal Arrays, Theory and Applications. Springer-Verlag, New York, 1999.
HP Daniel R. Hughes and Fred C. Piper, Projective Planes. Grad. Texts in Math., Vol. 6. Springer-Verlag, New York, 1973. MR 48 #12278. Zbl 267.50018.
ldt Bernt Lindström, A Desarguesian theorem for algebraic combinatorial geometries. Combinatorica 5 (1985), no. 3, 237–239.
lhc Bernt Lindström, On harmonic conjugates in full algebraic combinatorial geometries. Europ. J. Combin. 7 (1986), 259–262.
Ryser H. J. Ryser, Combinatorial Mathematics. Carus Math. Monographs, No. 14. Math. Assoc. Amer., New York, 1963.
vw J. H. van Lint and R. M. Wilson, A Course in Combinatorics. Second ed. Cambridge University Press, Cambridge, Eng., 2001.
b1 Thomas Zaslavsky, Biased graphs. I. Bias, balance, and gains. J. Combin. Theory Ser. B 47 (1989), 32–52.
b2 Thomas Zaslavsky, Biased graphs. II. The three matroids. J. Combin. Theory Ser. B 51 (1991), 46–72.
http://arxiv.org/abs/2307.06173v1
20230712135811
Assessing Augmented Reality Selection Techniques for Passengers in Moving Vehicles: A Real-World User Study
[ "Robin Connor Schramm", "Markus Sasalovici", "Axel Hildebrand", "Ulrich Schwanecke" ]
cs.HC
[ "cs.HC" ]
Assessing Augmented Reality Selection Techniques for Passengers in Moving Vehicles: A Real-World User Study

[email protected], ORCID 0000-0002-4775-4219, Mercedes-Benz Tech Motion GmbH, Gutenbergstraße 19, 70771 Leinfelden-Echterdingen, Germany
[email protected], ORCID 0000-0001-9883-2398, Mercedes-Benz Tech Motion GmbH, Gutenbergstraße 19, 70771 Leinfelden-Echterdingen, Germany
[email protected], ORCID 0009-0008-3038-7775, Mercedes-Benz Tech Motion GmbH, Gutenbergstraße 19, 70771 Leinfelden-Echterdingen, Germany
[email protected], ORCID 0000-0002-0093-3922, Hochschule RheinMain, Unter den Eichen 5, 65195 Wiesbaden, Germany

Nowadays, cars offer many possibilities to explore the world around you by providing location-based information displayed on a 2D map. However, this information is often only available to front-seat passengers while being restricted to in-car displays. To propose a more natural way of interacting with the environment, we implemented an augmented reality head-mounted display to overlay points of interest onto the real world. We aim to compare multiple selection techniques for digital objects located outside a moving car by investigating head gaze with dwell time, head gaze with hardware button, eye gaze with hardware button, and hand pointing with gesture confirmation. Our study was conducted in a moving car under real-world conditions (N=22), with significant results indicating that hand pointing usage led to slower and less precise content selection while eye gaze was preferred by participants and performed on par with the other techniques.

CCS Concepts: Human-centered computing: Empirical studies in HCI; Interaction techniques; Mixed / augmented reality; HCI design and evaluation methods.

[Teaser figure] User study setup. From left to right: in-car AR HMD setup, AR point-of-view during the study, mobile presenter for hardware selection. (Left: a person sits in the passenger seat of a car wearing a Varjo XR-3 HMD and holding a wireless presenter. Middle: the view through the HMD, with digital objects overlaid onto the real world outside the car. Right: the wireless presenter in more detail, held in the right hand.)

§ INTRODUCTION

The automotive sector offers many opportunities for augmented reality (AR) use-cases such as navigation, information, and entertainment, e.g. in the form of points of interest (POIs).
Consequently, the adoption of AR technology in vehicles has been growing rapidly, with major car manufacturers like Audi, BMW, and Mercedes-Benz already incorporating Head-Up Displays (HUDs) and video AR features to display navigation data and other information. Researchers are also examining usage of windshield AR displays which would cover the whole front view <cit.>. Over time, AR hardware is becoming lighter while offering a bigger field-of-view (FoV). With those advances, both Head-Mounted Displays (HMDs) and AR glasses are becoming increasingly suitable for in-car use and should be taken into consideration by automotive Human-Computer Interaction (HCI) researchers <cit.>. AR systems can provide passengers with context specific, real-time, information about their surroundings, such as traffic conditions, POIs, and hazards overlaid on their view of the real world. Data like this puts digital content into a real-world context and makes it potentially easier to understand information at a glance <cit.>. Furthermore, the emergence of autonomous vehicles means that passengers will have a considerable amount of free time while in transit <cit.> to work <cit.> or to play games <cit.>. Additionally, HMDs offer a variety of built-in features for interacting with digital objects, such as head-tracking, hand-tracking, and eye-tracking sensors. Leveraging these natural interaction methods can offer several advantages over traditional touch-based methods, such as increased simplicity and reduced distraction, e.g. while driving <cit.>. However, in order for users to effectively interact with their surroundings, they require precise methods for selecting and manipulating objects and UI elements, which becomes especially critical in dynamic environments such as a moving vehicle. In this work, we evaluate selection techniques for interacting with digital objects outside the car while in motion using HMD-based AR. We build on related work by adapting existing AR selection techniques to an in-car environment. Based off widely researched interaction techniques <cit.>, we investigate the efficacy of the following four techniques: head pointing with dwell time, head pointing with a hardware button, eye gaze with a hardware button, and hand pointing with gesture confirmation. We provide a comprehensive analysis of the results and discuss their implications for researchers and designers who aim to integrate AR-HMDs into future automotive concepts and applications. Thus, the main contributions of this paper are as follows: * A comparison of four selection techniques for use with an AR-HMD in a moving vehicle, tested under real-world conditions. * Detailed analysis of each interaction technique, by portraying individual advantages or disadvantages concerning the dimensions speed, error rate, workload, and usability. * Evaluation of selection techniques and an HMD-based AR system for usage in moving vehicles by means of a semi-structured interview. § RELATED WORK Selection is an essential task for the interaction between a user and virtual elements across mixed reality (MR) systems <cit.>. While selecting objects in MR has been extensively studied <cit.>, the majority of research in this field is focused solely on usage within stationary environments. Nevertheless, insights from this research can still be applied to HCI research in dynamic environments. We focus specifically on remote interaction techniques considering the use-case of POIs outside the vehicle. For example, Blattgerste et al. 
<cit.> conducted a study comparing the performance of head gaze and eye gaze during the aiming phase. They found that eye gaze outperformed head gaze in terms of speed, task load, required head movement and user preference. They also discovered that the advantages of eye gaze increased with larger FoV sizes. Tanriverdi and Jacob <cit.> found that eye gaze-based interaction was faster than hand pointing, even with hardware limitations of the time. On the contrary, Cournia et al. <cit.> found that gaze-based techniques were slower than hand pointing for distant objects. Luro and Sundstedt <cit.> found similar results between eye gaze and a hand controller regarding usability, while eye gaze induced a lower cognitive load in participants. Kyoto et al. <cit.> extensively compared head pointing and eye gaze in combination with various refinement techniques. They found eye gaze to be faster and head pointing to be more accurate. Furthermore, participants preferred hardware-based inputs to confirm selections over gestures. Hansen et al. <cit.> compared eye gaze, head gaze, and mouse input combined with dwell time and click for confirmation in regard to Fitt's Law. They found eye gaze to be less accurate and having a lower throughput, while head pointing was more physically demanding. Techniques that utilize dwell time can suffer from the Midas touch problem <cit.>, where involuntary selections can occur due to simply looking around, e.g. via head gaze or eye gaze. There are also existing works that studied the selection of objects from the inside of a vehicle. However, previous works mostly study interaction with objects inside the car <cit.>, during short driving rounds <cit.>, or don't use AR. For example, Rümelin et al. <cit.> and Fujimura et al. <cit.> investigated hand pointing for interaction with distant objects while in the vehicle, but did not use AR HMDs. Aftab et al. <cit.> explored multimodal interaction of head, eye, and finger direction for drivers referencing outside-vehicle objects. Gomaa et al. <cit.> also adopted a multimodal approach to use eye gaze and hand pointing to reference objects outside the car. While hand-based interaction has been commonly studied inside vehicles, McGill et al. <cit.> pointed out that physical constraints of the in-car environment, such as available space and motion restraints, may impair the effectiveness of such techniques. There are also challenges regarding HMD use for vehicles in transit. McGill et al. <cit.> examined challenges of HMD use for passengers in cars and other transportation systems. While motion sickness, safety and technical obstacles are examples of challenges to overcome, they argued that it is justified to further explore the use of MR headsets in cars. Their reasoning mainly consists of a potentially improved passenger experience in terms of productivity, entertainment and isolation. Furthermore, Riegler et al. <cit.> proposed a research agenda for MR in automated vehicles. One of their overarching points was to investigate the use of HMDs to create useful in-car work and entertainment experiences for passengers. Usually, in-vehicle HCI studies are tested using some form of simulators such as VR simulators <cit.>, CAVE systems or professional driving simulators. Virtual driving simulators can have value, but can not replace real-world studies <cit.>. A driving simulator may for example influence the participant's behavior as it is a less stressful environment compared with a moving vehicle in traffic <cit.>. 
Driving simulators are also constricted regarding realism, as they can not match the fidelity of the real world <cit.>. § STUDY In this section, we describe the user study we conducted to compare different selection methods for AR in moving vehicles. We decided to investigate four input methods within our study, which to the best of our knowledge have not yet been fully explored in the context of a moving vehicle. These methods are head gaze with dwell time (HeadDwell), head gaze with hardware button (HeadHardware), eye gaze with hardware button (EyeHardware), and hand pointing with gesture confirmation via airtap (HandPoint). This design decision to only investigate four techniques was motivated by the need to maintain a sufficiently short duration for the overall experiment, in order to mitigate the risk of terminations due to the risk of motion sickness. §.§ Research Questions Due to the four distinct methods, we chose a within-subject design. We evaluated four selection methods for digital objects in AR with the following research questions: * Q1: Which selection method offers the highest selection speed and lowest error rate? * Q2: Does the selection method have an impact on perceived workload and usability? * Q3: Which selection methods are the most and least preferred by participants? §.§ Participants and Apparatus We recruited 23 participants for the study, but had to exclude one participant's data due to terminating the experiment early because of motion sickness. This resulted in a number of 22 participants (18 male, 4 female) with a mean age of 36.3 years (SD=10.6). All participants were employees of an automotive software consulting company. Ten participants required prescription glasses and seven of them didn't use them during the study because the glasses didn't fit comfortably inside the HMD. Participants were also asked to report their experience with immersive technologies, such as VR or AR. 15 participants had at least some experience in developing immersive technologies and therefore had used those technologies in the past, 5 people had experience using immersive technologies and 3 participants had never used any VR or AR headset. The mean study duration per participant was about 80 minutes. The set-up of our study is shown in Figure <ref>. The participants were seated in the front passenger seat of a premium midsize estate. In order to carry out the study under realistic and controllable conditions, we chose a private traffic-calmed environment. The environment resembled an urban area with some traffic in form of cars, bikes, as well as pedestrians crossing streets. The car's speed limiter has been set to the maximum allowed speed in the study environment of 30 km/h to support uniform driving conditions. For the AR-hardware, we chose the Varjo XR-3 video see-through HMD <cit.> based on its state-of-the-art resolution, 115° FoV, 90Hz refresh rate, video see-through latency of < 20ms as well as for its features, namely built-in eye tracking, and hand tracking via an integrated Ultraleap Gemini <cit.>. Six degrees of freedom (6-DoF) HMD tracking was achieved by using optical tracking and an additional car-fixed Inertial Measurement Unit. The system was connected to a Varjo XR-3 compatible desktop PC running the required Varjo software, the study software in Unity, and the tracking software required for 6-DoF tracking. The selection techniques were implemented based on Microsoft's Mixed Reality Toolkit (MRTK) Version 2.8.3 <cit.>. 
MRTK-based interactions have proven themselves within a wide range of contexts and industry projects and could therefore provide a reliable implementation of the methods. The dwell time for the HeadDwell condition was set to one second, based on the findings of Riegler et al. <cit.>. Dwelling progress did not reset to zero immediately after losing focus; instead, it decayed over the span of two seconds in case car movements influenced the participant's head position. For selections that required hardware input, a simple three-button portable Bluetooth presenter was used, as seen in Figure <ref>. After hovering a POI either via head gaze or eye gaze, depending on the condition tested, the user simply needed to press one of the three buttons to trigger the selection. For the HandPoint condition, we used MRTK's integrated LeapMotion support with the default air tap gesture for remote interaction. The system allowed users to use both hands in any way they wanted.

§.§ Stimuli and Procedure

At the beginning of each study session, participants signed a privacy and information sharing consent agreement and completed a demographic questionnaire. This was followed by an introduction to the experiment's procedure and the task. Participants were not informed about the selection methods before the study in order to reduce possible influence on the quantitative data through comparison or the qualitative feedback through preference bias. Participants wore a Varjo XR-3 in video see-through mode as described in Chapter <ref> and were positioned on the front passenger seat. In each trial, six digital UI elements were shown outside the car in front of the participant, as shown in Figure <ref>. The UI elements resembled restaurant POIs filled with randomized fake data. Each POI consisted of a 2D square sized about 4.14° in angular distance, an image with a white border and the restaurant name, which can be seen in Figure <ref>. POIs were rotated to always face the camera. The POIs' center points were placed on a half circle at 15° angular spacing, shifted to the left and right, which resulted in angles of ±7.5°, ±22.5°, and ±37.5°. This results in uniform 10.86°-sized gaps between each pair of adjacent POIs. The POIs were in car-fixed space and thus followed the translation and rotation of the car. Between each input method, the car was halted and participants were introduced to the next method. This took the form of an explanation and a training session, where participants could test the upcoming method on six unmarked POIs. During this training time, the system didn't record any data. After the participant had successfully selected all six POIs, the trial started automatically and the ride began. There were four trials per participant, one for each of the four selection methods. In order to prevent learning effects, the order of trials was counterbalanced using a Latin square. Before each trial began, participants were instructed to think aloud in order to provide live feedback on situations. We used these comments to enrich the post-study interview. All steps were recorded on video for later analysis. Each trial consisted of 70 rounds, each round lasting five seconds with a one-second pause in between. Six POIs appeared in each round, one of which was marked randomly with a red crosshair. The goal of each round was to select the marked POI without selecting an unmarked POI. After each round, all POIs, whether marked or not, disappeared for a second, then a new set of six POIs appeared with one POI marked.
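The dwell behaviour described in the apparatus above (a one-second dwell whose progress drains over two seconds after focus loss rather than resetting instantly) can be expressed as a per-frame update; the snippet below is only a minimal Python sketch with our own class and parameter names, not the study's actual Unity/MRTK implementation.

    DWELL_TIME = 1.0    # seconds of accumulated focus required to trigger a selection
    DECAY_TIME = 2.0    # seconds over which accumulated progress drains after focus loss

    class DwellSelector:
        def __init__(self):
            self.progress = 0.0    # accumulated dwell, in seconds

        def update(self, focused, dt):
            """Call once per frame with frame time dt; returns True on the frame a selection fires."""
            if focused:
                self.progress += dt
                if self.progress >= DWELL_TIME:
                    self.progress = 0.0
                    return True
            else:
                # Drain at a rate that empties a full dwell in DECAY_TIME seconds.
                self.progress = max(0.0, self.progress - dt * DWELL_TIME / DECAY_TIME)
            return False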
The route was roughly timed to match the duration of the 70 rounds, but was dependent on traffic. The car was stopped after completing the course, regardless of whether the run was finished. In that case, participants proceeded with the last few rounds while standing still. After all rounds were completed, a message with the text "run finished" appeared and the car was stopped, with participants being allowed to remove the HMD. Following each trial, the participants filled out the questionnaires described in chapter <ref> and were asked what they noticed positively and negatively, as well as how much they perceived their surroundings.

§.§ Measures

We chose the System Usability Scale (SUS) <cit.> and the NASA Task Load Index (TLX) <cit.> as quantitative metrics. The SUS provides, via a scoring formula, a single score that indicates the system's overall usability. Scores range from 0 to 100, with higher scores being better. Bangor et al. <cit.> linked SUS scores with adjectives such as good or excellent, which will be used as a basis for comparing and discussing the results. For the NASA TLX, the Raw TLX (RTLX) variant <cit.> was used for simplicity. With RTLX, each subscale is given a score between 0 and 100, while the total score simply consists of the sum of these numbers and thus ranges between 0 and 600. A higher score indicates higher perceived workload by the user. It is also possible to analyze the individual subscales in addition to the overall score <cit.>. For performance evaluation, the system recorded each hover and selection with respective timestamps, round, selection technique, and information on whether the POI hovered or selected was the marked one. For the techniques using a hardware button, each button click was also documented with a timestamp. Based on this data, the elapsed time between each hover and subsequent selection was also calculated, which gives insight into the speed of button presses and air tap gestures after hovering. This metric will be referred to as DeltaSelectHover in the following. Additionally, an error rate was calculated based on how many hovers and selections were correct or incorrect. Furthermore, we collected qualitative data in the form of comments made by participants during the study and through semi-structured interviews. The interview questions after each trial asked for positive and negative feedback on the technique used, how the environment was perceived, and technique-specific questions such as feedback on the dwell time. At the end of the study, participants had to name their favorite and least favorite technique, as well as provide feedback on the HMD and their opinion on AR in moving vehicles.

§ RESULTS

We used a one-way ANOVA with the selection method as independent variable and RTLX scores, SUS scores and selection times as dependent variables. When the assumption of normality or the assumption of homogeneity of variances was violated, we used the non-parametric Kruskal-Wallis H test. Post-hoc tests were then conducted using Dwass-Steel-Critchlow-Fligner (DSCF) pairwise comparisons. Only significant results assuming a 5% significance level or otherwise notable results are reported.

§.§ Time

The selection technique had a significant effect on the mean selection time (χ^2(3) = 1141, p < 0.001) as well as the elapsed time between a hover and the subsequent selection (χ^2(3) = 2281, p < 0.001). The mean, median, and standard deviation for each condition for selection time and the time between hover and selection are shown in Table <ref>.
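For concreteness, the per-selection metrics and the omnibus test used in this section can be computed roughly as follows; this is only a sketch with a hypothetical event-log format and made-up numbers, not the study's actual analysis pipeline, and the DSCF post-hoc comparisons are omitted (they are not part of SciPy and would need an additional package such as scikit-posthocs).

    from scipy.stats import kruskal

    # Hypothetical event log: (timestamp_s, kind, poi_id, is_marked)
    def delta_select_hover(events):
        """Pair each selection with the most recent hover on the same POI."""
        last_hover, deltas = {}, []
        for t, kind, poi, is_marked in events:
            if kind == "hover":
                last_hover[poi] = t
            elif kind == "select" and poi in last_hover:
                deltas.append(t - last_hover[poi])
        return deltas

    def error_rate(events):
        selects = [e for e in events if e[1] == "select"]
        return sum(1 for e in selects if not e[3]) / len(selects) if selects else 0.0

    log = [(0.00, "hover", 3, True), (0.42, "select", 3, True)]
    print(delta_select_hover(log), error_rate(log))    # [0.42] 0.0

    # Kruskal-Wallis H test over per-condition selection times (made-up values).
    times = {
        "HeadDwell": [2.4, 2.5, 2.3], "HeadHardware": [1.9, 2.0, 1.8],
        "EyeHardware": [1.8, 1.7, 1.9], "HandPoint": [2.7, 2.9, 2.6],
    }
    H, p = kruskal(*times.values())
    print(f"H({len(times) - 1}) = {H:.2f}, p = {p:.4f}")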
For select time, DSCF pairwise comparisons found that the mean value was significantly different for all groups with p < 0.001 for each pair. Similarly, DSCF pairwise comparisons found significant differences between all pairs with p < 0.001, except for the pair HeadHardware and EyeHardware (W = 2.02, p = 0.480). This shows that the time between a hover and the subsequent selection via hardware button is not significantly influenced by the technique used to hover the element. The techniques using a hardware button were overall faster regarding both the mean time participants needed for one selection and the delta between a hover and the following selection. Notably, selections via EyeHardware were fastest with a mean time of 1.82s, followed by HeadHardware with 1.95s. HandPoint was the slowest technique with a mean time of 2.76s. Furthermore, the mean delta time of HeadDwell was found to be 0.993s, slightly below the one-second threshold specified in Unity. This is likely due to Unity's Time.deltaTime, which can vary depending on the framerate <cit.>.

§.§ Error Rate

Table <ref> shows the respective correct, wrong and missing selections for each method. The maximum possible number of correct selections was 1540. Notably, while using HandPoint, participants didn't select 30% of the marked targets. This could stem from multiple factors, which will be discussed in chapter <ref>. Furthermore, with HeadDwell, participants selected 96.56% of marked POIs with only 11 selections on unmarked POIs, even though HeadDwell was the only tested technique with implicit selection via dwell time.

§.§ Perceived Workload

The analysis of the RTLX questionnaire showed significant differences between techniques for perceived workload regarding all six subscales and the total workload score. The results of the non-parametric Kruskal-Wallis H test are shown in Table <ref>; mean values for each subscale grouped by selection technique are shown in Table <ref>. Total RTLX scores for each method are visualized in Figure <ref>. The median for EyeHardware is notably lower than the mean (M = 123, Mdn = 67.5). This could stem from minor problems a few participants had with the eye-tracking system, thus resulting in a higher mean and lower median value. This is also reflected most notably by the subscales performance (M = 4.32, Mdn = 2.50) and frustration (M = 3.91, Mdn = 2.50). DSCF pairwise comparisons were conducted for each pairing of methods for each subscale and the score. Regarding mental workload, only HandPoint and EyeHardware had significant differences (W = -4.022, p = 0.023). EyeHardware was mentally the least demanding while HandPoint was mentally the most demanding, which is also reflected in statements in the qualitative interview. Physical workload showed significant differences between HandPoint and HeadHardware (W = -3.811, p = 0.036) as well as between HandPoint and EyeHardware (W = -5.594, p < 0.001). This can be explained by the low physical effort the hardware button required. Selection times for techniques using the button were the fastest, thus participants only needed to move their head or eyes for a short time in the direction of the marked POI and could then move back into a resting position. In contrast, the average and median speed for HandPoint were the lowest, meaning that participants had to keep their arms raised longer before moving them back into a resting position, resulting in higher physical workload.
Performance, effort, frustration, temporal demand and the final score all had significant differences between HandPoint paired with each other technique. Pairings between HeadDwell, HeadHardware and EyeHardware showed no significant differences for those subscales. This difference between the HandPointing technique and the other techniques is also visible in mean scores. §.§ Usability There was a statistically significant difference between methods regarding usability scores (χ^2(3) = 25.3, p < 0.001). Figure <ref> and Table <ref> show the data for SUS scores. DSCF pairwise comparisons showed significant differences between HandPoint and HeadDwell (W = -5.299, p = 0.001), HandPoint and HeadHard (W = 5.868, p < 0.001) and HandPoint and EyeHard (W = 5.953, p < 0.001), which is also visible in the mean SUS scores. §.§ User Preferences The results of the reported user preferences are presented in Figure <ref>. Among the evaluated methods, EyeHardware was the most preferred, with 59.1% (N = 13) of participants favoring it. HeadHardware (N = 4; 18.2%) and HeadDwell (N = 5; 22.7%) were similarly preferred. Conversely, HandPoint was the least preferred option, with 72.7% (N = 16) of participants selecting it as their least favorite and no participant preferring it. § DISCUSSION In this section, we discuss and analyze the findings obtained from our user study and semi-structured interview, in relation to the research questions that were previously established. Statements of most participants were given in german and were translated to english for better understanding. As mentioned in chapter <ref>, seven participants that required prescription glasses did not use them during the study and thus had at least some vision limitations. None of these participants mentioned any limitations due to impaired vision during the study and in the semi-structured interview. In addition, there was no significant effect of those participants on SUS (t(86)=0.446, p=0.657) and RTLX (t(86)=-0.560, p=0.577) scores. §.§ Influences on Speed and Error Rate Using the data recorded in our study software and presented in Tables <ref> and <ref>, we address the research question Q1: Which selection method offers the highest selection speed and lowest error rate? in the following paragraphs. Overall, techniques utilizing a hardware button demonstrated the highest performance in terms of both selection time and time elapsed between hover and selection. Qualitative interviews conducted with study participants revealed that many described these techniques as "easy" and "fast" when asked to provide positive feedback. These findings are consistent with those reported in a previous study by Kyoto et al. <cit.>. The mean select time for the HeadDwell technique with a one-second dwell time (M = 2.45s) was found to be 0.5 seconds slower compared to the mean select time of the HeadHard method (M = 1.95s). This difference is nearly identical to the mean delta time between select and hover for HeadDwell (M = 0.993s) and HeadHard (M = 0.494s), which is 0.499. Therefore, the overall slower performance of HeadDwell can be attributed solely to the dwell time of one second, in contrast to the approximately 0.5 seconds it took to press the hardware button for HeadHard. It is worth noting that none of the participants found the chosen dwell time to be significantly too low or high. Participants were not aware of the Midas touch problem, but most stated that they were aware of accidental selections when using HeadDwell. 
HandPoint was the slowest technique regarding selection time (M = 2.76s) and the second slowest technique regarding the delta time between select and hover (M = 0.930s), only being faster than HeadDwell. Several participants, especially those with little or no experience with AR interaction techniques, needed time to learn how to move their hands efficiently, react to instances where the system lost track of their hands, and perform the air tap gesture efficiently. Regarding speed, participants mostly expressed negative opinions on HandPoint such as "the technique is not that intuitive and I first needed some practice" and "this one is more complicated, I had to concentrate more on the selection". Concerning error rate, head-based techniques had the highest hit-rate on marked POIs, as shown in Table <ref>, with both HeadDwell and HeadHard being perceived as highly reliable. Positive feedback for HeadDwell included multiple participants noting the technique's precision, with comments such as "really precise when the car is not shaking or in curves" and "I had a high hit rate with this one". Similarly, positive feedback for HeadHard included comments such as "it was very clear what was happening" and also noting the precision when not influenced by g-forces. The low percentage of selections that were missed for both techniques can be attributed to the high influence of car movements on the direction of the head gaze. Participants could not comfortably rest their head on the seats headrest due to the HMDs headstrap and therefore could not use the head rest to support and stabilize their movements. Most negative feedback regarding the head-based techniques was related to this issue. HeadDwell received negative comments such as "Sometimes I lost the dwell in curves", "It was difficult to hold the cursor on the element", "I had to steer my head against the g-forces in curves". Similarly, comments regarding HeadHard included "The heavy headset was greatly influenced by the car forces" and "I had to press the button multiple times at speedbumps or in sudden curves". HeadDwell's low percentage of incorrect selections (0.71%) is noteworthy given the Midas touch problem of implicit selection confirmation. The rather long dwell time of one second <cit.> and the participant's cautiousness could explain this finding. Some participants reported that the HeadDwell technique restricted their ability to freely look around and required more concentration to avoid unintentional selections with statements such as "I could not look around freely" and "I had to concentrate more to not select anything after selecting the marked one". Although statistical significance was not found for supporting this problem, HeadDwell was perceived as more mentally demanding than the hardware-based techniques, as demonstrated by higher mean scores on the RTLX-subscale mental and overall RTLX score. EyeHard had the highest percentage of false selections (2.86%) and the second highest percentage of missed selections (7.79%). This could potentially stem from technical errors, either from the eye-tracking system, from the study software, or a combination of both. Some participants reported that a button press triggered two selections at once, indicating a technical issue. Some participants had problems, where the accuracy and eye-recognition were worse. Video recordings of the HMD-view in those cases showed the eye gaze pointer being offset a few degrees up or down. 
Another problem that was mentioned was called "flickering," where the gaze cursor jittered at the border of the POI hitbox, therefore triggering multiple hovers every few milliseconds and making selection more difficult as the user had to press the button at the right time. This error was also visible in some of the recordings of the headset view. To fix this error, a small cooldown window where the hover stays active after hovering off a POI could be implemented. HandPoint had the highest percentage of missed selections overall, with 30% of POIs not being selected. This technique faced the most technical limitations, despite the use of state-of-the-art hardware. Many participants struggled with performing the air tap gesture, and the system frequently lost track of their hands, making selection difficult. Participants also reported feeling restricted in their movements and not wanting to disturb the driver while gesturing around. In particular, selecting the rightmost POIs posed a challenge for HandPoint. Most participants attempted to select them with their right hand, resulting in hitting their hand against the door or needing to rotate their upper body or twist their arm. Some participants even resorted to using their left hand, which required additional mental effort. For example, one participant stated "First, I had to think about which hand to use before I could even think about the rest". §.§ Influences on Workload and Usability The research question Q2: Does the selection method have an impact on perceived workload and usability? will be mainly answered with results stemming from the RTLX and SUS questionnaires, as described in Chapters <ref> and <ref>. The used selection method had significant influence on both perceived workload and usability. Inspection of the data in Figures <ref> and <ref> as well as pairwise comparisons revealed HandPoint to be the deviating method. HandPoint showed to have significantly higher mean scores in most subscales (performance, effort, frustration, and temporal demand) as well as in the overall scores for RTLX and SUS compared with the other methods. Lower perceived performance is in line with the measured performance shown in the high error and missing rates explained in Chapter <ref> and listed in Table <ref>. Higher effort is also in line with already described problems within HandPoint, like the difficulities participants had with the tracking and the confirmation gesture. Frustration was called out verbally by some participants who had problems with the technique. Higher temporal demand e.g. participants feeling time pressure is also reflected by the slower average selection time and DeltaSelectHover time discussed in the previous chapter. Those problems probably had the similar influence on the usability score. The study found no statistically significant differences in the RTLX subscales and workload score for the pairings of Head­Dwell, Head­Hardware, and Eye­Hardware. However, the mean values show, that Eye­Hard had the lowest workload, followed by Head­Hard and then Head­Dwell. One possible explanation for this is the difference in neck strain between the techniques. Eye gaze requires minimal head movement and participants can quickly glance at the POI and then reset to a more comfortable position after a quick button press. Additionally, the FoV for the Varjo XR-3 is around 115° <cit.>, which means for eye gaze, that the POI just needs to be visible in this window. 
In contrast, head gaze requires the POI to be at the center of the view, which requires more head movement and may cause neck strain. Participants reported that the movements for the head pointing techniques felt unnatural, and they experienced difficulty keeping their head steady and discomfort in the neck when viewing the outermost POIs. This was reinforced by the HeadDwell condition, where the head had to remain in those more unnatural positions for one second. These findings align with those of Kyoto et al. <cit.>, where participants found eye pointing easy and fast, but reported difficulties in keeping their head steady and discomfort in the neck. Regarding usability, HandPoint was the worst-rated method out of the four. DSCF pairwise comparisons showed significant differences between HandPoint and each other technique. The other techniques had similar mean scores. For easier interpretation, Bangor et al. <cit.> assigned adjectives to ranges of SUS scores. According to their ratings, the usability of the HandPoint technique could be described as ok (Mean = 58.0, "ok" > 50.9), HeadDwell as good (Mean = 83.8, "good" > 71.4), EyeHard as excellent (Mean = 85.6, "excellent" > 85.5), and HeadHard also as excellent (Mean = 87.6, "excellent" > 85.5).

§.§ Influences on User Preference

Results of the semi-structured interview and participants' statements during the study were used to answer the research question Q3: Which selection methods are the most and least preferred by participants? User preferences are described in Chapter <ref> and shown in Figure <ref>. EyeHardware was clearly the preferred method of most participants, with 59.1% (N = 13) favoring it. At the same time, it was intuitive and fast ("It can't get any simpler than that") while still coming off as striking ("It was kind of like magic"). The participants who picked EyeHard as their least favorite in our study all had technical problems with the tracking system. Our results are in line with Blattgerste et al. <cit.>, where eye gaze was preferred over head gaze. However, these results are contrary to the study Kyoto et al. <cit.> conducted, where head interactions were slightly preferred over eye gaze. For them, eye pointing was also described as easy and fast, but apparently not accurate enough. This difference could stem from the limited FoV of the HoloLens of 30x17° compared with the 115° of the Varjo XR-3 <cit.>, as the advantages of eye gaze over head gaze are dependent on FoV <cit.>. HeadHardware (N = 4; 18.2%) and HeadDwell (N = 5; 22.7%) were similarly preferred. Apart from the neck strain some participants felt after using these techniques non-stop for seven minutes and the influence of g-forces on the head movements, there were not many negative comments about the techniques. They performed precisely and reliably but were not the fastest and not the most exciting. Notably, HeadHardware was the only method that was not selected as the least favorite by any of the participants. Conversely, HandPoint was the least preferred option, with 72.7% (N = 16) selecting it as their least favored. Notably, hand pointing was not chosen as a favorite method by any of the participants. While our study is not perfectly comparable with the study of Kyoto et al. <cit.>, some connections can be drawn nevertheless, as their combination of head pointing and eye pointing with gesture confirmation were also both among the least favored methods.
§.§ Limitations As mentioned in Chapter <ref>, one participant's data was excluded due to almost immediate symptoms of motion sickness and the subsequent termination of the study run. We plan to employ the Motion Sickness Susceptibility Questionnaire <cit.> in future studies to mitigate the risk of motion sickness in participants. Although we performed initial eye-tracking calibration for all participants, we cannot exclude that errors might have occurred within this process. They could originate from user error as well as from issues within the technical system. Within our study, this resulted in an offset of the eye-tracking cursor, impacting selection precision for some participants. Furthermore, such errors could also have resulted in flawed video see-through quality. The employed 6-DoF tracking system did not perform without flaws, especially within the first minute of driving. Here, errors concerning the alignment of the car-fixed coordinate system resulted in displayed POIs drifting away or changing position and rotation in relation to the car. This caused irritation across participants, but also impacted the active round of the task, as targets were moving and therefore harder to select. After a few seconds, the issue resolved itself, with POIs returning to their initial positions, allowing the study to proceed as usual. Additionally, a minor misconfiguration of the tracking system's coordinates resulted in a slight shift of the POI alignment to the right, making POIs on the right side more challenging to reach than those on the left. This offset was consistent across all participants. Furthermore, we attempted to synchronize the length of the driven course with the required duration of a complete trial. However, this was not always successful across all participants, and most of them ended up performing the last few rounds at a standstill instead of in the moving vehicle as intended. As this behavior is also realistic for real-world scenarios, such as waiting at a traffic light, and was consistent across participants, we do not anticipate any significant impact on the results. Finally, the built-in hand tracking occasionally failed to recognize participants' hands, which may have been due to dynamic lighting in an outdoor environment, where bright sunlight can reduce the performance of hand tracking. Furthermore, the air tap gesture for selection was occasionally not detected by the system, resulting in longer selection times, more missed selections, and higher frustration. Further studies testing the LeapMotion for in-car interaction may give more insight into these problems. § CONCLUSION AND FUTURE WORK In conclusion, we explored four different selection techniques for digital objects outside of moving vehicles and identified which techniques are best suited for use with AR HMDs. Our findings bridge the gap between conventional AR interaction research in stationary environments and the practical implementation of AR technology in the automotive space. Additionally, we emphasize the significance of testing HMD-based AR systems in real-world scenarios, as it can yield valuable insights necessary for the successful deployment of such systems for consumers in the future. Our investigation included head pointing with dwell time, head pointing with a hardware button, eye gaze with a hardware button, and hand pointing with gesture confirmation. 
Our user study conducted under real-world conditions demonstrated that eye gaze combined with a hardware button was the fastest and most preferred technique, with the lowest perceived workload. While the head-based techniques were not preferred as much as eye gaze, they had a lower error rate, similarly low perceived workload values, and similarly high usability scores. In contrast, hand pointing with gesture confirmation was the least favored technique across all tested categories. Overall, these findings provide valuable insights into the selection of digital objects in AR outside of moving vehicles and contribute to the field of in-car AR-based HCI research. Following our findings, we want to improve the tested techniques, for example by exploring the combination of multiple techniques in a multimodal approach and by integrating them more tightly into the car's ecosystem. Additionally, we plan to investigate interaction with location-based data, explore other environments such as highways, map the track characteristics to the data, and explore more advanced interaction techniques, such as creating or manipulating digital objects in AR.
http://arxiv.org/abs/2307.07482v1
20230714170649
Dual-Query Multiple Instance Learning for Dynamic Meta-Embedding based Tumor Classification
[ "Simon Holdenried-Krafft", "Peter Somers", "Ivonne A. Montes-Majarro", "Diana Silimon", "Cristina Tarín", "Falko Fend", "Hendrik P. A. Lensch" ]
cs.CV
[ "cs.CV" ]
Dual-Query Multiple Instance Learning for Dynamic Meta-Embedding based Tumor Classification Simon Holdenried-Krafft, Peter Somers, Ivonne A. Montes-Majarro, Diana Silimon, Cristina Tarín, Falko Fend, Hendrik P. A. Lensch August 12, 2023 ============================================================== Whole slide image (WSI) assessment is a challenging and crucial step in cancer diagnosis and treatment planning. WSIs require high magnifications to facilitate sub-cellular analysis. Precise annotations for patch- or even pixel-level classifications in the context of gigapixel WSIs are tedious to acquire and require domain experts. Coarse-grained labels, on the other hand, are easily accessible, which makes WSI classification an ideal use case for multiple instance learning (MIL). In our work, we propose a novel embedding-based Dual-Query MIL pipeline (DQ-MIL). We contribute to both the embedding and aggregation steps. Since all-purpose visual feature representations are not yet available, embedding models are currently limited in terms of generalizability. With our work, we explore the potential of dynamic meta-embedding based on cutting-edge self-supervised pre-trained models in the context of MIL. Moreover, we propose a new MIL architecture capable of combining MIL-attention with correlated self-attention. The Dual-Query Perceiver design of our approach allows us to leverage the concept of self-distillation and to combine the advantages of a small model in the context of a low data regime with the rich feature representation of a larger model. We demonstrate the superior performance of our approach on three histopathological datasets, where we show an improvement of up to 10% over state-of-the-art approaches. § INTRODUCTION Histopathological slide assessment is the gold standard for grading and treatment planning for almost all types of cancer <cit.>. In computational pathology, slide scanners convert tissue specimens on glass slides into digital images. Due to the required subcellular details, the scanned slide specimens, also called whole slide images (WSIs), can have more than a hundred thousand pixels in each dimension. Processing such gigapixel images entirely is computationally intractable. Hence, WSIs are subdivided into patches, reducing the computational burden and allowing each patch to be processed with well-established architectures such as convolutional neural networks (CNNs) or Transformers <cit.>. Unfortunately, precise annotations for patch- or even pixel-level classifications in the context of gigapixel images are labor intensive to acquire and require expert knowledge. Instead, slide-level labels, such as tissue type, cancer grade, or molecular subtype, are widely available and less time-consuming to collect. Multiple instance learning (MIL), a subset of weakly supervised learning introduced by <cit.>, can make use of such coarse-grained labels and has shown its effectiveness in the field of WSI classification in a variety of recent studies <cit.>. MIL defines one sample as a bag of instances, and there are two major categories: instance-based and embedding-based <cit.>. Different studies indicate that embedding-based MIL has superior performance compared to instance-based MIL <cit.>. Embedding-based approaches first transform all instances into learned feature vectors, aggregate them into a joint bag representation, and conclude with a bag-level classification. The initial step of acquiring robust visual features is demanding, especially for WSI classification, where relevant features depend on the cancer entity <cit.>. 
But even for the same entity, WSIs can vary from hospital to hospital and show severe differences in appearance due to slightly different staining chemicals <cit.>. Thus, out-of-distribution generalization remains a challenge for embedding models, and since the quality of the feature representations directly affects the performance on the downstream task <cit.>, it is not negligible. Although aggregation models can supplement the embedding architecture by leveraging the supervised training signal <cit.> to enrich the representations, they come with inherent issues. In classical MIL, a WSI is defined as a bag and its corresponding patches are assumed to be independent and identically distributed (i.i.d.) instances. Given a binary cancer classification task, the whole bag is labeled as cancerous as soon as a single patch is cancerous. For highly unbalanced bags, where only a small fraction of patches are actually positive (diseased), the training signal diminishes due to the dominance of negative instances <cit.>. In such cases, simple models tend to misclassify. While larger models can still learn rich bag representations, they tend to overfit in the small data regimes common in medical image analysis. Another problematic aspect of classical MIL in the context of WSI classification is the i.i.d. assumption <cit.>. In fact, pathologists exploit structural information to enrich smaller areas with the surrounding context. Thus, correlating instances seems natural, but due to the large number of instances within one bag, it can be computationally demanding, especially for Transformer-based approaches <cit.>. In our work, we address the various topics mentioned above. We conduct extensive experiments to validate the benefits of our approach and evaluate our method on three different publicly available medical datasets for tumor classification and cancer subtyping. Our contributions are threefold: * We introduce a novel MIL architecture, named Dual-Query MIL, inspired by the Perceiver <cit.>. Due to its design, the Perceiver decouples the input size from the dimensionality of a latent representation, eliminating the quadratic scaling problem of the classical Transformer architecture <cit.>. Our dual-query design in the cross-attention layer combines i.i.d. MIL attention <cit.> with correlative self-attention <cit.> in one architecture. * We introduce a self-distillation loss function, which allows us to leverage the advantages of both a small and a larger aggregation model in one framework, preventing overfitting while simultaneously acquiring rich feature representations. * We explore the potential of dynamic meta-embedding <cit.> based on three state-of-the-art self-supervised learning (SSL) methods in the context of MIL. Our experiments show the superiority of dynamic meta-embedding compared to individual embeddings and indicate a step towards robust visual representations in the context of medical image analysis. § RELATED WORK As our work focuses on deep MIL-based histopathological slide assessment, we want to provide an overview of the most recent trends and the role of SSL embedding specific to this field of research. For further literature, we refer to <cit.>. Deterministic MIL pooling operations, such as max or mean pooling, are limited in terms of performance. Therefore, <cit.> base the pooling operation on DNNs, which assign attention scores to i.i.d. instances, defining the contribution of each instance to the final bag representation. 
<cit.> extended the idea of attention-based instance scoring. They utilize instance-level clustering to guide the learning and to constrain the feature space by creating class-specific pseudo-labels and subsequent class-specific attention branches for multi-class settings. All these methods neglect correlations between instances, whereas graph- or capsule-based architectures <cit.> incorporate contextual information between instances. This resembles a pathologist's procedure of connecting local characteristics, such as nucleus shape and size, with the global context, e.g. the surrounding cell architecture. Most recent architectures use non-local attention. Dual-stream MIL (DS MIL) <cit.> consists of one branch, which detects the most significant instance using a max-pooling operation, and a second branch correlating the detected characteristic instance with all remaining instances using a Transformer-like one-to-all attention mechanism. The one-to-many approach by <cit.> similarly consists of two stages: an iterative patch selection (IPS) and a small cross-attention Transformer stage. Using the IPS module drastically reduces the number of patches per bag, accelerating the aggregation step. <cit.> utilize an approximated all-to-all self-attention based on the Nyström method <cit.>. This allows for large inputs, as is crucial for WSI classification, and reduces the computational cost of multi-head self-attention. Our proposed method is based on the Perceiver model by <cit.>. Instead of the classical all-to-all Transformer self-attention <cit.> with its quadratic scaling problem, the Perceiver relies on an asymmetric attention mechanism. This reduces the computational complexity and decouples the input size from the depth of the architecture. The Perceiver exploits a cross-attention layer to transform the input into a condensed latent array. The all-to-all self-attention is only applied within this latent array. Furthermore, we combine this approach with the one-to-all query design from <cit.>. Our adaptation combines MIL and Transformer attention in one architecture. Whereas most other methods rely on the cross-entropy (CE) loss with bag labels as the training signal <cit.>, we utilize the concept of self-distillation <cit.> to fully exploit the potential of our approach. Besides the aggregation model, we also explore new ways of feature extraction and merging. As indicated by <cit.>, the generalization of SSL representations is improved compared to supervised learning (SL) representations. However, instead of training an embedding model with histopathological samples using SSL <cit.>, we explore the potential of dynamic meta-embedding in the context of MIL based on three of the most recent pre-trained SSL methods (SwAV <cit.>, DINO <cit.>, DINOv2 <cit.>). This idea from the vibrant field of natural language processing showed increased robustness and generalization by combining multiple embedding techniques that are complementary to one another <cit.>. § METHODOLOGY During classical supervised training, a model learns to estimate the given label y corresponding to an input x ∈ℝ^D. Instead, multiple instance learning is set-based. Each set consists of multiple inputs, or instances, and is called a bag X = {x_1, ..., x_N}. The number of instances N within the bag can vary between bags. Moreover, we assume that there exists a label y_n with n=1, ..., N for each instance within the bag, which is unknown. Only one global label Y is given for the whole bag X. 
In a binary MIL classification task, the bag label is positive as soon as a single instance label is positive. To estimate the final label of bag X, multiple instance learning requires suitable transformations represented by f and g. The choice of f and g defines whether it is an instance-based or an embedding-based approach <cit.>. In instance-based MIL, f transforms each instance into scores, and the function g is a pooling operation, such as max- or mean-pooling, aggregating the scores. In embedding-based MIL, f first projects the instances into a newly learned embedding space, and the function g afterwards distills all instances corresponding to one bag into a joint bag representation. In our proposed method, illustrated in Figure <ref>, we touch upon both transformations f and g of the embedding-based procedure. First, we introduce the concept of meta-embedding in the context of multiple instance learning with the Dynamic Meta-Embedder (DME). Furthermore, we propose the Dual-Query (DQ) Perceiver, based on the Perceiver architecture <cit.>. Our method leverages the flexibility of the Perceiver and joins MIL and self-attention in one framework. §.§ Dynamic Meta-Embedding for Multiple Instance Learning Instance-embedding models transform a raw input patch x_i into a feature vector 𝐡_i=f(x_i). We utilize three SSL pre-trained encoding models to distill the raw image into a single feature representation. Rather than just concatenating the embeddings, we utilize the training signal of the downstream task to dynamically learn the new representation <cit.>. Our Dynamic Meta-Embedder consists of two ResNet50 architectures <cit.> and one Vision Transformer <cit.> (ViT-L/14). The two ResNet models were pre-trained on the ImageNet dataset <cit.>, whereas the ViT used the LVD-142M dataset <cit.>. [Figure: Dynamic Meta-Embedder. The embedding models first condense each patch into a representation vector; each representation is then processed by a separate trainable linear layer, and the results are concatenated into a single vector.] Besides the architecture, all three embedders differ in terms of the SSL technique used for pretraining. One ResNet model was pre-trained using SwAV <cit.>, the other one utilizes the DINO approach <cit.>. The ViT model was pre-trained with the most recent SSL method, DINOv2 <cit.>. DINOv2 joins ideas from various SSL methods: the image-level loss of DINO <cit.>, the masked image modeling of iBOT <cit.>, the Sinkhorn-Knopp centering of SwAV <cit.>, and more. After piping the input patch through each of the embedding models, the Dynamic Meta-Embedder projects all three embeddings of various lengths to the same dimensionality using separate linear layers per embedder. This step allows exploiting the training signal from the bag label to finetune the representations and to extract task- and domain-specific features. Figure <ref> depicts the DME module. §.§ Dual-Query Perceiver As the aggregator model to summarize the bag, we propose the Dual-Query Perceiver. This architecture is based on the Perceiver and Perceiver IO ideas <cit.>. We leverage the flexibility of the proposed querying mechanism and propose the novel MIL architecture shown in Figure <ref>. The key components of our method are the Dual-Query Cross-Attention Module and the Latent Transformer. Both include a query-key-value (QKV) attention block, the core element in all Transformer-like architectures. 
It transforms the input into queries 𝐐, keys 𝐊, and values 𝐕 by piping the input through three multi-layer perceptrons (MLPs). The general attention operation itself can be expressed as: Attention(𝐐,𝐊,𝐕) = softmax(𝐐𝐊^T/τ)𝐕, where the temperature τ scales the dot-product of 𝐐 and 𝐊^T. There are two types of attention, self-attention and cross-attention. In self-attention, the queries originate from the same source as the keys and values, while in cross-attention, the queries do not share the same origin. The DQ Perceiver combines both attention categories. In the cross-attention module, the keys 𝐊∈ℝ^N× d_k and values 𝐕∈ℝ^N× d_k are projections of the input array (bag of instances) of shape N× C. The queries 𝐐_1∈ℝ^M× d_k and 𝐐_2∈ℝ^1× d_k are projections of two learned latent arrays, one of size M× D and the other of shape 1× D, with M+1≪ N. As the query defines the shape of the output, the input array is distilled into a latent array of fixed size. The dual-query module, illustrated in Figure <ref>, leverages this behavior and creates two pathways. The first pathway is based on the regular Perceiver pipeline. Here we use 𝐐_1 to compress the input into a latent array, which afterwards gets processed by the Latent Transformer. This module, shown in Figure <ref>, performs self-attention on the latent array. The latent array is piped through the Latent Transformer J times to improve the features. In the final step of this pathway, the latent array is averaged along the instance dimension M to obtain the self-attention token t_sa. The second pathway is based on the idea of MIL-attention, where an attention score a weights each instance, so the aggregation function g corresponds to a weighted sum, see Equation <ref>. This can be transferred to a single-query cross-attention. Similar to the method proposed by <cit.>, the query 𝐐_2 of size 1× d_k is used to predict attention scores for each projected instance 𝐤_i=𝐖_k 𝐡_i. Afterwards, a second projected version of the instance 𝐡_i, 𝐯_i=𝐖_v 𝐡_i, is scaled by the predicted attention score a_i and summed up to build the MIL-attention token t_mil: t_mil = ∑_i=1^N a_i 𝐯_i = ∑_i=1^N a_i 𝐖_v 𝐡_i = ∑_i=1^N𝐐_2𝐤_i^T/τ𝐖_v 𝐡_i. Each of the final bag representations t_sa and t_mil is processed by a separate MLP-based classifier in combination with a softmax operation to acquire the corresponding probability distributions p_sa and p_mil. The final output during inference is derived with a simple, balanced weighting mechanism, which can be expressed as p = b p_sa + (1 - b) p_mil, with b as a hyperparameter. This combination of outputs enables architecture-immanent supervision and the utilization of a self-distillation-based learning strategy. §.§.§ Self-distillation Loss Self-distillation exploits components within an architecture to set up a knowledge-distillation-like learning scheme, in which shallow parts of a network are treated as an independent student architecture <cit.>. For the DQ Perceiver, the final loss function ℒ_SD is a combination of the main cross-entropy (CE) loss ℒ_CE(p_sa, Y) of the deepest part (the Perceiver branch) and three additional self-distillation losses of the shallow part (the MIL branch). Like the main branch, the Cross-Attention Module also receives supervision from the bag label Y. Moreover, the deeper Perceiver pathway supervises the Cross-Attention Module using the Kullback-Leibler divergence between p_mil and p_sa. This is complemented by an L2 loss, also called a hint <cit.>, inducing the MIL-attention token t_mil to fit the self-attention token t_sa. 
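To make the two pathways concrete, the following NumPy sketch mimics the dual-query cross-attention and the combination of the two bag tokens at inference. The dimensions, the single latent self-attention step standing in for the Latent Transformer, the softmax normalization of the MIL scores, and the linear classifiers are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V, tau):
    # General QKV attention: softmax(Q K^T / tau) V
    return softmax(Q @ K.T / tau, axis=-1) @ V

N, C, d_k, M, tau, b = 1000, 96, 32, 16, np.sqrt(32.0), 0.5

H = rng.normal(size=(N, C))            # bag of N instance embeddings (from the DME)
W_k = 0.1 * rng.normal(size=(C, d_k))  # key projection
W_v = 0.1 * rng.normal(size=(C, d_k))  # value projection
K, V = H @ W_k, H @ W_v

Q1 = rng.normal(size=(M, d_k))         # learned latent array -> Perceiver pathway
Q2 = rng.normal(size=(1, d_k))         # learned single query -> MIL pathway

# Pathway 1: cross-attention into the latent array, one self-attention step as a
# stand-in for the Latent Transformer, then averaging over the M latent slots.
latent = attention(Q1, K, V, tau)
latent = attention(latent, latent, latent, tau)
t_sa = latent.mean(axis=0)

# Pathway 2: single-query MIL attention; one score per instance, weighted sum of values.
a = softmax(Q2 @ K.T / tau, axis=-1)
t_mil = (a @ V)[0]

# Placeholder linear classifiers and the balanced combination p = b*p_sa + (1-b)*p_mil.
W_c = 0.1 * rng.normal(size=(d_k, 2))
p_sa, p_mil = softmax(t_sa @ W_c), softmax(t_mil @ W_c)
p = b * p_sa + (1 - b) * p_mil
```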
The hyperparameters α and λ are used for balancing the different terms: ℒ_SD = ℒ_CE(p_sa, Y) + αℒ_CE(p_mil, Y) + (1-α)ℒ_KL(p_mil, p_sa) + λ‖ t_sa - t_mil‖_2^2. § EXPERIMENTS AND RESULTS §.§ Experimental Design We thoroughly evaluate DQ-MIL on three different histopathological datasets (Camelyon16, TCGA-BRCA, and TCGA-BLCA). The tasks are cancer classification and subtyping. Details regarding the different datasets, their curation, as well as the implementation are covered in the supplementary material. We report our evaluation using two metrics, the area under the curve (AUC) and the accuracy score. Furthermore, we evaluated the localization of the most significant instances qualitatively. During pre-processing, each WSI is subdivided into patches x_i ∈ℝ^256×256×3 at a magnification of 20x. Patches containing background or artifacts are sorted out by combining threshold-based filtering <cit.> with a pre-trained tissue segmentation U-Net <cit.>. §.§ Tumor Classification and Cancer Subtyping The two tasks we use for evaluation, tumor classification and cancer subtyping, cover complementary challenges. Slides from the Camelyon16 dataset are highly unbalanced, where less than 10% of the tissue area per slide covers positive instances (cancer) <cit.>. In contrast, The Cancer Genome Atlas (TCGA) datasets <cit.>, which we use to test performance in cancer subtyping, require that the network does not just focus on small regions within the tissue but rather evaluates the global appearance of WSIs. For cancer subtyping, we use two publicly available datasets of different entities, breast cancer (BRCA) and bladder cancer (BLCA). The results of the classification are summarized in Table <ref>. We realized that the pre-processing step, especially the Otsu-based filtering, has a strong impact on the final evaluation metrics. Thus, all values reported are based on experiments we ran under the exact same conditions, using the dynamic meta-embedding approach for all of the different methods. Our proposed self-distilled DQ-MIL achieves state-of-the-art performance on the TCGA-BRCA dataset. For the Camelyon16 dataset, we achieve an improvement of up to 6.4% in AUC and 5.4% in accuracy. For the BLCA dataset, the improvement is even more significant, with up to 10.1% in AUC and 5.1% in accuracy compared to the second-best performing networks per metric. To assess whether the DQ Perceiver is able to localize the most relevant areas with regard to the classification task, we also conduct a qualitative analysis, illustrated in Figure <ref>. We utilize the pixel-wise annotations of the Camelyon16 dataset to evaluate the match between patches with the top 5% attention scores (highlighted in red in Figure <ref> (c-f)) and cancerous regions annotated by domain experts (green contours in Figure <ref> (c-f)). We can see that the DQ Perceiver is congruent with the annotated regions and is even able to detect small cancer areas, as shown in Figure <ref> (c). §.§ Ablation Study §.§.§ Effects of the Individual Components of the Dual-Query Perceiver In this section, we evaluate how the individual components of our approach perform on the different classification tasks; the results are given in Table <ref>. Each sub-component, namely the pure MIL Cross-Attention, the original Perceiver proposed by <cit.>, and the Dual-Query Perceiver (DQ-MIL) without additional self-distillation, is tested on all three medical datasets. For all sub-networks, we use a single cross-entropy loss during training. 
The final logits of the regular DQ-MIL are derived with the weighting mechanism mentioned above with b=0.5, leading to p = 1/2(p_sa + p_mil). For this ablation study, we also use the dynamic meta-embedding strategy. We can see that the DQ-MIL-SD approach is slightly better than or on par with its sub-components. The table also indicates that the self-distillation loss is the key element to boost the performance of the DQ-MIL-SD architecture and to join the benefits of the small MIL Cross-Attention model and the larger Perceiver. The table further points out that for the cancer subtyping task, a correlation between the instances is slightly beneficial for improving the AUC metric, whereas the MIL-attention approach achieves higher accuracy values. §.§.§ Effect of the Dynamic Meta-Embedding Strategy We also conduct an ablation study to indicate the benefits and advantages of dynamic meta-embedding. Here we used the DQ-MIL-SD approach as our fixed evaluation model. We trained the model using the different embedding methods shown in Table <ref>. Each embedding model varies in terms of architecture and SSL strategy. Furthermore, we compare the out-of-domain embedding methods (^*) with two in-domain pre-trained embedding methods (^†). The in-domain methods are a ResNet18 pre-trained on the Camelyon16 dataset using SimCLR <cit.> and a Vision Transformer (ViT) pre-trained on a large and comprehensive TCGA dataset covering multiple entities <cit.>. The performance evaluations show the advantage of the proposed Dynamic Meta-Embedder. They also indicate that the aggregation model can compensate for the embedding models' lack of domain knowledge. Although we were surprised to observe that the in-domain methods did not generalize well across our evaluation datasets, this resonates with recent findings by <cit.>. § CONCLUSION AND FUTURE WORK In our work, we present a novel MIL approach called DQ-MIL-SD for the field of histopathological slide assessment. We introduce a dual-query cross-attention layer to combine single-token MIL cross-attention with multi-token Perceiver cross- and self-attention in one architecture. By introducing a self-distillation loss, we can leverage the advantages of a small and a larger aggregation model. The proposed DQ Perceiver outperforms or is on par with recent state-of-the-art approaches. In addition, combining multiple pre-trained embedders with the Dynamic Meta-Embedder ensures consistent performance across datasets. The next step will be to extend this approach to a multi-modal setting, allowing us to fully leverage the flexibility of the Perceiver and to explore its potential in the field of molecular subtyping. Acknowledgements This work was sponsored by the Graduate School 2543/1 “Intraoperative Multisensory Tissue Differentiation in Oncology” (project ID 40947457), funded by the German Research Foundation (DFG - Deutsche Forschungsgemeinschaft). This work was also supported in part by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A.
http://arxiv.org/abs/2307.04539v1
20230710130519
Neural functional theory for inhomogeneous fluids: Fundamentals and applications
[ "Florian Sammüller", "Sophie Hermann", "Daniel de las Heras", "Matthias Schmidt" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.stat-mech" ]
Theoretische Physik II, Physikalisches Institut, Universität Bayreuth, D-95447 Bayreuth, Germany We present a hybrid scheme based on classical density functional theory and machine learning for determining the equilibrium structure and thermodynamics of inhomogeneous fluids. The exact functional map from the density profile to the one-body direct correlation function is represented locally by a deep neural network. We substantiate the general framework for the hard sphere fluid and use grand canonical Monte Carlo simulation data of systems in randomized external environments during training and as reference. Functional calculus is implemented on the basis of the neural network to access higher-order correlation functions via automatic differentiation and the free energy via functional line integration. Thermal Noether sum rules are validated explicitly. We demonstrate the use of the neural functional in the self-consistent calculation of density profiles. The results outperform those from state-of-the-art fundamental measure density functional theory. The low cost of solving an associated Euler-Lagrange equation allows bridging the gap from the system size of the original training data to macroscopic predictions upon maintaining near-simulation microscopic precision. These results establish the machine learning of functionals as an effective tool in the multiscale description of soft matter. Neural functional theory for inhomogeneous fluids: Fundamentals and applications Florian Sammüller, Sophie Hermann, Daniel de las Heras, Matthias Schmidt August 12, 2023 ================================================================================ § INTRODUCTION The problem with density functional theory (DFT) is that you do not know the density functional. Although this quip by the late and great Yasha Rosenfeld <cit.> was certainly meant in jest to a certain degree, it does epitomize a structural assessment of classical DFT <cit.>. As a general formulation of many-body statistical physics, the framework comprises a beautiful and far-reaching skeleton of mathematical formalism centered around a formally exact variational minimization principle <cit.>. In practice however, the theory needs to be fleshed out by approximations of all means conceivable in our efforts to get to grips with the coupled many-body problem under consideration. Specifically, it is the excess (over ideal gas) intrinsic Helmholtz free energy F_exc[ρ], expressed as a functional of the position-resolved density profile ρ(r⃗), which needs to be approximated. Decades of significant theoretical efforts have provided us with a single exact functional, that for nonoverlapping hard rods in one spatial dimension, as obtained by another hero in the field, Jerry Percus <cit.>. Nevertheless, useful DFT approximations range from the local density approximation for large-scale features which are decoupled from microscopic length scales, to square-gradient functionals with their roots in the 19th century, to the arguably most important modern development, that of the fundamental measure theory (FMT) as kicked off by Rosenfeld in 1989 <cit.> and much refined ever since <cit.>. FMT is a geometry-based framework for the description of hard sphere systems and it has deep roots in the Percus-Yevick <cit.> and scaled-particle theories <cit.>, which Rosenfeld was able to unify and generalize based on his unique theoretical insights <cit.>. The realm of soft matter <cit.> stretches far beyond the hard sphere fluid. 
FMT remains relevant though in the description of a reference system as used e.g. in studies of hydrophobicity, where the behaviour of realistic water models <cit.> is traced back to the simpler Lennard-Jones fluid, which in turn is approximated via the hard sphere FMT functional plus a mean-field contribution for describing interparticle attraction <cit.>. Further topical uses of FMT include the analysis of the three-dimensional electrolyte structure near a solid surface <cit.> and the problem of the decay length of correlations in electrolytes <cit.>. There is a current surge in the use of machine learning techniques in soft matter, e.g. for its characterization <cit.>, engineering of self-assembly <cit.>, structure detection <cit.>, and for learning many-body potentials <cit.>. Within classical DFT, machine learning was used to address ordering of confined liquid crystals <cit.>, and free energy functionals were obtained for one-dimensional systems from convolutional <cit.> and equation-learning <cit.> networks as well as within a Bayesian inference approach <cit.>. <cit.> used machine learning to improve the standard mean-field approximation of the excess Helmholtz free-energy functional for the Lennard-Jones fluid. In nonequilibrium, <cit.> have reported a method to machine-learn the functional relationship of the local internal force for a steady uniaxial compressional flow of a Lennard-Jones fluid at constant temperature. As prescribed by power functional theory <cit.>, the functional dependence in nonequilibrium not only incorporates the density profile but also the one-body current. In this work, we return to the problem of describing and predicting the structure and thermodynamics of inhomogeneous equilibrium fluids. We show that a neural network can be trained to accurately represent the functional dependence of the one-body direct correlation function with respect to the density profile. While the presented methods are applicable to virtually arbitrary fluids with short-ranged interparticle interactions, we focus here on the well-studied hard-sphere fluid in order to exemplify our framework and to challenge the available highly accurate analytic approaches from liquid integral equation theory and FMT. Reference data for training and testing the model is provided by grand canonical Monte Carlo (GCMC) simulations that cover a broad range of randomized inhomogeneous environments in planar geometry. We implement functional calculus on the basis of the trained neural functional to infer related physical quantities and demonstrate their consistency with known literature results both in bulk and in inhomogeneous systems. In particular, we highlight the accessibility of the fluid pair structure, the determination of free energies and equations of state as well as the validation of thermal Noether sum rules <cit.>. These results corroborate that the neural functional exceeds its role as a mere interpolation device and instead possesses significant representational power as a genuine density functional for the prediction of nontrivially related physical properties. We apply the trained neural network in the DFT Euler-Lagrange equation, which enables the self-consistent calculation of density profiles and which hence constitutes a neural-network-based DFT or short neural DFT. 
This method relieves conventional DFT of the burden of having to find suitable analytic approximations, while still surpassing even the most profound existing treatments of the considered hard sphere fluid via FMT functionals <cit.> in accuracy. We further demonstrate the fitness of the method for straightforward application to multiscale problems. Neural DFT therefore provides a way to transfer near-simulation microscopic precision to macroscopic length scales, which serves as a technique to predict properties of inhomogeneous systems that far exceed the typical box sizes of the original training data. This work is structured as follows. The relevant physical background of liquid state theory is provided in Sec. <ref>. Details of the simulations as well as of the neural network are given in Secs. <ref> and <ref>. The training procedure and results for the achieved metrics that measure its convergence are presented in Sec. <ref>. We proceed by testing physical properties of the trained model and use automatic differentiation of the neural network in Sec. <ref> to access pair correlations, which are then compared to bulk results from both the Percus-Yevick theory and from simulations. The consistency of the neural direct correlation functional to satisfy thermal Noether sum rules is validated in Sec. <ref>, and different ways to obtain the bulk equation of state as well as free energies in inhomogeneous systems are given in Sec. <ref>. In Sec. <ref>, we show the application of the neural functional to the self-consistent calculation of density profiles via the DFT Euler-Lagrange equation and describe the technical details and conceptual advantages of this neural DFT over analytic approaches. In Sec. <ref>, the results are compared to those from FMT, and in Sec. <ref>, the relevance of the method for making macroscopic predictions is illustrated for cases of randomized external potential and for sedimentation between hard walls on length scales that far exceed the training simulation box sizes. We conclude in Sec. <ref> and give an outlook to possible improvements and extensions of the method as well as to its application to different fluid types, in more general geometries, and in nonequilibrium. § MACHINE LEARNING INTRINSIC CORRELATIONS §.§ Physical background We start with the standard relation for the one-body direct correlation function c_1(r⃗) of liquid state theory <cit.>, c_1(r⃗) = lnρ(r⃗) + β V_ext(r⃗) - βμ, where r⃗ denotes the spatial position and β = 1 / (k_B T) with the Boltzmann constant k_B and absolute temperature T. The three terms on the right hand side of Eq. (<ref>) represent respectively the ideal gas contribution, the external potential V_ext(r⃗), and the influence of the particle bath at chemical potential μ. The logarithm in Eq. (<ref>) is understood as ln[Λ^3 ρ(r⃗)] with the thermal wavelength Λ, which can be set to the particle size σ without any loss of information in the present classical context. For a prescribed external potential V_ext(r⃗), knowledge of the corresponding equilibrium density profile ρ(r⃗) allows one to compute c_1(r⃗) explicitly via Eq. (<ref>). This relationship can be viewed as a locally resolved chemical potential balance: the contribution from the ideal gas, k_B T lnρ(r⃗), from the external potential, V_ext(r⃗), and from interparticle interactions, - k_B T c_1(r⃗), add up at each position to μ, which is necessarily uniform throughout an equilibrium system. However, the notation in Eq. 
(<ref>) is oblivious to a central result shown by <cit.> in 1979, thereby kicking off a modern theory for the description of inhomogeneous fluids. For a given type of internal interactions, the spatial variation of the function c_1(r⃗) is already uniquely determined by the spatial form of the density profile ρ(r⃗) alone, without the need to invoke the external potential explicitly. From this vantage point of classical DFT, the dependence of c_1(r⃗) on ρ(r⃗) is not merely pointwise but rather with respect to the values of the entire density profile, which determine c_1(r⃗) at each given position r⃗. Formally, this relationship is exact <cit.> and it constitutes a functional dependence c_1(r⃗; [ρ]), which is indicated by brackets here and in the following and which is in general nonlinear and nonlocal. As we will demonstrate, the existence of such a universal functional mapping makes the problem of investigating inhomogeneous fluids particularly amenable to supervised machine learning techniques. In most formulations of classical DFT, one exploits the fact that the intrinsic excess free energy functional F_exc[ρ] acts as a functional generator such that the one-body direct correlation function is obtained via functional differentiation with respect to the density profile, c_1(r⃗; [ρ]) = - δβ F_exc[ρ]/δρ(r⃗). A compact description of standard formulae for the calculation of functional derivatives can be found in Ref. Schmidt2022. In order to make progress in concrete applications, one typically needs to rely on using an approximate form of F_exc[ρ] for the specific model under consideration, as determined by its interparticle interactions. DFT is a powerful framework, as using c_1(r⃗; [ρ]) obtained from Eq. (<ref>) with a suitable expression for F_exc[ρ] turns Eq. (<ref>) into an implicit equation for the equilibrium density profile ρ(r⃗). In the presence of a known form of V_ext(r⃗), one can typically solve Eq. (<ref>) very efficiently, allowing ease of parameter sweeps, e.g. for exhaustive phase diagram explorations. On the downside, F_exc[ρ] and thus also c_1(r⃗; [ρ]) remain approximate, and the development of analytic tools has certainly slowed down over several years if not decades. Here we proceed differently and bypass the excess free energy functional F_exc[ρ] at first. Instead, we use a deep neural network to learn and to represent the functional relationship ρ(r⃗) → c_1(r⃗) directly, which has significant advantages both for the generation of suitable training data as well as for the applicability of the model in the determination of fluid equilibria. This investigation is based on GCMC simulations that serve to provide training, validation and test data. Discriminating between these three roles of use is standard practice in machine learning and we give further details below. §.§ Simulation method Generating the simulation data is straightforward and we use the following strategy, adapted to planar situations where the position dependence is on a single position variable x while the system remains translationally invariant in the y- and z-directions. This geometry is highly relevant to identify the physics in planar capillary and adsorption situations and facilitates ease of accurate sampling. We employ randomized simulation conditions by generating external potentials of the form V_ext(x) = ∑_n=1^4 A_n sin(2 π n x/L + ϕ_n) + ∑_n V_n^lin(x), where A_n and ϕ_n are randomly selected Fourier coefficients and phases, respectively, and L is the simulation box length in the x-direction. 
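As an illustration, a minimal NumPy sketch of this randomization is given below; the sampling ranges for A_n, ϕ_n and the linear-segment parameters V_1, V_2 are assumptions where the text does not specify them, and the function name is ours.

```python
import numpy as np

def random_external_potential(x, L=20.0, rng=None):
    """Randomized planar external potential: four sinusoidal modes plus up to
    five piecewise linear segments, following the form of V_ext(x) given above."""
    rng = np.random.default_rng() if rng is None else rng
    V = np.zeros_like(x)
    for n in range(1, 5):
        A = rng.uniform(-1.0, 1.0)               # assumed amplitude range
        phi = rng.uniform(0.0, 2.0 * np.pi)
        V += A * np.sin(2.0 * np.pi * n * x / L + phi)
    for _ in range(rng.integers(0, 6)):          # up to five linear segments
        x1, x2 = np.sort(rng.uniform(0.0, L, size=2))
        V1, V2 = rng.uniform(-1.0, 1.0, size=2)  # assumed range
        mask = (x > x1) & (x < x2)
        V[mask] += V1 + (V2 - V1) * (x[mask] - x1) / (x2 - x1)
    return V

x = np.arange(0.0, 20.0, 0.01)                   # grid in units of sigma
V_ext = random_external_potential(x)
```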
We choose L = 20 σ, although there is no specific compliance requirement for the neural network (see below), and the lateral box lengths are set to 10σ to minimize finite-size effects. Periodic boundary conditions apply in all spatial directions. The sinusoidal terms in V_ext(x) are complemented by up to five piecewise linear functions V^lin(x) = V_1 + (V_2 - V_1) (x - x_1) / (x_2 - x_1) for x_1 < x < x_2 and 0 otherwise, for which the parameters 0 < x_1 < x_2 < L, V_1 and V_2 are again chosen randomly. Additionally, we explicitly impose planar hard walls in a subset of the simulations by setting V_ext(x) = ∞ for x < x_w/2 and x > L - x_w/2, i.e. near the borders of the simulation domain; the width x_w of the wall is chosen randomly in the interval 1 ≤ x_w / σ≤ 3. To cover a broad range from dilute to dense systems, the chemical potential is chosen randomly within the range -5 ≤βμ≤ 10 for each respective GCMC simulation run. The observed mean densities range from 0.006 σ^-3 to 0.803 σ^-3, yet smaller and much larger local densities occur due to the inhomogeneous nature of the systems. In total, 750 such GCMC runs are used, where for a given form of V_ext(x) the planar one-body profiles ρ(x) and c_1(x) are obtained. The former is acquired from straightforward histogram filling and the latter from evaluating Eq. (<ref>) on the basis of the sampled histogram for ρ(x) as well as the known form of V_ext(x) and value of μ for the specific run under consideration. As Eq. (<ref>) is undefined for vanishing density, we have excluded regions where ρ(x) = 0, such as within the hard walls. By modern standards of computational resources, the workload for the generation of the simulation data is only moderate at a total CPU time of ∼ 10^4 hours. §.§ Neural network We use a deep neural network <cit.> to represent the functional map from the density profile to the local value of the one-body direct correlation function at a given point. That is, instead of the entire function, we construct the network to output only the scalar value c_1(x) for a certain position x when supplied with the surrounding inhomogeneous density. The relevant section of the density profile comprises the values of ρ(x) in a specified window around the considered location x, as described below. Despite the locality of the method, access to the entire (discretized) one-body direct correlation profile is immediate via evaluation of the neural network at pertinent positions x across the domain of interest. Multiple local evaluations of the network remain performant on highly parallel hardware such as GPUs when passing the input accordingly in batches. A schematic picture of the network architecture is given in Fig. <ref> and is explained in the following. The functional dependence on the density profile is realized by providing discretized values of ρ(x) on an equidistant grid with resolution Δ x = 0.01 σ. As c_1(x; [ρ]) depends only on the immediately surrounding density profile around a fixed location x, we restrict the input to a sufficiently large window |x - x'| ≤ x_c of neighboring positions x'. We choose the cutoff x_c = 2.56 σ based on simulation data for the bulk direct correlation function <cit.> and on the evaluation of training metrics for different window sizes x_c. Increasing the value of x_c further led to no improvement in the performance of the trained neural network. This behavior is expected from theoretical considerations, as the one-body direct correlation function vanishes quickly for short-ranged pair potentials <cit.>. 
We recall that in FMT, x_c = σ by construction. Note that the choice of c_1(x; [ρ]) as our target functional is not coincidental; rather, its quick spatial decay is a pivotal characteristic central to the success of our method. To contrast this, assume that one attempts to model the functional mapping μ_loc(x) = μ - V_ext(x) →ρ(x), thereby naively imitating the simulation procedure. This task poses major challenges due to the long-range nature of density correlations induced by an external potential, which is circumvented in our case by the choice of a more manageable target functional. The input layer involves 513 nodes and is followed by three fully-connected hidden layers with 512 units each. The output layer consists of a single node for the scalar value of c_1(x) at the specified location x. Crucially, continuously differentiable activation functions such as the exponential linear unit or the softplus function are used throughout the network for the realization of nonlinearities. This leads to substantial improvements particularly when evaluating two-body quantities via automatic differentiation (see Secs. <ref> and <ref>) as compared to the standard rectified linear unit (ReLU). We attribute this superior performance to the fact that activation functions which are not continuously differentiable and which vanish in certain domain ranges (such as ReLU) reinforce sparsity of the activation output within the hidden layers <cit.>. While this property is desired in many machine learning tasks (e.g. for classification), it hinders the accurate representation of the functional relation c_1(x; [ρ]) in our case. The resulting neural functional for the one-body direct correlation function is denoted in the following by c_1^⋆(x; [ρ]), and related quantities which follow from its inference are marked accordingly by a superscript star. §.§ Training procedure and metrics The machine learning routines are implemented in Keras/Tensorflow <cit.> and we use the standard Adam <cit.> optimizer for the adjustment of the network parameters in order to fit c_1^⋆(x; [ρ]) against the simulation reference c_1(x). The problem at hand is a regression task. Hence, the mean squared error is chosen as a suitable loss function and the mean average error serves as a validation metric. Since the model shall infer the pointwise value c_1(x) from a density section around a specified location x, see Fig. <ref>, the simulation data cannot be passed as is to the neural network. Instead, windowed views of the density profile have to be generated prior to the training loop, which correspond to the target value c_1(x) at the center x of the respective window. A periodic continuation of all simulation profiles is valid due to the periodic boundary conditions. Additionally, we use data augmentation to benefit from the inherent mirror symmetry (i.e. x → -x) of the problem and thus effectively double the number of training data sets. As is customary, we separate the independent simulation results prior to performing the machine learning routines: 150 are kept aside as a test set, 150 serve as validation data to monitor training progress, and 450 are used for the actual training of the neural network. Modeling the functional relationship of c_1(x; [ρ]) locally, i.e. inferring pointwise values individually instead of outputting the entire profile at once, has numerous conceptual and practical advantages. 
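A schematic Keras/TensorFlow sketch of the setup described above is given below; the window handling and the target construction via c_1 = ln ρ + βV_ext - βμ follow the text, while the helper names, the float precision, and the omitted learning-rate schedule and data pipeline are simplifying assumptions of ours.

```python
import numpy as np
import tensorflow as tf

dx, x_c = 0.01, 2.56                 # grid spacing and cutoff (in units of sigma)
w = int(round(x_c / dx))             # 256 bins on either side -> 513 input nodes

def make_training_pairs(rho, V_ext, beta_mu, beta=1.0):
    """Windowed density views and pointwise targets c1(x) = ln(rho) + beta*V_ext - beta*mu,
    excluding bins with vanishing density (e.g. inside hard walls)."""
    idx = np.flatnonzero(rho > 0.0)
    targets = np.log(rho[idx]) + beta * V_ext[idx] - beta_mu
    rho_pbc = np.concatenate([rho[-w:], rho, rho[:w]])     # periodic continuation
    windows = np.stack([rho_pbc[i:i + 2 * w + 1] for i in idx])
    return windows.astype(np.float32), targets.astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2 * w + 1,)),                    # 513 discretized density values
    tf.keras.layers.Dense(512, activation="softplus"),
    tf.keras.layers.Dense(512, activation="softplus"),
    tf.keras.layers.Dense(512, activation="softplus"),
    tf.keras.layers.Dense(1),                              # scalar c1*(x)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse", metrics=["mae"])
# model.fit(windows, targets, epochs=100, batch_size=256, validation_data=...),
# with an exponentially decaying learning rate (about 5% per epoch) as in the text.
```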
Regarding the feasibility of the neural network in concrete applications, one is free to choose an arbitrary box length L when gathering training data and, more importantly, to readjust the value of L when using the trained neural network for making predictions (cf. Sec. <ref>). From a physical point of view, providing only local density information has the merit of already capturing the correlated short-range behavior of c_1(x; [ρ]). If the neural network were to output the entire one-body direct correlation profile from a given density profile ρ(x) at once, this inherent locality would have to be learned instead, hence leading to a much more elaborate training process. Lastly, the fine-grained nature of the training data turns out to be highly beneficial from a machine learning perspective. Note that one can generate 9 · 10^5 input-output pairs from 450 training simulations in the present context (with the values being doubled after data augmentation). The increased cardinality of the training set enables better generalization of the model and also prevents overfitting, e.g. to the statistical noise of the sampled profiles. We train the model for 100 epochs in batches of size 256 and decrease the learning rate exponentially by ∼ 5% per epoch from an initial value of 0.001. This results in a best mean average error of 0.0022 over the validation set, which is of the same order as the estimated average noise of the simulation data for c_1(x). Therefore, we deem our neural network to possess full representational power of the local functional relationship c_1(x; [ρ]) within the conditions of the provided simulation data. § EXAMINING THE NEURAL CORRELATION FUNCTIONAL §.§ Two-body bulk correlations Besides monitoring standard metrics such as the mean average error over a test set, arguably deeper physical insights into the rigorous structure of the statistical mechanics at hand serve for assessing the quality of the neural functional c_1^⋆(x; [ρ]). We first ascertain that the model gives an accurate representation of the physics of bulk fluids. Despite the apparent simplicity of this case, this is a highly nontrivial test, as the training data solely covered (strongly) inhomogeneous situations. For this, we investigate the pair structure and aim at implementing the two-body direct correlation functional, which is formally defined as the functional derivative <cit.> c_2(r⃗, r⃗'; [ρ]) = δ c_1(r⃗; [ρ])/δρ(r⃗'). On the basis of the neural network, we can make use of powerful automatic differentiation techniques. This allows us to create an immediate analog of Eq. (<ref>) via c_2^⋆(x, x'; [ρ]) = δ c_1^⋆(x; [ρ]) / δρ(x'), where the functional derivative δ / δρ(x') is evaluated by reverse mode automatic differentiation with respect to the input values of the discretized density profile. In common machine learning frameworks, this requires only high-level code (e.g. in Keras/Tensorflow <cit.>). The numerical evaluation of c_2^⋆(x, x'; [ρ]) is performant, as reverse mode automatic differentiation generates executable code that is suitable for building derivatives with respect to multiple input variables simultaneously. We obtain the bulk direct correlation function in planar geometry as the special case c̅_2^b(x, ρ_b) = c_2(0, x; [ρ_b]), where we have introduced the bulk density ρ_b(x) = ρ_b = const. (In the notation, the parametric dependence on ρ_b is dropped in the following.) 
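The following sketch indicates how the planar bulk function c̅_2^b⋆(x) can be read off from the network by reverse-mode automatic differentiation at a constant density input; it assumes the `model` defined in the previous sketch, and the division by the bin width Δx (to convert the discrete gradient into a functional derivative) is our own convention.

```python
import numpy as np
import tensorflow as tf

def c2_bulk_planar(model, rho_b, n_bins=513, dx=0.01):
    """Planar bulk two-body direct correlation c2_bar(x) = delta c1*(0) / delta rho(x),
    obtained by differentiating the network output with respect to its density input."""
    rho = tf.constant(np.full((1, n_bins), rho_b, dtype=np.float32))
    with tf.GradientTape() as tape:
        tape.watch(rho)
        c1 = model(rho)              # c1*(x=0; [rho_b]); the window centre plays the role of x=0
    grad = tape.gradient(c1, rho)    # derivative with respect to every density bin at once
    return grad.numpy()[0] / dx      # divide by dx to approximate the functional derivative

# Example usage: c2b_star = c2_bulk_planar(model, rho_b=0.7)
```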
Note that c̅_2^b(x) is distinct from the more common radial representation c_2^b(r), as our geometry implies an integration over the lateral directions y and z, i.e. c̅_2^b(x) = ∫ d y d z c_2^b(r = √(x^2 + y^2 + z^2)) = 2 π∫_x^∞ d r r c_2^b(r), where the last equality follows from using radial coordinates and substitution. We commence by performing a Fourier transform of the planar real space representation c̅_2^b(x) and utilize radial symmetry in Fourier space. This acts as a deconvolution of Eq. (<ref>) and directly yields the radial Fourier (Hankel) transform of c_2^b(r), c̃_2^b(k) = (4 π/k) ∫_0^∞ d r r sin(kr) c_2^b(r). The inverse transform is identical to Eq. (<ref>) up to a factor of (2 π)^-3 upon interchanging r and k. To go further, the bulk Ornstein-Zernike equation <cit.> c̃_2^b(k) = h̃(k)/[1 + ρ_b h̃(k)] is used to obtain the total correlation function h̃(k) from c̃_2^b(k) in Fourier space after rearrangement. Recall that the radial distribution function follows directly via g(r) = h(r) + 1; here h(r) is the real space representation of h̃(k). The static structure factor S(k) is then given as S(k) = 1 + ρ_b h̃(k). In Fig. <ref>, results of c̅_2^b(x), c̃_2^b(k), h̃(k) and S(k) are shown for different bulk densities ρ_b σ^3 = 0.4, 0.7, 0.9. From our neural functional, we obtain c̅_2^b ⋆(x) = δ c_1^⋆(0; [ρ]) / δρ(x) |_ρ = ρ_b, i.e. the autodifferentiated network is evaluated at spatially constant density ρ_b. The total correlation function and the static structure factor follow from Eqs. (<ref>) and (<ref>) after having computed c̃_2^b ⋆(k) via a numerical Fourier transform of c̅_2^b ⋆(x). For comparison, we also depict reference data obtained analytically from the Percus-Yevick theory <cit.> and reproduced from simulation results of <cit.>. Good agreement is found between simulation and the autodifferentiated neural network, while the Percus-Yevick result shows noticeable deviations in c̅_2^b(x). The latter overestimates the depth of the core region x < σ and this discrepancy increases for larger bulk densities. The neural functional yields a clear improvement over the Percus-Yevick theory and shows only marginal differences to the simulation results of Ref. Groot1987 for both the planar real space and the radial Fourier space representation of the two-body direct correlation function. In h̃(k) and S(k), the severity of the discrepancies of simulation and machine-learning data to the Percus-Yevick results decreases, but a difference is still noticeable in particular for large bulk densities. A slight mismatch to the simulation reference is observed in the magnitude and phase of the oscillations of the Percus-Yevick static structure factor S_PY(k), and this correction is reproduced very well by the neural functional. Note that although one arrives at radial representations of the quantities c̃_2^b(k), h̃(k) and S(k) in Fourier space, performing the radial backtransform to real space numerically according to the inverse of Eq. (<ref>) is generally a "notoriously difficult task" <cit.> and is not considered here. This successful test reveals that, while being trained solely with one-body profiles, the neural functional c_1^⋆(x; [ρ]) contains full two-body information equivalent in bulk to the radial distribution function g(r). The pair correlations can be accessed via automatic differentiation at low computational cost and they are consistent with known bulk results. 
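A compact sketch of this Fourier-space route is given below; it assumes that the planar c̅_2^b(x) is tabulated symmetrically about x = 0 on a uniform grid, and the discretization details (FFT conventions, truncation) are simplifications rather than the authors' numerics.

```python
import numpy as np

def bulk_structure(c2_planar, dx, rho_b):
    """From the planar real-space c2_bar(x) (array centred on x = 0) to c2_tilde(k),
    h_tilde(k) and S(k) using the bulk Ornstein-Zernike relation."""
    n = len(c2_planar)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)
    # The 1D Fourier transform of the laterally integrated function equals the
    # radial (Hankel) transform c2_tilde(k) of c2(r).
    c2_k = np.real(np.fft.rfft(np.fft.ifftshift(c2_planar))) * dx
    h_k = c2_k / (1.0 - rho_b * c2_k)      # rearranged Ornstein-Zernike equation
    S_k = 1.0 + rho_b * h_k                # static structure factor
    return k, c2_k, h_k, S_k
```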
We recall that this is a mere byproduct of the neural network and that no such two-body information has been explicitly incorporated in the training. More so, Fig. <ref> demonstrates that the bulk quantities c̅_2^b(x), c̃_2^b(k), h̃(k) and S(k) as obtained from c_1^⋆(x; [ρ]) substantially outperform the Percus-Yevick theory and almost attain simulation quality. In Appendix <ref>, we illustrate that higher-order correlations such as the three-body direct correlation functional c_3^⋆(x, x', x”; [ρ]) follow analogously via nested automatic differentiation. On this level, differences to FMT results are even more prominent than the deviations to the two-body Percus-Yevick results. As we will show in Sec. <ref>, the accuracy of predictions from the neural network also holds in inhomogeneous situations, where FMT serves again as an analogous and arguably even more challenging theoretical baseline than the Percus-Yevick bulk theory. Before doing so, we lay out additional consistency tests and quality assessments that are applicable in inhomogeneous systems. §.§ Noether sum rules In order to further elucidate whether c_1^⋆(x; [ρ]) quantitatively reproduces fundamental properties of equilibrium many-body systems, we make use of exact sum rules that follow from thermal Noether invariance <cit.>: ∇ c_1(r⃗) = ∫ d r⃗' c_2(r⃗, r⃗') ∇' ρ(r⃗'), ∫ d r⃗ ρ(r⃗) ∫ d r⃗' ρ(r⃗') ∇ c_2(r⃗, r⃗') = 0. Both Eqs. (<ref>) and (<ref>) apply in any equilibrated inhomogeneous system regardless of the type of internal interactions. While the interparticle interaction potential does not appear explicitly in Eqs. (<ref>) and (<ref>), it nevertheless determines the functionals c_1(r⃗; [ρ]) and c_2(r⃗, r⃗'; [ρ]). Recall that the spatial gradient of the one-body direct correlation function can be identified with the internal equilibrium force profile, f⃗_int(r⃗) = k_B T ∇ c_1(r⃗) <cit.>. We verify that the neural functional complies with the above sum rules (<ref>) and (<ref>) as follows. Analogous to Sec. <ref>, we use autodifferentiation to evaluate Eq. (<ref>), but this time retain the full inhomogeneous structure of c_2^⋆(x, x'; [ρ]). The left hand side of Eq. (<ref>) is obtained straightforwardly from simple evaluation of the neural functional and numerical spatial differentiation. As input for ρ(x), we use the simulated density profiles of the test set. Care is required when evaluating the spatial gradients ∇ρ(x), ∇ c_1^⋆(x; [ρ]) and ∇ c_2^⋆(x, x'; [ρ]) due to the amplification of undesired noise, which we reduce by applying a low-pass filter after having taken the numerical derivatives. The volume integrals reduce in planar geometry to ∫ d r⃗ = A ∫ d x, where A is the lateral system area. In Fig. <ref>, three typical profiles for the left and right hand side of Eq. (<ref>) are shown. In all three systems both sides of the equation coincide up to numerical noise due to the required spatial derivatives. Additionally, we define errors via scalar deviations from equality in Eqs. (<ref>) and (<ref>) respectively as e_1 = ‖∇ c_1(x) - A ∫ d x' c_2(x, x') ∇' ρ(x') ‖_∞, e_2 = A^2 ∫ d x ρ(x) ∫ d x' ρ(x') ∇ c_2(x, x'), where ‖·‖_∞ denotes the maximum norm. Panels (a) and (b) of Fig. <ref> depict results for e_1 and e_2 as a function of the mean density ρ̅ = ∫ d r⃗ ρ(r⃗) / V for all 150 density profiles of the test set, where V denotes the volume of the system. The small magnitudes of the observed error values indicate that the neural network satisfies the Noether identities (<ref>) and (<ref>) to very high accuracy. 
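A minimal numerical check of the first sum rule could look as follows; the normalization of the c_2 matrix (and hence whether the lateral area A appears explicitly) depends on how the autodifferentiated kernel is defined, so the `area` argument below is a placeholder assumption, and the low-pass filtering of the gradients mentioned in the text is omitted.

```python
import numpy as np

def noether_error_e1(c1_x, c2_xx, rho_x, dx, area=1.0):
    """e1 = max_x | grad c1(x) - A * int dx' c2(x, x') grad' rho(x') |,
    with the x' integral discretized as a matrix-vector product."""
    grad_c1 = np.gradient(c1_x, dx)
    grad_rho = np.gradient(rho_x, dx)
    rhs = area * (c2_xx @ grad_rho) * dx
    return np.max(np.abs(grad_c1 - rhs))
```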
Outliers can be attributed mostly to the moderate numerical noise of the spatial gradients, see panel (III) in Fig. <ref>, and are no hinderance in practical applications of the neural functional. This confirmation demonstrates that our method transcends the neural network from a mere interpolation device of the simulation training data to a credible standalone theoretical object. The fact that one is able to carry out consistent and performant functional calculus indeed renders c_1^⋆(x; [ρ]) a neural-network-based density functional. Besides functional differentiation, we show next that functional line integration acts as the inverse operation and provides access to the corresponding free energy. Appendix <ref> gives further insight into the symmetry properties of c_2^⋆(x, x'; [ρ]), which serve as a prerequisite for the existence of a generating excess free energy functional ^⋆[ρ]; we recall Eq. (<ref>). §.§ Equation of state and free energy Although the machine learning procedure operates on the level of the one-body direct correlation function, the excess free energy [ρ] is accessible by functional line integration <cit.>: β[ρ] = - ∫_0^1 α∫r⃗ρ(r⃗) c_1(r⃗; [ρ_α]). Here, ρ_α(r⃗) = αρ(r⃗) is a sequence of density profiles that are linearly parametrized by α in the range 0 ≤α≤ 1. The limits are ρ_0(r⃗) = 0 such that [0] = 0, and ρ_1(r⃗) = ρ(r⃗), which is the target density profile that appears as the functional argument on the left hand side of Eq. (<ref>). Other parametrizations of ρ_α(r⃗) are conceivable but change the concrete form of Eq. (<ref>). On the basis of c_1^⋆(x; [ρ]), we implement Eq. (<ref>) via β^⋆[ρ] = - A ∫_0^1 α∫ x ρ(x) c_1^⋆(x; [ρ_α]) and evaluate the integrals numerically; as before A denotes the lateral system area. We first return to bulk systems and illustrate in the following three different routes towards obtaining the bulk equation of state from the neural network. For this, we introduce the excess free energy density as ψ_b(ρ_b) = [ρ_b] / V, where V is the system volume. From the neural functional, the excess free energy density ψ_b^⋆(ρ_b) can be acquired via ^⋆[ρ_b] from functional line integration along a path of bulk densities according to Eq. (<ref>). Alternatively and equivalently, one can simply evaluate the neural direct correlation functional at bulk density ρ_b and due to translational symmetry at arbitrary location (e.g. x = 0) such that c_1^b ⋆ = c_1^⋆(0; [ρ_b]). Simplifying Eq. (<ref>) in bulk reveals that ψ_b^⋆'(ρ_b) = - k_B T c_1^b ⋆, where the prime denotes the derivative with respect to the bulk density argument. The excess free energy density ψ_b^⋆(ρ_b) follows from ordinary numerical integration across bulk densities up to the target value ρ_b. The numerical accuracy to which both routes coincide serves as a further valuable consistency test. Additionally, one obtains the bulk pressure P(ρ_b) from the excess free energy density via P(ρ_b) = ( ψ_b^'(ρ_b) + k_B T ) ρ_b - ψ_b(ρ_b). The pressure is equally accessible from a further route which incorporates previous results for the bulk pair structure via their low-wavelength limits according to <cit.> β. ∂ P/∂ρ_b|_T = β/ρ_b χ_T = 1 - ρ_b c̃_2^b(0) = 1/1 + ρ_b h̃(0) = 1/S(0), where one can identify the isothermal compressibility χ_T = ρ_b (∂ρ_b / ∂ P)_T. From Eq. (<ref>), P(ρ_b) is obtained by evaluation of either of the bulk correlation functions (see Sec. <ref>) in Fourier space at k = 0 for different bulk densities and by subsequent numerical integration towards the target value of ρ_b. 
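A schematic numerical implementation of the functional line integration might look as follows; `c1_profile` again denotes a hypothetical wrapper returning the discretized c_1^⋆(x;[ρ]) profile, `area` is the lateral area A and β = 1/(k_B T). Evaluating the same routine for spatially constant profiles recovers the bulk excess free energy density ψ_b^⋆(ρ_b) discussed above.

```python
import numpy as np

def excess_free_energy(c1_profile, rho, dx, area, beta, n_alpha=50):
    alphas = np.linspace(0.0, 1.0, n_alpha)
    # inner spatial integral: int dx rho(x) c1(x; [alpha * rho]) for each alpha
    integrand = np.array([np.sum(rho * np.asarray(c1_profile(a * rho))) * dx
                          for a in alphas])
    # outer alpha integral by the trapezoidal rule
    dalpha = alphas[1] - alphas[0]
    line_integral = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dalpha
    return -area * line_integral / beta     # excess free energy F_exc[rho]
```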
We compare the results in Fig. <ref>, where the equation of state P^⋆(ρ_b) of the neural network was acquired from functional line integration across bulk systems, cf. Eq. (<ref>), from evaluation of one-body bulk correlation values c_1^b ⋆, cf. Eq. (<ref>), and from the low-wavelength limit of two-body bulk correlations, cf. Eq. (<ref>). One finds that the results of all three routes are consistent with each other and that they match very well the highly accurate Carnahan-Starling equation of state <cit.>. A slight deviation can be noticed when evaluating P^⋆(ρ_b) via Eq. (<ref>), which constitutes the most indirect route detouring to two-body correlations. This may reflect the small discrepancy of the neural functional to simulation results (cf. Fig. <ref>) and the sensitivity of the low-wavelength limit of the static structure factor to remaining finite size effects <cit.>. As already observed for the bulk pair structure in Sec. <ref>, the neural network also clearly outperforms the Percus-Yevick theory for the bulk fluid equation of state. We recall again that neither bulk information nor data for free energies or pressures was given explicitly in the training of the neural network. Additionally, we demonstrate in Appendix <ref> that the neural functional is fit for the application of dimensional crossover <cit.> in order to obtain the bulk equation of state for the two-dimensional hard disk fluid within a reasonable range of packing fractions. For a concise comparison of free energies in inhomogeneous situations, additional reference data has to be acquired from simulations. In our grand canonical setting, thermodynamic integration <cit.> with respect to the chemical potential can be used to measure the grand potential according to Ω[ρ] = - ∫_-∞^μμ' ⟨ N ⟩. Here, the integration starts from an empty system with Ω[0] = 0 and traverses the chemical potential up to the target value μ. One needs to measure the mean number of particles ⟨ N ⟩ in a sufficient number of simulations with intermediate chemical potentials -∞ < μ' ≤μ to evaluate Eq. (<ref>) numerically. The excess free energy then follows directly from [ρ] = Ω[ρ] - k_B T ∫r⃗ρ(r⃗) (lnρ(r⃗) - 1) + ∫r⃗ρ(r⃗) ((r⃗) - μ). Thermodynamic integration according to Eq. (<ref>) has been performed for 22 systems of the test set to yield reference values ^sim for the excess free energy via Eq. (<ref>). The systems were selected to cover a broad range of excess free energy values, and FMT results for were used as a further theoretical estimate for this selection. In Tab. <ref> and Fig. <ref>, we show errors of to the quasi-exact simulation values when calculating the excess free energy via Rosenfeld and White Bear MkII FMT as well as from functional line integration according to Eq. (<ref>) of the neural functional. For both FMT methods, a DFT minimization (cf. Sec. <ref>) is performed to yield a self-consistent density profile ρ(x), which serves as input to the respective analytic FMT expression for [ρ]. Hence we compare consistently equilibrium states (according to the respective theory) corresponding to the same form of the external potential. The comparison reveals that the neural functional significantly outperforms Rosenfeld FMT and still yields slightly more accurate values for the excess free energy than the very reliable White Bear theory. 
Regarding the above described bulk results for the free energy, this behavior is both consistent and expected, as the Rosenfeld and White Bear MkII functionals can be associated with the Percus-Yevick compressibility and Carnahan-Starling bulk equations of state respectively. Still, the test in inhomogeneous systems is a more rigorous one than in bulk, as the full nonlocal functional representation is invoked when providing c_1^⋆(x; [ρ]) with an inhomogeneous density profile as input. Given that the functional line integration of c_1^⋆(x; [ρ]) via Eq. (<ref>) is practically immediate, one can deem ^⋆[ρ] itself a corresponding neural functional for the excess free energy that enables a full description of the thermodynamics of inhomogeneous fluids to high accuracy. As we present below, this quantitative precision is preserved when applying the neural functional in a predictive manner in the self-consistent calculation of density profiles. § PREDICTING INHOMOGENEOUS FLUIDS VIA NEURAL DFT §.§ Going beyond analytic approximations In the previous section, the trained model has been put to test by deriving related quantities such as c_2^⋆(x, x'; [ρ]) from autodifferentiation and ^⋆[ρ] from functional line integration in order to assess its performance against analytic and numerical reference results. We now turn to the application of the neural functional c_1^⋆(x; [ρ]) in the context of the self-consistent determination of density profiles according to the DFT Euler-Lagrange equation. This is achieved by rearranging Eq. (<ref>) to the standard form <cit.> ρ(r⃗) = exp(-β ((r⃗) - μ) + c_1(r⃗; [ρ])). A fixed-point (Picard) iteration with mixing parameter α can be used to determine the density profile from Eq. (<ref>) according to ρ(r⃗) ← (1 - α) ρ(r⃗) + αexp(-β ((r⃗) - μ) + c_1(r⃗; [ρ])). The degree of convergence is determined from the remaining difference of right and left hand side of Eq. (<ref>). With the trained neural functional at hand, one can evaluate the one-body direct correlation function in Eq. (<ref>) via the surrogate c_1^⋆(x; [ρ]) in each iteration step. In the following, the use of c_1^⋆(x; [ρ]) in this context will be referred to as neural DFT. We note two minor technical points concerning the use of the neural functional in the Picard iteration. It was observed that a conservative choice of α is necessary during the first few iterations to ensure numerical stability. After this burn-in, the mixing parameter can be set to usual values (e.g. α = 0.05). Furthermore, the convergence criterion has to be relaxed as compared to typical choices in analytic DFT methods due to the remaining intrinsic uncertainty of c_1^⋆(x; [ρ]). The mean average error after training, cf. Sec. <ref>, provides an estimate for the expected relative uncertainty of the density profile according to Eq. (<ref>). Depending on the specific problem, the error might not decrease any further than that during the iteration (<ref>). Neither of these points caused any practical hinderance in applications. The treatment of Eq. (<ref>) in neural DFT is conceptually not different than in standard DFT methods. However, the model c_1^⋆(x; [ρ]) relieves the theory from being restricted by the available approximations for the one-body direct correlation function as generated from analytic expressions of the excess free energy functional [ρ] via Eq. (<ref>). 
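For orientation, the iteration just described can be condensed into a few lines. The sketch below assumes a wrapper `c1_profile` for the neural functional, a discretized external potential `V_ext` and illustrative parameter values; it indicates the structure of the calculation rather than the actual implementation.

```python
import numpy as np

def neural_dft_picard(c1_profile, V_ext, mu, beta, rho_init, alpha=0.05,
                      alpha_burn_in=0.001, n_burn_in=20, tol=1e-6, max_iter=10_000):
    rho = rho_init.copy()
    for it in range(max_iter):
        a = alpha_burn_in if it < n_burn_in else alpha   # conservative mixing at first
        rho_new = np.exp(-beta * (V_ext - mu) + np.asarray(c1_profile(rho)))
        if np.max(np.abs(rho_new - rho)) < tol:          # remaining self-consistency error
            return rho_new
        rho = (1.0 - a) * rho + a * rho_new              # mixed Picard update
    return rho
```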
We emphasize that, unlike in previous work <cit.>, no analytic ansatz had to be provided and that our method is generic for the determination of a suitable functional from a given model Hamiltonian, thus indeed constituting a “machine learning black box” <cit.> regarding the training procedure. However, in contrast to a closed black box, the inner workings of the resulting neural correlation functional can be inspected very thoroughly via the neural functional calculus laid out above. Also note that, while the model works at the level of the one-body direct correlation function, the free energy is readily available from functional line integration, cf. Sec. <ref>. Lastly, we point out that c_1^⋆(x; [ρ]) captures the entirety of the intrinsic correlations and that further improvements are conceivable by only learning differences to an analytic reference functional. To demonstrate the capabilities of our method, we refrain from this route and show that the trained neural functional alone already exceeds the accuracy of FMT. §.§ Comparison to FMT In the following, we benchmark the self-consistent inhomogeneous density profiles obtained via neural DFT against FMT results. For this comparison, the Rosenfeld <cit.> and White Bear MkII <cit.> FMT functionals are considered and the simulated density profiles are taken as quasi-exact reference data. The FMT functionals are the most profound analytic description of the hard sphere fluid with the White Bear MkII theory being the state-of-the-art treatment of short-ranged intermolecular repulsion in classical DFT. Nevertheless, measurable and systematic deficiencies still remain, e.g. in highly correlated systems <cit.>. We point the reader to Ref. Roth2010 for a thorough account of FMT and to Ref. Sammueller2023 for a very recent quantitative assessment. Note that the tensorial weights of <cit.> to describe hard sphere freezing are not included in our investigation. The comparison is set up as follows. For each hard sphere system of the test set (see Sec. <ref>), we determine the density profile ρ(x) from the Rosenfeld and White Bear MkII FMT functionals as well as from c_1^⋆(x; [ρ]) via the Picard iteration (<ref>) of the Euler-Lagrange Eq. (<ref>). For this, only the known form of the external potential (x) and the value μ of the chemical potential are prescribed. As reference density profiles are available from GCMC simulations, we can evaluate the error Δρ(x) of each of the DFT results relative to the simulation data for ρ(x). From here, different scalar metrics for the quantitative agreement of self-consistent DFT profiles and simulation results are considered. In Fig. <ref>, both global and local error measures for the deviation of FMT as well as neural DFT to simulation data are depicted. For the assessment of the global error, we show the L_2-norm ‖Δρ‖_2 of the discrepancy to the reference profile, which is normalized by the mean density ρ̅ of each system respectively. As the test data covers very dilute to very dense systems, this relative global error measure is plotted as a function of ρ̅ to discern the behavior with respect to varying global average density. Similarly, we define an upper estimate for the relative local error by evaluating the maximum norm ‖Δρ‖_∞ of the density deviation divided by the maximum value ‖ρ‖_∞ of the GCMC density profile. This quantity is resolved against the maximum ‖ρ‖_∞ of the respective inhomogeneous density, thus enabling the detection of local discrepancies, e.g. 
in the vicinity of maxima and discontinuities of the density profile. One recognizes that neural DFT yields substantially better results than the FMT functionals with regard to both error measures. Compared to the Rosenfeld results, both the global and the local error is decreased by approximately an order of magnitude. Surprisingly, even the White-Bear MKII functional is not able to match the accuracy of the neural DFT, which is noticeable especially for large values of ρ̅ and of ‖ρ‖_∞. §.§ Simulation beyond the box A particular advantage of the local nature of the neural functional c_1^⋆(x; [ρ]) is its applicability to systems of virtually arbitrary size. As explained in Sec. <ref>, it is sufficient to provide the density profile within a rather narrow window as input to the neural network to infer the value of the one-body direct correlation function at the center of the density section. The model c_1^⋆(x; [ρ]) can therefore be used directly in the Euler-Lagrange Eq. (<ref>) for the prediction of planar systems of arbitrary length. Due to the low computational demands of solving this equation self-consistently, this method is suitable even in multiscale problems where macroscopic length scales compete with and are influenced by microscopic correlations and packing features. Although one could argue that analytic DFT methods already account for such tasks, importantly the neural functional c_1^⋆(x; [ρ]) acts as a drop-in replica of the (almost) simulation-like description of the intrinsic correlations. Therefore, neural DFT facilitates to fuse simulation data with common DFT methods, thus providing a means to “simulate beyond the box”. Simulation beyond the box is demonstrated in Fig. <ref>, where a system with a length of 1000 σ is considered; the numerical grid size remains unchanged at 0.01 σ. Our setup implies that for colloids of, say, size σ = 1, we have spatial resolution of 10 across the entirety of a system of macroscopic size 1. As a demonstration, similar to the strategy in Sec. <ref>, a sequence of spatially connected randomized potentials is generated, and the chemical potential is set here to μ = 0. Using c_1^⋆(x; [ρ]), we obtain the corresponding density profile straightforwardly from the simple iteration scheme (<ref>). The computational cost for the determination of ρ(x) with neural DFT is negligible as compared to an analogous many-body simulation, which is hardly feasible on such length scales. A second example, which is arguably more relevant from a physical point of view <cit.>, is given in Fig. <ref>, where we show the sedimentation behavior of the hard sphere fluid as obtained with neural DFT. For this, a local chemical potential μ_loc(z) = μ - (z) that decreases linearly with respect to the height z is imposed in a system which is bounded from the bottom (z = 0) and the top (z = 1000 σ) by hard walls. The spatial variation of μ_loc(z) is chosen small enough to enable thermal diffusion across the whole sedimentation column and to yield locally an almost bulk-like behavior except near the upper and lower hard walls. The method reproduces both the highly correlated nature of ρ(z) in the vicinity of the walls as well as its intermediate behavior within the sedimentation column, which follows closely the bulk equation of state (see Sec. <ref>), as one would expect within a local density approximation <cit.>. 
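A minimal setup of such a sedimentation column, reusing the Picard sketch from above, could look as follows; the wall convention, the slope of the linear potential and the chemical potential are our assumptions and merely indicate how such a large-scale calculation is assembled.

```python
import numpy as np

sigma = 1.0
dz = 0.01 * sigma
H = 1000.0 * sigma
z = np.arange(0.0, H + dz, dz)

g_slope = 2.0e-3                      # assumed strength of the linear potential (per sigma)
V_ext = g_slope * z                   # mu_loc(z) = mu - V_ext(z) decreases linearly with z
V_ext[z < 0.5 * sigma] = np.inf       # lower hard wall (one common convention for sphere centres)
V_ext[z > H - 0.5 * sigma] = np.inf   # upper hard wall

rho_init = np.full_like(z, 0.4 / sigma**3)
# rho = neural_dft_picard(c1_profile, V_ext, mu=0.0, beta=1.0, rho_init=rho_init)
```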
§ CONCLUSION AND OUTLOOK In this work, we have outlined and validated a machine learning procedure for representing the local functional map from the density profile to the one-body direct correlation function via a neural network. The resulting neural functional was shown to be applicable as a powerful surrogate in the description of inhomogeneous equilibrium fluids. This was demonstrated for the hard sphere fluid, where we have used GCMC simulations in randomized inhomogeneous planar environments for the generation of training, validation and test data. Density and one-body direct correlation profiles followed respectively from direct sampling and from evaluation of Eq. (<ref>). DFT elevates the role of the one-body direct correlation function c_1(x) to that of an intrinsic functional c_1(x; [ρ]) depending on the density profile ρ(x) but being independent of the external potential. We exploited this fact in the construction of our neural network, which takes as input a local section of the discretized density profile around a fixed location x and outputs the value of the one-body direct correlation functional c_1(x; [ρ]) at that specific location. Establishing a pointwise inference of c_1(x; [ρ]) instead of trying to represent the global functional mapping of the entire one-body profiles comes with various advantages, such as independence of the box size, the correct description of the short-range behavior of c_1(x; [ρ]), and a very significant improvement of training statistics. The nonlinear and nonlocal functional relationship was realized by fully-connected hidden layers with smooth activation functions and a standard supervised training routine was used. The achieved mean average error over the test set was of the same order of magnitude as the noise floor of the simulations, thus being indicative of full representational power of the neural correlation functional within the considered simulation data. Whether the quality of the model can be improved further by performing more extensive sampling to reduce the statistical noise of the simulation profiles remains to be investigated in the future. Additionally, active and reinforcement machine learning techniques could be useful for interleaving the training and simulation process, thereby guiding the generation of reference data in order to explore the space of inhomogeneous systems more efficiently and exhaustively. The neural functional was put to test by verifying numerous physical relations in bulk and in inhomogeneous systems. In particular, it was shown that the two-body direct correlation functional c_2(x, x'; [ρ]) as well as higher-order correlations are accessible from the model via automatic differentiation. In bulk, the pair structure as described by the neural network significantly outperforms the Percus-Yevick theory and is even able to compete with simulation results <cit.>, although no bulk data was used during training. In inhomogeneous situations, the conformance of the neural functional to the thermal Noether sum rules (<ref>) and (<ref>) as well as to spatial symmetry requirements holds to high accuracy. The excess free energy [ρ] is readily and efficiently available via functional line integration of the model according to Eq. (<ref>) and the results agree with those obtained from simulations. The bulk equation of state can be acquired consistently from various routes and its accuracy is comparable to the Carnahan-Starling result. 

Dimensional crossover is feasible for the calculation of the bulk equation of state for the two-dimensional hard disk system. Arguably the most important consequence of the neural functional framework is the applicability of c_1^⋆(x; [ρ]) in the self-consistent calculation of density profiles by solving the Euler-Lagrange Eq. (<ref>) of classical DFT. As the one-body direct correlation function is faithfully represented by the neural network, one is exempted from having to find analytic approximations for c_1(x; [ρ]) or for its generating functional [ρ]. Although FMT provides such approximations for the hard sphere fluid with high precision, we could demonstrate that our neural functional outperforms both the Rosenfeld <cit.> as well as the White Bear MkII <cit.> functional. For this, Eq. (<ref>) was solved self-consistently for all 150 randomized local chemical potentials of the test set to obtain ρ(x), where c_1(x; [ρ]) was given either analytically by FMT or evaluated via c_1^⋆(x; [ρ]). The comparison of the results to the simulated density profiles reveals that neural DFT yields global and local errors that are up to an order of magnitude lower than those of FMT. Furthermore, due to the flexibility that comes with the local functional mapping, the neural network could be used as a means to “simulate beyond the box”. That is, while the training was based solely on simulation data from systems of manageable size, the resulting model c_1^⋆(x; [ρ]) is directly applicable for predictions on much larger length scales. We demonstrated this by imposing a spatial sequence of randomized external potentials on a length of 1000 σ. While the explicit numerical simulation of such a system is comparatively cumbersome, neural DFT offers a way to achieve close to simulation-like accuracy at low computational effort. Furthermore, we have considered a sedimentation column with a height of 1000 σ that is bounded by hard walls. Neural DFT is capable to both resolve microscopically the adsorption at the walls as well as to efficiently capture the long-range density decay with increasing height. The presented fusion of machine learning and DFT can therefore be another useful technique to make headway in the multiscale description of soft matter <cit.>. On the opposite side of the length scale spectrum, it might also be worthwile to consider quantum mechanical approaches, either in the context of ab initio simulation methods for the generation of training data or for cross-fertilization of machine learning ideas, in particular regarding topical applications in quantum DFT <cit.>. While much insight could be gained by considering the well-studied hard sphere fluid, the application of our machine learning procedure is arguably even more useful for particle models that lack satisfactory analytic density functional approximations. Although mean-field descriptions account surprisingly well for soft and attractive contributions <cit.>, e.g. in the Lennard-Jones fluid, analytic efforts to go beyond this approximation are sparse <cit.>. In the future, the application of neural DFT to such thermal systems may prove to be useful either via isothermal training or by providing the temperature as a further input quantity. We expect the general method to hold up even for complex particle models, e.g. containing many-body contributions <cit.> or orientational degrees of freedom as treated within molecular DFT <cit.>, provided that sufficiently accurate training data of sufficient quantity can be generated. 
A proper treatment of the arising phase transitions and interfacial phenomena might be subtle both in simulation as well as from a machine learning perspective. Even though we saw no need for a more sophisticated training procedure in our investigations, it could be useful to consider physics-informed machine learning <cit.> as a technique for enforcing exact physical relations of the underlying problem directly during training. Sum rules in bulk or in inhomogeneous systems, e.g. the thermal Noether identities (<ref>) and (<ref>), might be suitable candidates for this task. Analogous to the evaluation of derivatives in physics-informed neural networks, we have shown the necessary quantities to be accessible by automatic differentiation of the neural functional. When considering nonequilibrium systems, power functional theory (PFT) <cit.> establishes an exact functional many-body framework which is analogous to that of DFT in equilibrium. A central ramification of PFT is the existence of a functional map from the time-dependent one-body density ρ(r⃗, t) and current J⃗(r⃗, t) to the internal force profile f⃗_int(r⃗, t; [ρ, J⃗]), which is in general nonlocal in space and causal in time t. Recent work by <cit.> demonstrated that machine learning this kinematic internal force functional yields highly promising results and overcomes the analytic and conceptual limitations of dynamical density functional theory. In this regard, our method can be put into a more general context as it may be viewed as a mere special case for equilibrium systems where J⃗(r⃗, t) = 0. The topical problem of accurately describing nonequilibrium many-body physics is certainly a natural contender for the application and extension of our neural functional framework, with many practical questions arising, e.g. concerning the generation of training data or the choice of neural network architecture. Lastly, the possibility of extending the machine learning procedure from planar symmetry to more general two-dimensional geometries or even to the full three-dimensional problem is worth contemplating. Especially for the latter, the amount of required training data seems restrictive at first if one considers randomized simulations in the fully inhomogeneous geometry. However, results obtained in the planar case could be leveraged since they already capture the crux of the internal interactions, as was shown in this work. Therefore, it may be possible to supplement the planar data with only a few select higher-dimensional simulations to incorporate the remaining nontrivial effects due to the more general geometry. As data-efficiency will be vital in this case, one might benefit from more extensive data augmentation, and the use of equivariant neural networks <cit.> could provide a way of casting certain symmetries directly into the model architecture. We thank T. Zimmermann, T. Eckert and N. C. X. Stuhlmüller for useful comments. This work is supported by the German Research Foundation (DFG) via Project No. 436306241. § HIGHER-ORDER CORRELATIONS Analogous to Sec. <ref>, we demonstrate that higher-order correlations can be obtained from the neural correlation functional by nested automatic differentiation. This is due to the fact that the hierarchy of direct correlation functions c_n(r⃗, r⃗', …, r⃗^(n); [ρ]), n ≥ 2, is accessible from successive functional derivatives of the one-body direct correlation functional <cit.>, c_n(r⃗, r⃗', …, r⃗^(n-1); [ρ]) = δ^n-1 c_1(r⃗; [ρ])/δρ(r⃗') …δρ(r⃗^(n-1)). 
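In a JAX-like implementation, the nested differentiation could be realized along the following lines; `c1_net` is again a hypothetical scalar network acting on a density window of M points with spacing dx, and the sketch only indicates the mechanics of obtaining the three-body level.

```python
import numpy as np
import jax
import jax.numpy as jnp

def c3_bulk_planar(c1_net, rho_b, dx, M):
    rho = jnp.full(M, rho_b)
    # second functional derivative: delta^2 c1(0;[rho]) / (delta rho(x) delta rho(x'))
    hess = np.asarray(jax.hessian(c1_net)(rho)) / dx**2
    x = (np.arange(M) - M // 2) * dx
    return x, hess      # hess[i, j] approximates the planar c3_b(x_i, x_j)
```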
As illustrated in the main text, translational symmetry can be applied in bulk fluids such that the resulting bulk correlation function c_n^b(r⃗, …, r⃗^(n-2)) = c_n(0, r⃗, …, r⃗^(n-2); [ρ_b]) only incorporates n - 2 remaining position coordinates. We specialize again to the planar geometry of our neural functional and show in Fig. <ref> the three-body bulk correlation function c̅_3^b ⋆(x, x') for a bulk density of ρ_b = 0.7 σ^-3. While the computation of c̅_2^b ⋆(x) is practically immediate via a single reverse mode autodifferentiation pass, going to the three-body correlation function comes at the price of having to evaluate the Hessian of c_1^⋆(x; [ρ]), for which different strategies exist <cit.>. In principle, one can proceed by nesting autodifferentiation layers to obtain further members of the hierarchy (<ref>), albeit being restricted by the practicability of the actual evaluation and the efficacy of the result. Note that the computational effort at the three-body level is by no means restrictive and that growing numerical demands are expected when considering higher-order correlations. The computation and analysis of c̅_3^b(x, x') might be especially useful for more complex fluid models, e.g. containing internal three-body interactions <cit.>. We compare c̅_3^b ⋆(x, x') to analytic approximations based on FMT. For both the Rosenfeld and the White Bear MkII functional, the three-body bulk direct correlation function is analytic in Fourier space. We point the reader to Ref. Rosenfeld1989 for an expression of the original Rosenfeld result in terms of vectorial weight functions and to Refs. Kierlik1990,Phan1993 for an equivalent representation via scalar weights. As the weight functions remain unchanged, the White Bear MkII result follows immediately from the modification of the excess free energy density as laid out in Ref. HansenGoos2006. A cumulant expansion of the bulk result of the three-body direct correlation function in Fourier space can be transformed to real space analytically, which in planar geometry gives c̅_3^b(x, x') = - b R^4/aexp(-x^2 + x x' - x'^2/a R^2), where the width parameter a and the prefactor b are determined by a = ν/κ3/553 - 25 η + 8 η^2/30 + 2 η + 5 η^2 - η^3, b = κ8 π/3 √(3)30 + 2 η + 5 η^2 - η^3/(1 - η)^5, with the packing fraction η = πρ_b / 6. The correction factors ν and κ are set to unity in the Rosenfeld FMT and attain the forms ν = 53 - 35 η + η^2 + 5 η^3/53 - 25 η + 8 η^2, κ = 30 - 6 η/30 + 2 η + 5 η^2 - η^3, in the White Bear MkII case. The comparison reveals that the form of the neural three-body bulk correlation function c̅_3^b ⋆(x, x') is plausible and that it captures genuine features which go beyond both FMT descriptions. The Rosenfeld FMT yields a large discrepancy in the core region x, x' ≈ 0, which is significantly unterestimated as compared to the results from the neural functional and from the White Bear theory. We recall that, as in Sec. <ref>, the tensorial weights of <cit.> have not been used in the FMT functionals and that their inclusion might be particularly relevant on the level of higher-order correlations. In this vein, investigating members of the direct correlation hierarchy (<ref>) with the neural correlation functional could be a valuable aid for testing and refining analytic FMT functionals. § SPATIAL SYMMETRY OF THE NEURAL TWO-BODY DIRECT CORRELATION FUNCTIONAL A further consistency test of c_2^⋆(x, x'; [ρ]) arises due to its expected symmetry with respect to an interchange of the planar position coordinates x and x'. 
Recall that the excess free energy functional [ρ] generates the two-body direct correlation function according to c_2(r⃗, r⃗'; [ρ]) = - δ^2 β[ρ]/δρ(r⃗) δρ(r⃗'), see Eqs. (<ref>) and (<ref>). One can directly recognize from the symmetry of the second functional derivative in Eq. (<ref>) that c_2(r⃗, r⃗'; [ρ]) = c_2(r⃗', r⃗; [ρ]) must hold. On the basis of the neural direct correlation functional in planar geometry, assessing the validity of the identity c_2^⋆(x, x'; [ρ]) = c_2^⋆(x', x; [ρ]) is a highly nontrivial test. This is due to the fact that c_2^⋆(x, x'; [ρ]) evaluated at certain positions x and x' follows from automatic differentiation of c_1^⋆(x; [ρ]), where the input density window is centered around the location x, see Sec. <ref>. On the other hand, when formally evaluating c_2^⋆(x', x; [ρ]), where the arguments x and x' are now reversed, the density window is centered around x', hence constituting a generally very different and a priori unrelated input profile. One can expect Eq. (<ref>) to be recovered only if the physical implications of Eq. (<ref>) are captured correctly by the neural functional. Note that Eq. (<ref>) is a necessary condition for the existence of a unique neural excess free energy functional ^⋆[ρ], which can practically be obtained via functional line integration, see Sec. <ref>. We exemplify in Fig. <ref> that the neural two-body direct correlation functional c_2^⋆(x, x'; [ρ]) obtained via autodifferentiation of c_1^⋆(x; [ρ]) indeed satisfies the symmetry requirement (<ref>) to very high accuracy. § NEURAL EQUATION OF STATE FOR HARD DISKS VIA DIMENSIONAL CROSSOVER Although the neural functional c_1^⋆(x; [ρ]) was acquired explicitly for the three-dimensional hard sphere fluid, one can use dimensional crossover techniques to obtain bulk results for the two-dimensional hard disk system. This is facilitated by investigating the behavior of the hard sphere fluid under narrow confinement, which constitutes a quasi-two-dimensional scenario. With this method, one obtains the equation of state for the hard disk fluid from c_1^⋆(x; [ρ]), as we demonstrate in the following. We proceed similar to Sec. <ref> and utilize Eq. (<ref>) to express the pressure P(ρ_b) via the excess free energy density ψ_b(ρ_b), which we aim to compute for a range of bulk densities ρ_b. Whereas c_1^⋆(x; [ρ]) was evaluated for the three-dimensional bulk fluid at spatially constant density, cf. Eq. (<ref>), here a suitable density profile ρ_2D(x) is constructed as input to the neural direct correlation functional in order to emulate narrow planar confinement. For this, we choose ρ_2D(x) = ρ_b/x_wΘ(|x - x_w/2|) with the Heaviside function Θ(·); note that Eq. (<ref>) is a Dirac series and yields the Dirac distribution for x_w → 0. The neural direct correlation functional is then evaluated at the center of this assumed slit, and the values c_1^⋆(0; [ρ_2D]) are used analogous to Sec. <ref> for the determination of P_2D^⋆(ρ_b). The equation of state for the associated two-dimensional hard disk system follows formally for x_w → 0. As this limit is not directly accessible in practice, we assess the obtained values for finite but small slit widths 0.3 ≤ x_w / σ≤ 1 and extrapolate to x_w = 0 via a quadratic fit. The resulting equation of state P_2D^⋆(ρ_b) for the two-dimensional hard disk fluid as obtained from this dimensional crossover on the basis of the neural network is shown in Fig. <ref>. 
We additionally display analytic equations of state from scaled particle theory <cit.> and by <cit.> which serve as reference. One recognizes that reasonable results can be achieved for low and medium densities, but that deviations to analytic results become noticeable for ρ_b > 0.7 σ^-2. Nevertheless, it is both surprising and reassuring that the neural functional is capable of predicting correlations in narrow confinement, as no such situations were explicitly included in the training data. Recall that hard walls were imposed only at the borders of the simulation box of length L = 20 σ and that the inhomogeneous external potential within the simulation domain consisted solely of Fourier modes and of piecewise linear functions, see Eq. (<ref>). Presumably, improvements over the results presented in Fig. <ref> could be obtained especially for large densities by including situations of very narrow confinement explicitly in the training data. From our outset, the successful achievement of a viable two-dimensional equation of state serves as a demonstration that c_1^⋆(x; [ρ]) indeed captures the intricate functional relationship of the underlying physical problem instead of acting as a mere interpolation tool with respect to the encountered training data.
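The narrow-slab construction and the x_w → 0 extrapolation can be summarized schematically as follows; as before, `c1_profile` denotes a hypothetical wrapper around the trained network, and the resulting c_1 values would enter the pressure route described in the main text in the same way as in the bulk case.

```python
import numpy as np

def c1_quasi_2d(c1_profile, rho_b_2d, x_w, dx, M):
    x = (np.arange(M) - M // 2) * dx
    # slab of width x_w at the window centre carrying the 2D bulk density rho_b_2d
    rho = np.where(np.abs(x) < 0.5 * x_w, rho_b_2d / x_w, 0.0)
    return float(np.asarray(c1_profile(rho))[M // 2])    # c1 evaluated at the slit centre

def c1_2d_limit(c1_profile, rho_b_2d, dx, M,
                widths=(0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0)):
    vals = [c1_quasi_2d(c1_profile, rho_b_2d, w, dx, M) for w in widths]
    coeffs = np.polyfit(widths, vals, deg=2)             # quadratic fit in the slit width
    return np.polyval(coeffs, 0.0)                       # extrapolation to x_w = 0
```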
http://arxiv.org/abs/2307.05856v1
20230712003348
High-order finite element method for atomic structure calculations
[ "Ondřej Čertík", "John E. Pask", "Isuru Fernando", "Rohit Goswami", "N. Sukumar", "Lee A. Collins", "Gianmarco Manzini", "Jiří Vackář" ]
physics.atom-ph
[ "physics.atom-ph", "physics.comp-ph" ]
High-order finite element method for atomic structure calculations
Ondřej Čertík, John E. Pask, Isuru Fernando, Rohit Goswami, N. Sukumar, Lee A. Collins, Gianmarco Manzini, Jiří Vackář
August 12, 2023
=============================================================================================
We introduce an open source code that implements a high-order finite element solver for the radial Schrödinger, Dirac, and Kohn-Sham equations. The formulation accommodates various mesh types, such as uniform or exponential, and the convergence can be systematically controlled by increasing the number, or polynomial order, of the finite element basis functions. The Dirac equation is solved using a squared Hamiltonian approach to eliminate spurious states. To address the slow convergence of the κ=±1 states due to divergent derivatives at the origin, we incorporate known asymptotic forms into the solutions. We achieve a high level of accuracy (10^-8 Hartree) for total energies and eigenvalues of heavy atoms such as uranium in both Schrödinger and Dirac Kohn-Sham solutions. We provide detailed convergence studies and the computational parameters required to attain commonly required accuracies. Finally, we compare our results with known analytic results as well as the results of other methods. In particular, we calculate benchmark results for atomic numbers (Z) from 1 to 92, verifying current benchmarks. We demonstrate significant speedup compared to a state-of-the-art shooting solver. An efficient, modular Fortran 2008 implementation is provided under an open source, permissive license, including examples and tests, wherein particular emphasis is placed on the independence (no global variables), reusability, and generality of the individual routines.
§ INTRODUCTION
Over the past three decades, Density Functional Theory (DFT) <cit.> has established itself as a cornerstone of modern materials research, enabling the understanding, prediction, and control of a wide variety of materials properties from the first principles of quantum mechanics, with no empirical parameters. However, the solution of the required Kohn-Sham equations <cit.> is a formidable task, which has given rise to a number of different solution methods <cit.>. At the heart of the majority of methods in use today, whether for isolated systems such as molecules or extended systems such as solids and liquids, lies the solution of the Schrödinger and/or Dirac equations for the isolated atoms composing the larger molecular or condensed matter systems of interest. Particular challenges arise in the context of relativistic calculations, which require solving the Dirac equation, since spurious states can arise due to the unbounded nature of the Dirac Hamiltonian operator and inconsistencies in discretizations derived therefrom <cit.>. A number of approaches have been developed to avoid spurious states in the solution of the Dirac equation. Shooting methods, e.g., <cit.> and references therein, avoid such states by leveraging known asymptotic forms to target desired eigenfunctions based on selected energies and numbers of nodes. However, due to the need for many trial solutions to find each eigenfunction, efficient implementation while maintaining robustness is nontrivial. In addition, convergence parameters such as the distance of grid points from the origin must be carefully tuned. Basis set methods, e.g., <cit.> and references therein, offer an elegant alternative to shooting methods, solving for all states at once by diagonalization of the Schrödinger or Dirac Hamiltonian in the chosen basis.
However, due to the unbounded spectrum of the Dirac Hamiltonian, spurious states have been a longstanding issue <cit.>. Many approaches have been developed to avoid spurious states, with varying degrees of success. These include using different bases for the large and small components of the Dirac wavefunction <cit.>, modifying the Hamiltonian <cit.>, and imposing various boundary constraints <cit.>. In the finite-difference context, defining large and small components of the Dirac wavefunction on alternate grid points <cit.>, replacing conventional central differences with asymmetric differences <cit.>, and adding a Wilson term to the Hamiltonian <cit.> have proven effective in eliminating spurious states. While in the finite element (FE) context, the use of different trial and test spaces in a stabilized Petrov-Galerkin formulation has proven effective in mitigating the large off-diagonal convection (first derivative) terms and absence of diffusion (second derivative) terms causing the instability <cit.>. In the this work, we present the open-source code, , for the solution of the Schrödinger, Dirac, and Kohn-Sham equations in a high-order finite element basis. The FE basis enables exponential convergence with respect to polynomial order while allowing full flexibility as to choice of radial mesh. To eliminate spurious states in the solution of the Dirac equation, we square the Dirac Hamiltonian operator <cit.>. Since the square of the operator has the same eigenfunctions as the operator itself, and the square of its eigenvalues, determination of the desired eigenfunctions and eigenvalues is immediate. Most importantly, since the square of the operator is bounded from below, unlike the operator itself, it is amenable to direct solution by standard variational methods, such as FE, without modification. This affords simplicity, robustness, and well understood convergence. Moreover, squaring the operator rather than modifying it and/or boundary conditions upon it, ensures key properties are preserved exactly, such as convergence to the correct non-relativistic (Schrödinger) limit with increasing speed of light <cit.>. Squaring the operator also stabilizes the numerics naturally, without approximation, by creating second-derivative terms. To accelerate convergence with respect to polynomial order, we incorporate known asymptotic forms as r→0 into the solutions: rather than solving for large and small Dirac wavefunction components P(r) and Q(r), we solve for P(r)/r^α and Q(r)/r^α, with α based on the known asymptotic forms for P and Q as r→0. This eliminates derivative divergences and non-polynomial behavior in the vicinity of the origin and so enables rapidly convergent solutions in a polynomial basis for all quantum numbers κ, including κ=±1. By combining the above ideas, is able to provide robust, efficient, and accurate solutions for both Schrödinger and Dirac equations. The package is MIT licensed and is written in Fortran, leveraging language features from the 2008 standard, with an emphasis on facilitating user extensions. Additionally, it is designed to work within the modern Fortran ecosystem and leverages the build system. There are several benchmark calculations that serve as tests. The package supports different mesh-generating methods, including support for a uniform mesh, an exponential mesh, and other meshes defined by nodal distributions and derivatives. Multiple quadrature methods have been implemented, and their usage in the code is physically motivated. 
Gauss–Jacobi quadrature is used to accurately integrate problematic integrals for the Dirac equation and the Poisson equation as well as for the total energy in the Dirac-Kohn-Sham solution, whereas Gauss–Legendre quadrature is used for the Schrödinger equation. To ensure numerical accuracy, we employ several techniques, using Gauss–Lobatto quadrature for the overlap matrix to recover a standard eigenproblem, precalculation of most quantities, and parsimonious assembly of a lower-triangular matrix with a symmetric eigensolver. With these considerations we show that the resulting code outperforms the state-of-the-art code , using fewer parameters for convergence while retaining high accuracy of 10^-8 Hartree in total energy and eigenvalues for uranium (both Dirac and Schrödinger) and all lighter atoms. The remainder of the paper is organized as follows. Section <ref> describes the electronic structure equations solved. This is followed by Sections <ref> and <ref>, which detail the unified finite element solution for the radial Schrödinger and Dirac equations. Section <ref> details the numerical techniques employed to efficiently construct and solve the resulting matrix eigenvalue problem, including mesh and quadrature methods. In Section <ref>, we present results from analytic tests and benchmark comparisons against the shooting solver , followed by a brief discussion of findings. Finally, in Section <ref>, we summarize our main conclusions. § ELECTRONIC STRUCTURE EQUATIONS Under the assumption of a central potential, we establish the conventions used for the electronic structure problems under the purview of in this section. Starting from the nonrelativistic radial Schrödinger equation and its relativistic counterpart, the Dirac equation, we couple these to the Kohn–Sham equations with a Poisson equation for the Hartree potential. The interested reader may find more details of this formulation in <cit.>. By convention, we use Hartree atomic units throughout the manuscript. §.§ Radial Schrödinger equation Recall that the 3D one-electron Schrödinger equation is given by (-∇^2+V( x))ψ( x) = Eψ( x). When the potential considered is spherically symmetric, i.e., V( x):=V(r), the eigenstates of energy and angular momentum can be written in the form ψ_nlm( x)=R_nl(r) Y_lm( x/r), where n is the principal quantum number, l is the orbital angular momentum quantum number, and m is the magnetic quantum number. It follows that R_nl(r) satisfies the radial Schrödinger equation -(r^2 R_nl'(r))' + (r^2 V + l(l+1))R_nl(r) =E r^2 R_nl(r). The functions ψ_nlm( x) and R_nl(r) are normalized as ∫ |ψ_nlm( x)|^2 [3]x = 1, ∫_0^∞ R_nl^2(r) r^2 r = 1. §.§ Radial Dirac equation The one-electron radial Dirac equation can be written as P_nκ'(r) = -κ/rP_nκ(r)+(E-V(r)/c+2c)Q_nκ(r), Q_nκ'(r) = -(E-V(r)/c)P_nκ(r)+κ/rQ_nκ(r), where P_nκ(r) and Q_nκ(r) are related to the usual large g_nκ(r) and small f_nκ(r) components of the Dirac equation by P_nκ(r) =rg_nκ(r), Q_nκ(r) =rf_nκ(r). A pedagogical derivation of these results can be found in the literature, for example in <cit.>. We follow the solution labeling in <cit.>, in which the relativistic quantum number κ is determined by the orbital angular momentum quantum number l and spin quantum number s=±1 on the basis of the total angular momentum quantum number j=l±1/2 using κ = -l-1 for j=l+, i.e. s=+1, l for j=l-, i.e. s=-1. 
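For reference, the mapping between (l, s) and κ, with j = l ± 1/2, can be stated compactly; the snippet below is a sketch of this convention only, not code from the package.

```python
def kappa(l: int, s: int) -> int:
    # s = +1 corresponds to j = l + 1/2, s = -1 to j = l - 1/2
    return -l - 1 if s == +1 else l
```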
By not including the rest mass energy of an electron, the energies obtained from the radial Dirac equation can be compared to the non-relativistic energies obtained from the Schrödinger equation. The normalization of P_nκ(r) and Q_nκ(r) is ∫_0^∞ (P_nκ^2(r) + Q_nκ^2(r)) r = 1. We note that both P_nκ and Q_nκ are solutions of homogeneous equations and are thus only unique up to an arbitrary multiplicative constant. §.§ Poisson equation The 3D Poisson equation for the Hartree potential V_H due to electronic density n is given by ∇^2V_H( x) = -4π n( x). For a spherical density n( x) = n(r), this becomes 1 r^2(r^2 V_H')' = V_H”(r) + 2 rV_H'(r) = -4π n(r), where n(r) is the radial particle (number) density, normalized such that N = ∫ n( x) [3]x = ∫_0^∞ 4π n(r) r^2 r, where N is the number of electrons. §.§.§ Initial conditions Substituting (<ref>) into (<ref>) and integrating, we obtain lim_r→∞ r^2 V_H'(r) = -N, from which it follows that the asymptotic behavior of V_H'(r) is V_H'(r) ∼ -N/r^2, r →∞. Integrating (<ref>) and requiring V_H → 0 as r →∞ then gives the corresponding asymptotic behavior for V_H(r), V_H(r) ∼N/r, r →∞. For small r, the asymptotic behavior can be obtained by expanding n(r) about r=0: n(r) = ∑_j=0^∞c_j r^j. Substituting into Poisson equation (<ref>) gives (r^2 V_H')' = -4π∑_j=0^∞c_j r^j+2. Integrating and requiring V_H(0) to be finite then gives V_H'(r) = -4π∑_j=0^∞ c_j r^j+1/j+3, with linear leading term, so that we have V_H'(r) ∼ r, r → 0. Integrating (<ref>) then gives V_H(r) = -4π∑_j=0^∞ c_j r^j+2/(j+2)(j+3) + C, with leading constant term C = V_H(0) determined by Coulomb's law: V_H(0) = 4 π∫_0^∞r n(r) r. Finally, from (<ref>) we have that V_H'(0) = 0. So V_H→0 as r→∞ and V_H'(0)=0. This asymptotic behavior provides the initial values and derivatives for numerical integration in both inward and outward directions. §.§ Kohn-Sham equations The Kohn–Sham equations consist of the radial Schrödinger or Dirac equations with an effective potential V(r) = V_(r) given by (see, e.g., <cit.>) V_ = V_H + V_xc + v, where V_H is the Hartree potential given by the solution of the radial Poisson equation (<ref>), V_xc is the exchange-correlation potential, and v = -Z/r is the nuclear potential. The total energy is given by E[n] = T_s[n] + E_H[n] + E_xc[n] + V[n], the sum of kinetic energy T_s[n] = ∑_nl f_nlε_nl -4π∫ V_(r) n(r) r^2 r, where ε_nl are the Kohn-Sham eigenvalues, Hartree energy E_H[n] = 2π∫ V_H(r) n(r) r^2 r, exchange-correlation energy E_xc[n] = 4π∫ε_xc(r; n) n(r) r^2 r, where ε_xc(r; n) is the exchange and correlation energy density, and Coulomb energy V[n] = 4π∫ v(r) n(r) r^2 r = -4π Z∫ n(r) r r with electronic density in the nonrelativistic case given by n(r) = 14π∑_nl f_nlP_nl^2(r) r^2, where P_nl is the radial wavefunction in (<ref>) and f_nl the associated electronic occupation. In the relativistic case, the electronic density is given by n(r) = 14π∑_nls f_nlsP_nls^2(r) + Q_nls^2(r) r^2, where P_nls and Q_nls are the two components of the Dirac solution ((<ref>), (<ref>)) and f_nls is the occupation. In both the above cases, n(r) is the electronic particle density [electrons/volume], everywhere positive, as distinct from the electronic charge density ρ(r) [charge/volume]: ρ(r) = -n(r) in atomic units. We adopt a self-consistent approach <cit.> to solve for the electronic structure. 
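The resulting iteration, detailed in the remainder of this section, can be summarized schematically as follows; `solve_radial`, `hartree_potential` and `xc_potential` are placeholders for the corresponding Schrödinger/Dirac, Poisson and exchange-correlation routines, and the simple fixed mixing parameter stands in for the adaptive scheme described below.

```python
import numpy as np

def scf(r, Z, V_init, solve_radial, hartree_potential, xc_potential,
        alpha=0.5, tol=1e-10, max_iter=200):
    v_nuc = -Z / r
    V_in = V_init.copy()
    for _ in range(max_iter):
        eigs, n = solve_radial(V_in)                    # eigenvalues and density n(r)
        V_out = hartree_potential(r, n) + xc_potential(n) + v_nuc
        if np.max(np.abs(V_out - V_in)) < tol:          # self-consistency reached
            return eigs, n, V_out
        V_in = alpha * V_out + (1.0 - alpha) * V_in     # linear mixing of the potential
    raise RuntimeError("SCF did not converge")
```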
Starting from an initial density n_ and corresponding potential V_, we solve the Schrödinger or Dirac equation to determine the wavefunctions R_nl or spinor components P and Q, respectively. From these, we construct a new density n_ and potential V_. Subsequently, we update the input density and potential, using for example a weighting parameter α∈[0, 1]: n_→α n_ + (1-α) n_, V_→α V_ + (1-α) V_. This process is repeated until the difference of V_ and V_ and/or n_ and n_ is within a specified tolerance, at which point self-consistency is achieved. This fixed-point iteration is known as the self-consistent field (SCF) iteration. We employ an adaptive linear mixing scheme, with optimized weights for each component of the potential to construct new input potentials for successive SCF iterations. In order to reduce the number of SCF iterations, we use a Thomas–Fermi (TF) approximation <cit.> for the initial density and potential: V(r) = -Z_eff(r)/r, Z_eff(r) = Z(1 + α√(x) + β x e^-γ√(x))^2 e^-2α√(x), x = r (128 Z/9π^2)^1/3, α = 0.7280642371, β = -0.5430794693, γ = 0.3612163121. The corresponding charge density is then ρ(r) = -1 3 π^2(-2 V(r))^32. We demonstrate the methodology with the local density approximation (LDA) and relativistic local-density approximation (RLDA) exchange and correlation functionals. We note that the modular nature of the code and interface mechanism make it straightforward to incorporate functionals from other packages such as the library of exchange correlation functions, <cit.>. The parameters used in are taken from the same NIST benchmark data <cit.> as the current state-of-the-art program for an accurate comparison. For the local density approximation, V_xc(r;n) = n(nε_xc^LD(n)), where the exchange and correlation energy density ε_xc^LD can be written as <cit.> ε_xc^LD(n)=ε_x^LD(n)+ε_c^LD(n), with electron gas exchange term <cit.> ε_x^LD(n)=-34π(3π^2 n)^13 and Vosko-Wilk-Nusair (VWN) <cit.> correlation term ε_c^LD(n)∼A2{log(y^2 Y(y)) +2b Qarctan(Q 2y+b) . . -by_0 Y(y_0)[ log((y-y_0)^2 Y(y)) +2(b+2y_0) Qarctan(Q 2y+b) ] }, in which y=√(r_s), Y(y)=y^2+by+c, Q=√(4c-b^2), y_0=-0.10498, b=3.72744, c=12.9352, A=0.0621814, and r_s=(34π n)^13 is the Wigner-Seitz radius, which gives the mean distance between electrons. In the relativistic (RLDA) case, a correction to the LDA exchange energy density and potential is given by MacDonald and Vosko <cit.>: ε_x^RLD(n) = ε_x^LD(n) R, R = 1-32(βμ-log(β+μ)β^2)^2, V_x^RLD(n) = V_x^LD(n) S, S = 3log(β+μ) 2 βμ - , where μ=√(1+β^2) and β=(3π^2n)^13 c =-4πε_x^LD(n) 3c. § SOLUTION METHODOLOGY Having described the Schrödinger, Dirac, and Kohn-Sham electronic structure equations to be solved, and key solution properties, we now detail our approach to solutions in a high-order finite-element basis. §.§ Radial Schrödinger equation To recast (<ref>) in a manner that will facilitate our finite element solution methodology, we make the substitution P_nl(r) = rR_nl(r) to obtain the canonical radial Schrödinger equation in terms of P_nl(r) is -1/2 P_nl”(r) + (V(r) + l(l+1)/2r^2)P_nl(r) = E P_nl(r) . The corresponding normalization of P(r) is ∫_0^∞ P_nl^2(r) r = 1. §.§.§ Asymptotics The known asymptotic behavior of P_nl as r→0 is <cit.> P_nl(r) ∼ r^l+1, where P_nl, being a solution of a homogeneous system of equations, is only unique up to an arbitrary multiplicative constant. We use (<ref>) as our starting point but from now on drop the nl index from P_nl for simplicity: -1/2 P”(r) + (V(r) + l(l+1)/2r^2)P(r) = E P(r). 
In order to facilitate rapid convergence and application of desired boundary conditions in a finite element basis, we generalize (<ref>) to solve for P̃=P(r)/r^α for any chosen real exponent α≥0 by substituting P=r^αP̃ to obtain -1/21 r^2α(r^2αP̃'(r))' + (V(r) + l(l+1) - α(α-1)/2r^2) P̃(r) = E P̃(r). We note that: * For α=0 we get P̃(r) = P(r)/r^0=P(r) and (<ref>) reduces to (<ref>). * For α=1 we get P̃(r) = P(r)/r^1=R(r) and (<ref>) reduces to (<ref>). * Finally, for α=l+1, (which corresponds to the known asymptotic (<ref>)) we obtain P̃(r) = P(r)/r^l+1, which tends to a nonzero value at the origin for all l and (<ref>) becomes -1/21 r^2(l+1)(r^2(l+1)P̃'(r))' + V(r) P̃(r) = E P̃(r). §.§.§ Weak form To obtain the weak form, we multiply both sides of (<ref>) by a test function v(r) and integrate from 0 to ∞. In addition, to facilitate the construction of a symmetric bilinear form, we multipy by a factor r^2α to get ∫_0^∞[ -1/2 (r^2αP̃'(r))' v(r) + (V(r) + l(l+1) - α(α-1)/2r^2) P̃(r) v(r) r^2α ] r = E ∫_0^∞P̃(r) v(r) r^2α r. We can now integrate by parts to obtain ∫_0^∞[1/2 r^2αP̃'(r) v'(r) + (V(r) + l(l+1) - α(α-1)/2r^2) P̃(r) v(r) r^2α] r -[r^2αP̃'(r) v(r)]_0^∞ = E ∫_0^∞P̃(r) v(r) r^2α r. Setting the boundary term to zero, [r^2αP̃'(r) v(r)]_0^∞ = 0, we then obtain the desired symmetric weak formulation ∫_0^∞[P̃'(r) v'(r) + (V(r) + l(l+1) - α(α-1)/2r^2) P̃(r) v(r) ] r^2α r = E ∫_0^∞P̃(r) v(r) r^2α r. As discussed below, by virtue of our choice of α and boundary conditions on v(r), the vanishing of the boundary term (<ref>) imposes no natural boundary conditions on P̃. §.§.§ Discretization To discretize (<ref>), we introduce finite element basis functions ϕ_i(r) to form trial and test functions P̃(r) = ∑_j=1^N c_j ϕ_j(r) and v(r) = ϕ_i(r), and substitute into (<ref>). In so doing, we obtain a generalized eigenvalue problem ∑_j=1^N H_ij c_j = E ∑_j=1^N S_ij c_j, where H_ij is the Hamiltonian matrix, H_ij = ∫_0^∞[ϕ_i'(r) ϕ_j'(r) + ϕ_i(r) (V + l(l+1) - α(α-1)/2r^2) ϕ_j(r) ] r^2α r, and S_ij is the overlap matrix, S_ij = ∫_0^∞ϕ_i(r) ϕ_j(r) r^2α r. §.§.§ Boundary conditions Since we seek bound states, which vanish as r→∞, we impose a homogeneous Dirichlet boundary condition on v(r) and P̃(r) at r=∞ by employing a finite element basis {ϕ_i} satisfying this condition. For α=0, P̃(r)=P(r)=0 at r=0 for all quantum numbers l according to (<ref>). Thus we impose a homogeneous Dirichlet boundary condition on v(r) and P̃(r) at r=0 by employing a finite element basis {ϕ_i} satisfying this condition, whereupon the boundary term (<ref>) as a whole vanishes, consistent with the weak formulation (<ref>). For α=1, P̃(r)= P(r)/r=R(r)=0 at r=0 for all quantum numbers ℓ>0 according to (<ref>). However, P̃(r)≠0 at r=0 for ℓ=0. Thus a homogeneous Dirichlet boundary condition cannot be imposed for ℓ=0. However, for α>0, the r^2α factor in (<ref>) vanishes at r=0, whereupon the boundary term as a whole vanishes, consistent with the weak formulation (<ref>), regardless of boundary condition on v(r) and P̃(r) at r=0. Hence for α > 0, we impose a homogeneous Dirichlet boundary condition on v(r) and P̃(r) at r=∞ only, by employing a finite element basis {ϕ_i} satisfying this condition. This is sufficient due to the singularity of the associated Sturm-Liouville problem. Similarly, for α=ℓ+1, P̃(r)=P(r)/r^ℓ+1≠0 at r=0 for all quantum numbers ℓ according to (<ref>) and, since α>0, we again impose a homogeneous Dirichlet boundary condition on v(r) and P̃(r) at r=∞ only. 
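To illustrate the structure of the resulting generalized eigenproblem, the following deliberately low-order sketch assembles linear (P1) elements for the α = 0 form on an exponential radial mesh and solves for a hydrogen-like potential. The actual code uses high-order elements and the quadrature rules discussed earlier, so this is an illustration of the formulation rather than of the implementation; all numerical choices are ours.

```python
import numpy as np
from scipy.linalg import eigh

def radial_schroedinger_fe(Z=1, l=0, r_max=50.0, n_elem=400):
    t = np.linspace(0.0, 6.0, n_elem + 1)
    r = r_max * (np.exp(t) - 1.0) / (np.exp(6.0) - 1.0)     # exponential radial mesh
    N = len(r)
    H = np.zeros((N, N)); S = np.zeros((N, N))
    for e in range(n_elem):
        h = r[e + 1] - r[e]
        rm = 0.5 * (r[e] + r[e + 1])                        # midpoint quadrature point
        W = -Z / rm + l * (l + 1) / (2.0 * rm**2)           # V(r) + l(l+1)/(2 r^2)
        K = 0.5 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])  # (1/2) int phi_i' phi_j' dr
        M = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])    # int phi_i phi_j dr
        idx = np.ix_([e, e + 1], [e, e + 1])
        H[idx] += K + W * M
        S[idx] += M
    # homogeneous Dirichlet boundary conditions at r = 0 and r = r_max: drop end nodes
    E = eigh(H[1:-1, 1:-1], S[1:-1, 1:-1], eigvals_only=True)
    return E[:3]    # lowest eigenvalues; approx -0.5, -0.125, -0.056 for Z = 1, l = 0

print(radial_schroedinger_fe())
```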
In practice, the code defaults to α=0 with homogeneous Dirichlet boundary conditions at r=0 and r=∞. Note that, while α=0 yields a rapidly convergent formulation in the context of the Schrödinger equation, it does not suffice in the context of the Dirac equation, as discussed in Section <ref>. §.§ Radial Dirac equation For small r, the central potential has the form V(r) = -Z/r + Z_1 + O(r), which gives rise to the following asymptotic behavior at the origin for Z ≠ 0 (Coulombic) <cit.>: P_nκ(r) ∼ r^β, Q_nκ(r) ∼ r^β c(β+κ)/Z, β = √(κ^2-(Z/c)^2). For Z=0 (nonsingular) the asymptotic is, for κ < 0 <cit.> P_nκ(r) ∼ r^l+1, Q_nκ(r) ∼ r^l+2 (E + Z_1)/(c(2l+3)), and for κ > 0 P_nκ(r) ∼ -r^l+2 (E + Z_1)/(c(2l+1)), Q_nκ(r) ∼ r^l+1. For large r, assuming V(r) → 0 as r→∞, the asymptotic is <cit.> P_nκ(r) ∼ e^-λr, Q_nκ(r) ∼ -√(-E/(E + 2c^2)) P_nκ(r), λ = √(c^2 - (E+c^2)^2/c^2) = √(-2E - E^2/c^2). Consistent with the coupled equations (<ref>) and (<ref>), the Dirac Hamiltonian can be written as (see, e.g., <cit.>) H = [ V(r) c(-d/dr+κ/r); c(d/dr+κ/r) V(r) - 2c^2; ]. The corresponding eigenvalue problem is then H [ P(r); Q(r); ] = E [ P(r); Q(r); ]. To discretize, we expand the solution vector in a basis: [ P(r); Q(r) ] = ∑_j=1^2N c_j [ ϕ_j^a(r); ϕ_j^b(r) ], where a and b denote the upper and lower components of the vector. We have 2N basis functions (degrees of freedom), N for each wave function component. As such, the function P(r) is expanded in terms of basis functions ϕ_i^a(r) and the function Q(r) in terms of ϕ_i^b(r). However, these two expansions are not independent but are rather coupled via coefficients c_i. §.§.§ Squared Hamiltonian formulation The above eigenvalue formulation of the radial Dirac equation can be solved using the finite element method. However, due to the fact that the operator is not bounded from below, one obtains spurious eigenvalues in the spectrum <cit.>. To eliminate spurious states, we work with the square of the Hamiltonian <cit.>, which is bounded from below, rather than the Hamiltonian itself. Since the eigenfunctions of the square of an operator are the same as those of the operator itself, and the eigenvalues are the squares, the approach is straightforward and enables direct and efficient solution by the finite element method. Let us derive the equations for the squared radial Dirac Hamiltonian. First we shift the energy by c^2 to obtain the relativistic energy, making the Hamiltonian more symmetric, using (<ref>) to get H + 𝕀c^2 = [ V(r) + c^2 c(-d/dr+κ/r); c(d/dr+κ/r) V(r) - c^2; ]. Then we square the Hamiltonian using (<ref>) to obtain the following equations to solve for P(r) and Q(r): (H+𝕀c^2)^2 [ P(r); Q(r); ] = (E+c^2)^2 [ P(r); Q(r); ], where 𝕀 is the 2×2 identity matrix. As can be seen, the eigenvectors P(r) and Q(r) are the same as before but the eigenvalues are now equal to (E+c^2)^2 and the original non-relativistic energies E can be obtained by taking the square root of these new eigenvalues and subtracting c^2. §.§.§ Weak form We follow a similar approach as for the Schrödinger equation: instead of solving for P(r) and Q(r), we solve for P̃(r) = P(r)/r^α and Q̃(r) = Q(r)/r^α, which introduces a parameter α that can be chosen to facilitate rapid convergence and the application of the desired boundary conditions in a finite element basis. We then multiply the eigenvectors by r^α to obtain P(r) and Q(r). 
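Because the squared operator shares its eigenvectors with H + 𝕀c^2 and has eigenvalues (E+c^2)^2, recovering the energies is a simple post-processing step. A minimal Python sketch (the clipping of small negative round-off values and the default value of c are our own choices, not part of the published method):

```python
import numpy as np

def energies_from_squared(lams, c=137.035999084):
    """Recover E from eigenvalues lams = (E + c^2)^2 of the squared Dirac Hamiltonian.

    As described in the text, E is obtained by taking the square root of the
    squared-operator eigenvalues and subtracting c^2; tiny negative round-off
    in lams is clipped to zero before the square root.
    """
    lams = np.clip(np.asarray(lams, dtype=float), 0.0, None)
    return np.sort(np.sqrt(lams) - c**2)
```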
Now we can write the finite element formulation as A = ∫_0^∞ [ ϕ_i^a(r) ϕ_i^b(r); ] r^α(H+𝕀c^2)^2 r^α[ ϕ_j^a(r); ϕ_j^b(r) ] r, S = ∫_0^∞[ ϕ_i^a(r) ϕ_i^b(r); ] r^αr^α[ ϕ_j^a(r); ϕ_j^b(r) ] r, A x = (E+c^2)^2 S x. This is a generalized eigenvalue problem, with eigenvectors x (coefficients of (P̃(r),Q̃(r))), eigenvalues (E+c^2)^2, and 2 × 2 block matrices A and S. Note that the basis functions from both sides were multiplied by r^α due to the substitutions P(r) = r^αP̃(r) and Q(r) = r^αQ̃(r). We can denote the middle factor in the integral for A as H' = r^α(H+𝕀c^2)^2 r^α and compute it using (<ref>) with rearrangement of r^α factors as follows: H' = r^α(H+𝕀c^2)^2 r^α = r^2α[ V(r) + c^2 c (-r+κ-α r); c (r+κ+α r) V(r) - c^2; ]^2. Let H'= [ H^11 H^12; H^21 H^22 ]. Then H^11 = r^2α(V(r) + c^2)^2 + r^2αc^2(-[2]r-2α rr+ Φ), H^12 = r^2αc(2(κ-α) r V(r) - 2V(r) r - V'(r) ), H^21 = r^2αc(2(κ+α) r V(r) + 2V(r) r + V'(r) ), H^22 = r^2α(V(r) - c^2)^2 + r^2αc^2 (-[2]r-2α rr+ Φ), where Φ = (κ(κ+1)-α(α-1))/r^2. The full derivation of these expressions is given in Appendix <ref>. The usual approach when applying the finite element method to a system of equations is to choose the basis functions in the following form (see <cit.>): ϕ_i^a(r) = π_i(r) for i=1, …, N, 0 for i=N+1, …, 2N. ϕ_i^b(r) = 0 for i=1, …, N, π_i-N(r) for i=N+1, …, 2N. Substituting (<ref>) into (<ref>), we obtain the following expressions after simplification (Appendix <ref>): A = [ A^11 A^12; A^21 A^22 ], S = [ S^11 0; 0 S^22; ], with components given by A^11_ij = ∫_0^∞( c^2 π_i'(r) π_j'(r) + ((V(r) + c^2)^2 + c^2 Φ) π_i(r) π_j(r) ) r^2αd r, A^22_ij = ∫_0^∞( c^2 π_i'(r) π_j'(r) + ((V(r) - c^2)^2 + c^2 Φ) π_i(r) π_j(r) ) r^2αd r, A^12_ij = ∫_0^∞ c V(r) (+π_i'(r)π_j(r) - π_i(r) π_j'(r) + 2κ rπ_i(r) π_j(r) ) r^2αd r, A^21_ij = ∫_0^∞ c V(r) (-π_i'(r)π_j(r) + π_i(r) π_j'(r) + 2κ rπ_i(r) π_j(r) ) r^2αd r, S^11_ij = S^22_ij = ∫_0^∞π_i(r) π_j(r) r^2αd r, where Φ = κ(κ+1)-α(α-1)/r^2. These are the expressions implemented in the code. §.§.§ Boundary conditions To construct a symmetric weak form, the following two boundary terms are set to zero in the derivation (Appendix <ref>): c^2 r^2απ_i(r) π_j'(r) |^∞_0 = 0 , c V(r) r^2απ_i(r) π_j(r) |^∞_0 = 0 . As in the Schrödinger case, since we seek bound states which vanish as r→∞, we impose a homogeneous Dirichlet boundary condition on the test function v(r) and solutions P̃(r) and Q̃(r) at r=∞ by employing a finite element basis {π_i} satisfying this condition. Unlike the Schrödinger case, however, the appropriate exponent α and boundary condition at r=0 depends on the regularity of the potential V(r). For nonsingular potentials V(r), P(r)∼ r^ℓ + 1 and Q(r)∼ r^ℓ+2 as r→0 for relativistic quantum numbers κ<0 while P(r)∼ r^ℓ+2 and Q(r)∼ r ^ℓ+1 for κ>0 according to (<ref>) and (<ref>). Hence, as in the Schrödinger case, for α=0, P̃(r)=P(r)=0 and Q̃(r)=Q(r)=0 at r=0 for all quantum numbers κ and ℓ. Thus we impose a homogeneous Dirichlet bounday condition on v(r), P̃(r), and Q̃(r) at r=0 by employing a finite element basis {π_i} satisfying this condition, whereupon the boundary terms (<ref>) vanish, consistent with the weak formulation (<ref>). This is the default in the code for such potentials. For singular potentials V(r) with leading term -Z/r (Coulombic), however, P(r) and Q(r) have leading terms varying as r^β as r→0 for all relativistic quantum numbers κ according to (<ref>). 
However, while β>1 for |κ|>1, we have 0.74<β<0.99998 for κ=± 1 (for 1≤ Z ≤ 92), so that P(r) and Q(r) have non-polynomial behavior at small r and divergent derivatives at r=0, leading to numerical difficulties for methods attempting to compute them directly. To address this issue, we leverage the generality of the formulation (<ref>) to solve for P̃(r) = P(r)/r^α and Q̃(r) = Q(r)/r^α with α = β, rather than solving for P(r) and Q(r) directly. With α=β, P̃(r) and Q̃(r) have polynomial behavior at small r and bounded nonzero values at r=0 for all κ, including κ=±1, thus eliminating the aforementioned numerical difficulties completely and facilitating rapid convergence in a polynomial basis. Finally, for α = β, the r^2α factors in the boundary terms (<ref>) vanish at r=0, whereupon the boundary terms as a whole vanish, consistent with the weak formulation (<ref>), regardless of boundary condition on v(r), P̃(r), and Q̃(r) at r=0. Hence, for α=β, we impose a homogeneous Dirichlet boundary condition at r=∞ only, by employing a finite element basis {π_i} satisfying this condition. This is the default in the code for such potentials. For integrals involving non-integer α, we employ Gauss-Jacobi quadrature for accuracy and efficiency, while for integrals involving only integer exponents, we use Gauss-Legendre quadrature with the exception of overlap integrals where we employ Gauss-Lobatto quadrature to obtain a diagonal overlap matrix. § RESULTS AND DISCUSSION To demonstrate the accuracy and performance of the implementation of the above described finite element formulation, we present results for fixed potentials as well as self-consistent DFT calculations, with comparisons to analytic results where available and to the state-of-the-art solver otherwise. Code outputs are collected in Appendix <ref>. §.§ Coulombic systems The accuracy of the Schrödinger and Dirac solvers was verified using the Coulomb potential V = -Z/r for Z=92 (uranium). Eigenvalues are given by the corresponding analytic formula <cit.>. All eigenvalues with n < 7 are used for the study. The reference eigenvalues here are from the analytic solutions: E_nl=-Z^2/2n^2 for the Schrödinger equation, and E_nκ = c^2/√(1+(Z/c)^2/(n-|κ|+β)^2)-c^2, β =√(κ^2-(Z/c)^2) for the Dirac equation. Figure <ref> shows the convergence of the Schrödinger total energy error with respect to the polynomial order p for different numbers of elements N_e. The graph, when observed for a given number of elements, forms a straight line on the log-linear scale, showing that the error decreases exponentially with the polynomial order p until it reaches the numerical precision limit of approximately 10^-9. For five elements, we considered the behavior of the error with respect to r_max. Figure <ref> shows the total energy error. It is observed that r_max≥ 10 results in an error of 10^-8 or less. The eigenvalues converge to 10^-9 for r_max≥ 10, as depicted in Figure <ref>. The theoretical convergence rate for the finite element method is given by N_e^-2p. Figure <ref> shows the error with respect to N_e, juxtaposed with the theoretical convergence represented as a dotted line. The slope of the solid lines is used to determine the rate of convergence. In all instances, we observe that the theoretical convergence rate is achieved. For polynomial orders p≤8, it is attained asymptotically, while for p>8, it approaches the limiting numerical precision of around 10^-9. Figure <ref> shows the error in the Dirac total energy with respect to p for various N_e. 
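The analytic reference eigenvalues quoted above are straightforward to tabulate. The following Python sketch (Hartree atomic units; the numerical value of c is an assumed standard constant) reproduces the reference values against which the errors in this section are measured.

```python
import numpy as np

C = 137.035999084  # speed of light in atomic units (assumed value)

def schroedinger_coulomb_E(n, Z):
    """Non-relativistic hydrogen-like eigenvalue E_nl = -Z^2 / (2 n^2)."""
    return -Z**2 / (2.0 * n**2)

def dirac_coulomb_E(n, kappa, Z, c=C):
    """Relativistic Coulomb eigenvalue (rest energy subtracted), as given above."""
    beta = np.sqrt(kappa**2 - (Z / c)**2)
    return c**2 / np.sqrt(1.0 + (Z / c)**2 / (n - abs(kappa) + beta)**2) - c**2

# e.g. the deepest level of hydrogen-like uranium (Z = 92):
print(schroedinger_coulomb_E(n=1, Z=92), dirac_coulomb_E(n=1, kappa=-1, Z=92))
```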
We find the same exponential relationship between the error and polynomial order p. As before, for five elements, we consider the error with respect to r_max. Figure <ref> shows the total Dirac energy error, and it is observed that r_max≥ 10 results in an error of 10^-8 or less. The eigenvalues converge to 10^-8 for r_max≥ 10, as depicted in Figure <ref>. Figure <ref> shows the error as N_e is increased alongside the theoretical convergence represented as a dotted line. In all instances, we observe that the theoretical convergence rate is achieved. For polynomial orders p≤8, it is attained asymptotically, while for p>8, it approaches the limiting numerical precision of around 10^-9. §.§ Quantum harmonic oscillator Next, we consider a nonsingular potential: the harmonic oscillator potential given by V(r)=1/2ω^2r^2. For the Schrödinger equation, we take ω=1 and compare against the exact values given by E_nl = ω(2n-l-1/2). Figure <ref> shows the total Schrödinger energy error with respect to the polynomial order p for various N_e values. For fixed N_e, the straight line on the log-linear graph shows the exponential dependency of the error on p until hitting the numerical precision limit ( 10^-10). The effect of r_max on the error was tested with five elements. Figure <ref> indicates that r_max≥ 10 gives a total energy error of 10^-10 or lower. Figure <ref> shows that the eigenvalues converge to 10^-11 under the same condition. The dependence of the error on N_e, together with the theoretical convergence rate (N_e^-2p), is shown in Figure <ref>. The slope of the solid lines confirms that the theoretical convergence rate is achieved. It is asymptotically achieved for p≤8, while for p>8, it approaches the limiting numerical precision of approximately 10^-9. For the Dirac equation, comparisons are with respect to results. Figure <ref> shows an exponential error decrease with respect to p for various N_e. A similar pattern is observed with r_max≥ 10, reducing the total Dirac energy error to 10^-8 or less (Figure <ref>). Eigenvalue convergence to 10^-8 is also seen (Figure <ref>). The theoretical convergence rate (N_e^-2p) is confirmed in Figure <ref>, attained asymptotically for p≤8 and reaching a numerical precision limit of  10^-9 for p>8. §.§ DFT calculations The accuracy of the DFT solvers using both Schrödinger and Dirac equations is compared against results for the challenging case of uranium. The non-relativistic Schrödinger calculation yields the following electronic configuration: 1s^2 2s^2 2p^6 3s^2 3p^6 3d^10 4s^2 4p^6 4d^10 4f^14 5s^2 5p^6 5d^10 5f^3 6s^2 6p^6 6d^1 7s^2. With the Dirac solver, the l-shell occupation splits according to the degeneracy of the j=l+1/2 and j=l-1/2 subshells. Figures <ref> and <ref> demonstrate an exponential decrease in total energy error with increasing p for various N_e. As r_max increases to ≥ 25 for five elements, the error decreases to 10^-8 or less (Figures <ref> and <ref>). Convergence of eigenvalues to 10^-9 is seen at r_max≥ 30 (Figures <ref> and <ref>). In Figures <ref> and <ref>, the theoretical convergence rate (N_e^-2p) manifests when N_e≥ 12, below which numerical instabilities prevent convergence. §.§ Numerical considerations The mesh parameters used to achieve 10^-8 a.u. accuracy for the DFT Schrödinger and Dirac calulations are shown in Table <ref>. The mesh parameters used for 10^-6 a.u. accuracy are shown in Table <ref>. 
§.§ Benchmarks The presented implementation is written in Fortran and runs on every platform with a modern Fortran compiler. To get an idea of the speed, we benchmarked against on a laptop with an Apple M1 Max processor using GFortran 11.3.0. We carry out the uranium DFT calculation to 10^-6 a.u. accuracy in total energy and all eigenvalues. The timings are as follows: § SUMMARY AND CONCLUSIONS We have presented a robust and general finite element formulation for the solution of the radial Schrödinger, Dirac, and Kohn–Sham equations of density functional theory; and provided a modular, portable, and efficient Fortran implementation, , along with interfaces to other languages and full suite of examples and tests. To eliminate spurious states in the solution of the Dirac equation, we work with the square of the Hamiltonian rather than the Hamiltonian itself. Additionally, to eliminate convergence difficulties associated with divergent derivatives and non-polynomial variation in the vicinity of the origin, we solve for P̃=P/r^α and Q̃ = Q/r^α rather than for P and Q directly. We then employ a high-order finite element method to solve the resulting Schrödinger, Dirac, and Poisson equations which can accommodate any potential, whether singular Coulombic or finite, and any mesh, whether linear, exponential, or otherwise. We have demonstrated the flexibility and accuracy of the associated code with solutions of Schrödinger and Dirac equations for Coulombic and harmonic oscillator potentials; and solutions of Kohn–Sham and Dirac–Kohn–Sham equations for the challenging case of uranium, obtaining energies accurate to 10^-8 a.u., thus verifying current benchmarks <cit.>. We have shown detailed convergence studies in each case, providing mesh parameters to facilitate straightforward convergence to any desired accuracy by simply increasing the polynomial order. At all points in the design of the associated code, we have tried to emphasize simplicity and modularity so that the routines provided can be straightforwardly employed for a range of applications purposes, while retaining high efficiency. We have made the code available as open source to facilitate distribution, modification, and use as needed. We expect the present solvers will be of benefit to a broad range of large-scale electronic structure methods that rely on atomic structure calculations and/or radial integration more generally as key components. § ACKNOWLEDGEMENTS We would like to thank Radek Kolman, Andreas Klöckner and Jed Brown for helpful discussions. This work performed, in part, under the auspices of the U.S. Department of Energy by Los Alamos National Laboratory under Contract DE-AC52-06NA2539 and U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. RG was partially supported by the Icelandic Research Fund, grant number 217436052. § APPENDIX: DERIVATIONS FOR THE SQUARED RADIAL DIRAC FINITE ELEMENT FORMULATION §.§ Components of the Squared Radial Dirac Hamiltonian Starting from (<ref>), we have H', H' = r^2α[ V(r) + c^2 c (-r+(κ-α) r); c (r+(κ+α) r) V(r) - c^2; ]^2. 
Let r^-2α H'= [ G^11 G^12; G^21 G^22; ], where G^11 = (V(r) + c^2)^2 + c^2 (-r+(κ-α) r) (r+(κ+α) r), G^12 = (V(r) + c^2) c (-r+(κ-α) r) + c (-r+(κ-α) r) (V(r) - c^2), G^21 = c (r+(κ+α) r) (V(r) + c^2) + (V(r) - c^2) c (r+(κ+α) r), G^22 = (V(r) - c^2)^2 + c^2 (r+(κ+α) r) (-r+(κ-α) r) We now simplify each term to obtain G^11f = (V(r) + c^2)^2 f + c^2 (-r+(κ-α) r) (r+(κ+α) r) f, G^11f = (V(r) + c^2)^2 f + c^2 (-[2]fr+(κ-α) rfr+ (κ^2-α^2) f r^2 -r(κ+α)f r). Recall that -r(κ+α)f r = -(κ+α) ( 1 rfr - f r^2), -r(κ+α)f r = -(κ+α) rfr + (κ+α)f r^2. Substituting (<ref>) in (<ref>), with Φ=(κ(κ + 1) - α(α - 1))/r^2 we have G^11f = (V(r) + c^2)^2 f + c^2 (-[2]fr+(κ-α) rfr+ (κ^2-α^2) f r^2. .-(κ+α) rfr + (κ+α)f r^2) = (V(r) + c^2)^2 f + c^2 (-[2]fr-2α rfr+ Φ f), So we obtain G^11 = (V(r) + c^2)^2 + c^2 (-[2]r-2α rr+ Φ). For G^22, we have G^22 = (V(r) - c^2)^2 + c^2 (r+(κ+α) r) (-r+(κ-α) r) = (V(r) - c^2)^2 + c^2 (-r+(-κ-α) r) (r+(-κ+α) r). Since (-r+(κ-α) r) (r+(κ+α) r) = (-[2]r-2α rr. .+ (κ(κ + 1) - α(α - 1))/r^2), (-r+(-κ-α) r) (r+(-κ+α) r) = (-[2]r-2α rr. .+ (-κ(-κ+1)-α(α-1)) r^2), = (-[2]r-2α rr. .+ (κ(κ-1)-α(α-1)) r^2). Therefore, after substituting (<ref>) in (<ref>) G^22 = (V(r) - c^2)^2 + c^2 (-[2]r-2α rr+ (κ(κ-1)-α(α-1)) r^2). For G^12, we have G^12 f = (V(r) + c^2) c (-r+(κ-α) r)f + c (-r+(κ-α) r) (V(r) - c^2)f, = c(κ-α) r(V(r) + c^2 + V(r) - c^2)f - (V(r) + c^2) c fr -c (V(r) - c^2)fr, = c(2(κ-α) r V(r)f - (V(r) + c^2) fr - (V(r) - c^2)fr - V'(r)f ), = c(2(κ-α) r V(r)f - 2V(r) fr - V'(r)f ), so G^12 = c(2(κ-α) r V(r) - 2V(r) r - V'(r) ). Similarly G^21f = c (r+(κ+α) r) (V(r) + c^2)f + (V(r) - c^2) c (r+(κ+α) r)f = c(κ+α) r(V(r) + c^2 + V(r) - c^2)f + (V(r) - c^2) c fr +c (V(r) + c^2)fr = c(2(κ+α) r V(r)f + (V(r) - c^2) fr + (V(r) + c^2)fr + V'(r)f ) = c(2(κ+α) r V(r)f + 2V(r) fr + V'(r)f ) which leads to G^21 = c(2(κ+α) r V(r) + 2V(r) r + V'(r) ). §.§ Weak formulation of the squared Hamiltonian for the radial Dirac Starting from (<ref>), A = ∫_0^∞[ ϕ_i^a(r) ϕ_i^b(r); ] H' [ ϕ_j^a(r); ϕ_j^b(r); ] r, A = [ A^11 A^12; A^21 A^22; ] and also ϕ_i^a(r) = π_i(r) for i=1, …, N, 0 for i=N+1, …, 2N. ϕ_i^b(r) = 0 for i=1, …, N, π_i-N(r) for i=N+1, …, 2N. With Φ=(κ(κ + 1) - α(α - 1))/r^2 we have A^11_ij = ∫_0^∞π_i(r)( r^2α(V(r) + c^2)^2 + r^2αc^2( - [2]r - 2α/rr + Φ) ) π_j(r)d r, A^12_ij = ∫_0^∞π_i(r)r^2αc( 2(κ - α)/rV(r) - 2V(r)r - V'(r) )π_j(r)d r, A_ij^21 = ∫_0^∞π_i(r)r^2αc( 2(κ + α)/rV(r) + 2V(r)r + V'(r) )π_j(r)d r, A_ij^22 = ∫_0^∞π_i(r)( r^2α( V(r) - c^2 )^2 + r^2αc^2( -[2]r - 2α/rr + Φ) )d r. The second term in (<ref>) and (<ref>) can be simplified as ∫_0^∞π_i(r) r^2αc^2(-[2]r-2α rr)π_j(r)d r = ∫_0^∞π_j(r)r c^2 (r^2απ_i(r)r + 2α r^2α-1π_i(r) )d r - π_i(r)(r^2αc^22α rr)π_j(r) - 0π_i(r) r^2α c^2 π_j(r)r|^∞_0 = ∫_0^∞π_j(r)r c^2 r^2απ_i(r)rd r. Therefore, A^11_ij = ∫_0^∞ r^2α( π_i(r) ((V(r) + c^2)^2 + c^2 Φ)π_j(r) + π_j'(r) c^2 π_i'(r))d r A^22_ij = ∫_0^∞ r^2α( π_i(r) ((V(r) - c^2)^2 + c^2 Φ)π_j(r) + π_j'(r) c^2 π_i'(r))d r which implies that A^11 and A^22 are symmetric. 
The off diagonal terms can be simplified by rewriting the derivative of the potential V' using integration by parts A^12_ij = ∫_0^∞π_i(r) r^2αc(2(κ-α) r V(r) - 2V(r) r - V'(r) ) π_j(r) d r = ∫_0^∞π_i(r) r^2αc( 2(κ-α) r V(r) - 2V(r) r) π_j(r)d r + ∫_0^∞π_i(r) r^2αc( - V'(r) ) π_j(r)d r = ∫_0^∞π_i(r) r^2αc( 2(κ-α) r V(r) - 2V(r) r) π_j(r)d r + ∫_0^∞ V(r) ( π_i(r) r^2αc π_j(r) )' d r + 0V(r)π_i(r) r^2α c π_j(r) |^∞_0 = ∫_0^∞π_i(r) r^2αc( 2(κ-α) r V(r) - 2V(r) r) π_j(r)d r + ∫_0^∞ c V(r) ( π_i'(r)π_j(r) + π_i(r)π_j'(r) + 2π_iπ_jα r) r^2αd r = ∫_0^∞ r^2α c V(r) ( - π_i(r) π_j'(r) + 2κ rπ_i(r) π_j(r) + π_i'(r) π_j(r) )d r. Similarly, A^21_ij = ∫_0^∞π_i(r) r^2αc(2(κ+α) r V(r) + 2V(r) r + V'(r) ) π_j(r) d r = ∫_0^∞ r^2α c V(r) ( + π_i(r) π_j'(r) + 2κ rπ_i(r) π_j(r) - π_i'(r) π_j(r))d r By exchanging the ij indices, we see that A^12_ij = A^21_ji. Since A^11 and A^22 are symmetric and A^12_ij = A^21_ji, the finite element matrix A is symmetric. The last four equations for A^11, A^22, A^12 and A^21 are the equations that are implemented in the code. § APPENDIX: CONVERGED RUNS FOR SYSTEMS The Coulomb potential results and the harmonic Schrödinger results are compared to the analytic results. The remaining systems are compared against  <cit.>. §.§ Schrödinger equation with a Coulomb potential with Z=92 §.§ Dirac equation with a Coulomb potential with Z=92 §.§ Schrödinger equation with a harmonic oscillator potential §.§ Dirac equation with a harmonic oscillator potential §.§ DFT with the Schrödinger equation for uranium §.§ DFT with the Dirac equation for uranium
http://arxiv.org/abs/2307.07261v1
20230714102603
Numerical evaluation of oscillatory integrals via automated steepest descent contour deformation
[ "A. Gibbs", "D. P. Hewett", "D. Huybrechs" ]
math.NA
[ "math.NA", "cs.NA", "65D30, 30E20, 41A60" ]
Numerical evaluation of oscillatory integrals via automated steepest descent contour deformation A. Gibbs, D. P. Hewett, D. Huybrechs August 12, 2023 ======================================================================================================================================= Steepest descent methods combining complex contour deformation with numerical quadrature provide an efficient and accurate approach for the evaluation of highly oscillatory integrals. However, unless the phase function governing the oscillation is particularly simple, their application requires a significant amount of a priori analysis and expert user input, to determine the appropriate contour deformation, and to deal with the non-uniformity in the accuracy of standard quadrature techniques associated with the coalescence of stationary points (saddle points) with each other, or with the endpoints of the original integration contour. In this paper we present a novel algorithm for the numerical evaluation of oscillatory integrals with general polynomial phase functions, which automates the contour deformation process and avoids the difficulties typically encountered with coalescing stationary points and endpoints. The inputs to the algorithm are simply the phase and amplitude functions, the endpoints and orientation of the original integration contour, and a small number of numerical parameters. By a series of numerical experiments we demonstrate that the algorithm is accurate and efficient over a large range of frequencies, even for examples with a large number of coalescing stationary points and with endpoints at infinity. As a particular application, we use our algorithm to evaluate cuspoid canonical integrals from scattering theory. A Matlab implementation of the algorithm is made available and is called PathFinder. § INTRODUCTION In this paper we consider numerical evaluation of the integral I = ∫_Γ f(z)^ω g(z)z, where Γ is a contour in ℂ, possibly starting and/or ending at infinity, f and g are functions of a complex variable, and ω>0 is a frequency parameter. Such integrals arise in numerous application areas, particularly in wave phenomena and quantum mechanics, and are generally challenging to evaluate numerically, especially when ω is large, because the presence of the exponential factor ^ω g(z) means that the integrand may undergo rapid oscillations and/or significant variations in amplitude along the integration contour. When f and g are analytic, Cauchy's theorem provides the possibility of deforming the integration contour so as to make numerical evaluation easier. This is the basis of steepest descent (SD) methods, in which one aims to deform Γ onto a contour, or, more typically, a union of contours, which we term the steepest descent (SD) deformation, on which ℜ[g(z)] is constant, so that the exponential factor ^ω g(z) is no longer oscillatory. By the Cauchy-Riemann equations, these contours coincide with the steepest descent curves of -ℑ[g(z)], and they connect endpoints of the original integration contour, valleys at infinity (sectors in which the integrand decays rapidly as |z|→∞), and stationary points of g, which are points ξ∈ℂ at which g'(ξ)=0. [Stationary points are often referred to as “saddle points” because they are saddle points of the functions ℜ[g(z)] and ℑ[g(z)], which cannot possess local maxima or minima by the maximum modulus principle.] 
Along each SD contour, away from stationary points the integrand typically decays exponentially, with the rate of decay increasing with increasing ω, and as ω→∞ the value of the integral is dominated by local contributions close to the endpoints of Γ and any stationary points traversed by the SD deformation. In the asymptotic steepest descent method (described e.g. in <cit.>), one exploits this to obtain an asymptotic expansion for the integral, valid as ω→∞, by performing a local Taylor expansion of the integrand around the endpoints and relevant stationary points, and reducing the local integrals along the SD contours to simpler integrals that can be expressed in terms of known special functions. In the numerical steepest descent (NSD) method (described e.g. in <cit.>) one evaluates the integrals along the SD contours numerically. This involves numerically tracing an appropriate segment of each SD contour in the SD deformation and applying suitable numerical quadrature rules to evaluate the associated contributions to the integral. In principle, NSD is a highly accurate and efficient method for evaluating integrals of the form (<ref>) for moderate or large ω. Indeed, under appropriate assumptions, the NSD method outputs approximations which, for a fixed number of quadrature points N, are asymptotic to (<ref>) as ω→∞, with the asymptotic accuracy improving with increasing N (see, e.g., <cit.>). Furthermore, if f and g are sufficiently well behaved it can also be the case that the NSD approximation converges to (<ref>) as N→∞, for fixed ω>0, with a cost that remains bounded as ω→∞. In practice, however, applying the NSD method to an integral of the form (<ref>) often requires significant expert user input. This is because: () Determining the SD contour deformation corresponding to a given g and Γ requires careful a priori analysis. () Parametrizing SD contours from or near stationary points, and evaluating integrals along them, is fraught with numerical difficulties, especially when stationary points are close to other stationary points or endpoints of Γ. The issues described in () and () are particularly troublesome when one wishes to evaluate (<ref>) for multiple instances of a phase function g(z)=g(z,𝐜) depending on a set of parameters 𝐜∈^q. This is because, firstly, the number and location of the stationary points, and the nature of the SD deformation, have to be determined for each different value of 𝐜, and, secondly, stationary points may coalesce as 𝐜 approaches certain regions in parameter space, leading to a non-uniformity in the accuracy of the resulting NSD approximations. The problem of stationary point coalescence in the context of NSD was studied in detail in <cit.> in the special case of the cubic phase function g(z,c)=z^3/3-cz, for c∈, which for c≠ 0 has a pair of order one stationary points which coalesce as c→0 (at z=0) into a single stationary point of order two for c=0.[The order of a stationary point ξ is the multiplicity of ξ as a root of g'.] In this case, the SD deformation and contour parametrization were carried out manually by analytically inverting the phase (illustrating (P1)), but the resulting integrals were found to be nearly singular for small c, leading to poor accuracy of standard NSD approximations (illustrating (P2)). 
It was shown in <cit.> how to construct a family of non-standard quadrature rules for this integral which perform uniformly well for c≈ 0 using complex-valued Gaussian quadrature, producing quadrature nodes that in general lie off the SD deformation. In principle, similar rules could be developed for more complicated coalescences involving higher order stationary points and/or endpoints of Γ. However, for each type of coalescence a bespoke quadrature rule would have to be developed, and a general catalogue of such rules is not yet available in the literature. In contrast to <cit.>, our aim is not to develop an optimized method for a specific instance of (<ref>), but rather to present a relatively simple algorithm that can evaluate (<ref>) accurately, for a general class of f and g, without the need for expert user input or a priori analysis, even in the case of coalescing stationary points, thus addressing problems (P1) and (P2). Our specific focus in this paper is on the case where f is entire and g is a polynomial. The extension of our approach to more general cases where f and/or g have pole or branch point singularities is the subject of ongoing research. Necessarily, in aiming for generality and robustness we will sacrifice some efficiency. However, our method is designed to be rapidly convergent as N→∞ with approximately ω-independent cost, and the fact that this is realised in practice is demonstrated by extensive numerical experiments in <ref>. Our algorithm follows the basic principles of NSD, combining complex contour deformation with numerical quadrature. However, in contrast to standard NSD our algorithm does not trace SD contours directly from stationary points. Instead, stationary points are enclosed in a bounded “non-oscillatory region” within which the integrand is guaranteed to undergo at most a fixed number of oscillations. The original contour Γ is replaced by a “quasi-SD deformation” comprising a union of straight-line contours in the non-oscillatory region, for which numerical quadrature is straightforward, and SD contours outside the non-oscillatory region, on which standard NSD quadrature techniques can be applied. By excluding a neighbourhood of the stationary points from the region in which SD contours are traced, our algorithm avoids the problems mentioned in () associated with stationary-point/stationary-point and/or stationary-point/endpoint coalescence. This not only “uniformizes” the accuracy of our algorithm compared to standard NSD, but it also enables us to tackle the problem () by automating the contour deformation step. For the latter, we first perform low accuracy SD contour tracing outside the non-oscillatory region to build a graph describing the global connections (via SD contours) between the endpoints of Γ, the different components of the non-oscillatory region, and the valleys at infinity, and then determine the quasi-SD deformation using a shortest path algorithm, before refining the accuracy of the SD contour tracing at the quadrature stage. One other problem with standard NSD is that it typically degenerates as ω→ 0, because the quadrature points diverge to infinity <cit.>. This issue has been addressed in the special case g(z)=z for bounded Γ in <cit.>; however, it remains an open problem for general g(z). Our algorithm is well-behaved in the limit as ω→ 0 for general polynomial g(z), since it reduces to standard non-oscillatory quadrature for sufficiently small ω for any bounded Γ. 
Our algorithm is implemented in the open-source Matlab code “PathFinder”, available at <cit.>. The basic user input to the code is a function handle for the amplitude , the coefficients of the polynomial phase , endpoints and (complex numbers, or angles in the case of infinite endpoints), the frequency parameter , and a parameter controlling the number of quadrature points to be used. Approximating the integral (<ref>) using PathFinder can be done with the following Matlab command: Here is an optional input for which the user should supply a Boolean array (whose default value is ) such that (respectively ) is if the endpoint (resp. ) is infinite and if it is finite. Examples of PathFinder code will be given in <ref>. Advanced users can also adjust a small number of other tuning parameters, whose role will be discussed during the presentation of our algorithm. An outline of the paper is as follows. In <ref> we provide a detailed description of our algorithm, first presenting an overview of the main steps, and then providing details of how each step is realised in PathFinder. In <ref> we present some theoretical results underpinning our approach. In <ref> we discuss some further implementation details, and in <ref> we exhibit numerical results demonstrating the performance of our algorithm on a range of challenging integrals with large ω and complicated stationary point configurations. We end this introduction by remarking that integrals with coalescing stationary points are of fundamental importance in numerous applications, including the study of optics and high frequency (short wavelength) acoustics, where they describe the wave field in the vicinity of geometrical singularities (or “catastrophes”) in the classical ray-theoretic framework, Kelvin's celebrated ship-wave problem, and the theory of molecular collisions in quantum mechanics and theoretical chemistry. A catalogue of such integrals, along with links to relevant literature, can be found in <cit.>. In <ref> we show how PathFinder can be applied to accurately calculate these types of integrals. § ALGORITHM DESCRIPTION In this section we present our algorithm for the numerical approximation of (<ref>) when f is entire[When Γ is infinite we additionally implicitly assume that f is sufficiently well-behaved at infinity for the integral (<ref>) to converge. Note that in many cases of interest, numerical evaluation of the integral to high accuracy after path deformation only requires f to be analytic in a small (and shrinking with increasing ω) neighbourhood of the stationary points.] and g is a polynomial. We start with some definitions and basic facts. Let g(z) = ∑_j=0^Jα_jz^j, for some J∈, J≥ 1, and α_j∈, j=0,…,J, with α_J≠ 0. Then g has at most J-1 stationary points, which are the solutions of g'(z) = ∑_j=1^J jα_jz^j-1=0. We denote the set of all stationary points by . We define the valleys at infinity to be the sectors of angular width π/J centred on the angles := {(2(m-1)+1/2)π - (α_J)/J: m=1,…,J }. These have the property that if z=r^θ with θ∈ (v-π/(2J),v+π/(2J)) for some v∈ then ^ω g(z)→ 0 as r→∞. For each η∈∖ there exists a unique SD contour γ_η beginning at η and ending either at a stationary point ξ∈ or at a valley v∈, on which g(z)= g(η) for z∈γ_η (see, e.g., <cit.>). We let denote the set of finite endpoints of the integration contour Γ, which could have zero, one or two elements. We assume for now that any infinite endpoint of Γ is at one of the valleys v∈; see <ref> for extensions. 
We now provide a high-level overview of our algorithm. The following steps will be explained in more detail in sections <ref>-<ref>. * Compute the set of stationary points (the solutions of (<ref>)). * For each ξ∈, select a radius r_ξ>0 for which the function ^ω g(z) is considered “non-oscillatory” on the closed ball Ω_ξ of radius r_ξ centred at ξ. These balls may overlap. However, if two balls overlap significantly, indicating near coalescence, one of the stationary points (along with its associated ball) is removed from the set . This removal process continues recursively until no pair of balls is judged to overlap too much. We call {Ω_ξ}_ξ∈ the non-oscillatory balls, and their union Ω:=⋃_ξ∈Ω_ξ the non-oscillatory region. * Find the local minima of |^ω g(z)| on the boundary of the non-oscillatory region Ω. We call these points exits, and denote by the set of all exits. * For each η∈∪ (∖Ω), trace the SD contour γ_η from η, and determine whether * (i) γ_η enters Ω at some point z∈∂Ω∖{η}, or * (ii) γ_η converges towards a valley v∈ without entering Ω. We call points z∈∂Ω determined in case (i) entrances, and denote by the set of all entrances. * Construct a graph G with a vertex for each of the elements of , , , and . Add edges between the vertices of G as follows: * For each ξ∈, add an edge between each pair of elements of (∪∪∪)∩Ω_ξ. * For each pair ξ,ξ'∈, ξ≠ξ', for which Ω_ξ∩Ω_ξ'≠∅, add an edge between ξ and ξ', if not already added in the previous step. * For each η∈∪ (∖Ω), add an edge between η and the entrance z∈ or the valley v∈ to which the SD contour γ_η leads. Find the shortest path (in the graph-theoretic sense) between the vertices corresponding to the endpoints of Γ. * Generate quadrature nodes and weights for the evaluation of each of the contour integrals corresponding to the edges in the shortest path. For an edge between two points in the non-oscillatory region, use a straight-line contour. For an edge between an exit or an endpoint of Γ to an entrance or a valley, use a refined version of the SD contour traced in step 4. The union of all the contours corresponding to the edges of the shortest path defines the “quasi-SD deformation” of the original integration contour. Finally, use the quadrature nodes and weights to approximate the integrals over the contours in the quasi-SD deformation and sum them to obtain an approximation of the original integral (<ref>). In Figures <ref> and <ref> we illustrate the outcome of the above steps for the particular choice of phase function g(z)= z^7/7+z^6 (7/20+13/30i)+z^5 (-1047/2000+543/1000i)+z^4 (-4409/8000-5077/8000i) +z^3 (711/2000-4441/6000i)+z^2 (237/800-207/800i)+z (63/1000-77/2000i) and the parameters ω=40, a=-1.5, b=2, N=10, using the default parameter set for PathFinder (see Table <ref>). For this choice of g there is one order 2 stationary point and 4 order one stationary points. In Figure <ref> we plot these stationary points, along with their non-oscillatory balls, and the SD contours traced from the exits. Such plots can be generated in PathFinder by adding the optional flag. The ball centred at the stationary point ξ=- contains two entrances, reached by SD contours from the balls above. In Figure <ref> we plot the graph G, using the optional PathFinder input flag . This graph, in addition to edges corresponding to the SD contours shown in Figure <ref>, contains edges corresponding to contours between points in the non-oscillatory region, including connections within the two overlapping balls. 
The shortest path between a and b, which is highlighted with thick lines in Figure <ref>, corresponds to the quasi-SD deformation, the integral over which is equal to (<ref>) by Cauchy's Theorem. The integral is discretised using N quadrature points on each contour in the quasi-SD deformation that makes a non-negligible contribution to the integral (see <ref>) - these points are plotted in Figure <ref> in red. The process of computing all the SD contours and the selection of a subset thereof via the shortest path algorithm addresses problem (P1). Surrounding stationary points by balls, and only tracing SD contours outside the balls, means that we avoid having to determine the local structure of the SD contours and compute integrals along them near stationary points, addressing problem (P2). In the following subsections we provide further details of how we carry out the steps outlined above in PathFinder. §.§ Step 1 - Computing stationary points Computing the stationary points of g (the roots of g'(z)) requires us to find the complex roots of the polynomial (<ref>). In our implementation we compute stationary points using the Matlab command, which applies a companion matrix approach. We note that obtaining highly accurate values for the positions of stationary points is not critical to our algorithm, since the stationary points are enclosed in the non-oscillatory region and we never trace SD contours from them. Indeed, the difficulty in distinguishing numerically between multiple roots and roots of higher order contributes to the motivation for considering such non-oscillatory regions. §.§ Step 2 - Defining the non-oscillatory region The non-oscillatory region Ω was defined in (<ref>) to be a union of balls centred at the elements of . We choose the radii of the balls as follows. First fix some user-defined constant C_ ball>0. Then, given ξ∈, define r_ξ:= max{r>0: |z-ξ|≤ r⇒ω|g(z)-g(ξ)|≤ C_ ball}. This definition enforces an upper bound on the number of oscillations within each ball. Accordingly, the region Ω shrinks to the stationary points as ω→∞ and expands to fill the whole complex plane as ω→ 0. In our implementation we approximate r_ξ numerically as follows. Let N_ ball∈ be a user-defined parameter. For each n∈{1,…,N_ ball} we consider the ray {z=ξ + r^ 2π n/N_ ball, r>0}, and compute the smallest positive root r_n>0 of the function u_n(r):=ω^2|g(ξ + r^ 2π n/N_ ball)-g(ξ)|^2-C_ ball^2, which is a polynomial in r of degree 2J. For this root-finding problem we use the Matlab command; in case this command produces no positive real roots (because of stability issues) we resort to a bisection approach instead. We then take as our approximation to r_ξ the positive number max_n∈{1,…,N_ ball} r_n. When elements of are close it is natural to amalgamate their respective non-oscillatory balls. To do this systematically we adopt an iterative approach. Let δ_ ball>0 be a user-defined parameter. * For each pair ξ_1,ξ_2∈ compute d_ξ_1,ξ_2:=|ξ_1-ξ_2|/max(r_ξ_1,r_ξ_2). * If min_ξ_1,ξ_2 d_ξ_1,ξ_2<δ_ ball let ξ_1,ξ_2 be a pair realising the minimum. Remove from whichever of ξ_1,ξ_2 has the smaller associated ball radius (or choose arbitrarily between them if r_ξ_1=r_ξ_2). * Repeat the previous step until either min_ξ_1,ξ_2 d_ξ_1,ξ_2≥δ_ ball, or there is only one element of remaining. §.§ Step 3 - Determining the exits The exits associated with each ξ∈ are defined to be the local minima on ∂Ω_ξ∖⋃_ξ'∈,ξ'≠ξΩ_ξ'^∘ of the function |^ω g(z)|, equivalently of the function - g(z). 
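For illustration, Steps 1 and 2 can be sketched in a few lines. The fragment below is a schematic Python rendering rather than the Matlab implementation (which uses a polynomial root finder with a bisection fallback, as described above); the values of C_ball and N_ball here are arbitrary placeholders rather than PathFinder's defaults, and the root on each ray is located by a simple bisection, which is adequate when u_n is monotone on [0, r_max].

```python
import numpy as np

def stationary_points(g_coeffs):
    """Roots of g'(z); g_coeffs lists the coefficients of g, highest degree first."""
    return np.roots(np.polyder(g_coeffs))

def ball_radius(xi, g, omega, C_ball=2.0 * np.pi, N_ball=16, r_max=1e3):
    """Approximate r_xi = max{r : |z - xi| <= r  =>  omega |g(z) - g(xi)| <= C_ball}
    by locating, on N_ball rays from xi, the radius where the bound is first violated,
    and taking the maximum over the rays as in Step 2."""
    radii = []
    for n in range(N_ball):
        d = np.exp(2j * np.pi * n / N_ball)
        u = lambda r: omega * abs(g(xi + r * d) - g(xi)) - C_ball
        lo, hi = 0.0, r_max                     # u(0) = -C_ball < 0; r_max caps the search
        for _ in range(80):                     # bisection for the sign change of u
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if u(mid) < 0 else (lo, mid)
        radii.append(lo)
    return max(radii)
```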
For each ξ∈ the function - g(z) restricted to the ∂Ω_ξ is a trigonometric polynomial. Using this fact, in our implementation we determine the local minima of - g(z) on ∂Ω_ξ by finding the roots of the derivative of - g(z) in the angular direction (which is also a trigonometric polynomial) by the companion matrix approach of <cit.>, and keep only the real roots corresponding to local minima. We discard all those minima corresponding to points inside ⋃_ξ'∈,ξ'≠ξΩ_ξ'^∘, and add the remaining minima to the set . §.§ Step 4 - Tracing the SD contours Given η∈∪ (∖Ω), the SD contour γ_η beginning at η is the unique curve on which g(z) is constant, with - g(z) decreasing along γ_η. It can be parametrized in terms of a parameter p≥ 0 as z=h_η(p), where h_η(p) is defined implicitly by g(h_η(p)) = g(η) + p, h_η(0)=η. Differentiating (<ref>) with respect to p gives h_η'(p) = /g'(h_η(p)) =:F(h_η(p)), h_η(0)=η, which is a first order ODE initial value problem for h_η(p). By solving (<ref>) numerically one can trace the contour γ_η until it either (i) enters the non-oscillatory region Ω, or (ii) one can be sure that it will tend to a valley v∈, without entering Ω. For (ii) we appeal to the theoretical result in Theorem <ref>, which provides a “region of no return” R_v associated with each valley v∈ for which it is guaranteed that if an SD contour enters R_v it will never leave R_v, and will converge to v. Staying away from stationary points ensures that the factor 1/g' in the right-hand side of (<ref>) does not get too large. In our implementation we trace the SD contour using a predictor-corrector approach, combining a forward Euler step for (<ref>) and a Newton iteration for (<ref>), to generate approximations h_η^(n)≈ h_η(p_n) on a mesh 0=p_0<p_1<p_2<…<p_n_ max, where the total number of steps n_ max is determined as part of the algorithm, as discussed below. As the initial value we take h_η^(0)=η. Then, given h_η^(n), to compute h_η^(n+1) we first apply a forward Euler step for the ODE (<ref>), with adaptive step length p_n+1-p_n = δ_ ODEmin(2|g'(h_η^(n))|^2/|g”(h_η^(n))|,|g'(h_η^(n))| (h_η^(n),)), where δ_ ODE∈(0,1) is a user-specified parameter. The first argument of the minimum is included to ensure stability of the solver - note that F'(h) = - g”(h)/(g'(h))^2 and we might expect instability if the local step length were as large as 2/|F'(h)| = 2|g'(h)|^2/|g”(h)|. The second argument is included to ensure that the solver “slows down” as it approaches the non-oscillatory region Ω, so that we can detect whether γ_η enters Ω or not. To ensure that |h_η^(n+1)-h_η^(n)| ≤δ_ ODE d, where d:=(h_η^(n),)=min_ξ∈|h_η^(n)-ξ|, we require that p_n+1-p_n≤δ_ ODE d/|F(h_η^(n))|=δ_ ODE d|g'(h_η^(n))|. This also ensures that h_η^(n+1) remains far enough from , so that (<ref>) doesn't get too large. After each forward Euler step, we correct h_η^(n+1) by running a Newton iteration to enforce (<ref>) (with p=p_n+1 fixed), until the Newton step size |g(h_η^(n+1))-g(η)- p_n+1/g'(h_η^(n+1))| is smaller than δ_ coarse(h_η^(n+1),), for some user-specified tolerance δ_ coarse>0. We repeat this process for n=0,1,2,… until either (i) h_η^(n)∈Ω_ξ for some ξ∈, in which case we add z=h_η^(n) to the set of entrances. Note that in general the point z=h_η^(n) will lie inside Ω_ξ^∘ rather than on ∂Ω_ξ, but will be closer to ∂Ω_ξ the smaller δ_ ODE is; or (ii) h_η^(n)∈ R_v for some v∈, in which case, by Theorem <ref>, γ_η converges to the valley v. 
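The predictor–corrector tracing of Step 4 can be summarised in the following sketch. It is an illustrative Python rendering, not the PathFinder code: the tolerances are placeholder values rather than PathFinder defaults, the distance to the stationary points is approximated by the distance to the ball centres, and the test for the regions of no return (defined next) is passed in as a callable in_no_return.

```python
import numpy as np

def trace_sd_contour(eta, g, dg, ddg, balls, in_no_return,
                     delta_ode=0.1, delta_coarse=1e-3, max_steps=100000):
    """Trace the SD contour h(p), with g(h(p)) = g(eta) + i*p, by forward Euler
    prediction and Newton correction.  Stops on entering a non-oscillatory ball
    ('entrance') or a region of no return ('valley')."""
    h, p = complex(eta), 0.0
    pts = [h]
    for _ in range(max_steps):
        d = min(abs(h - c) for c, _ in balls)          # ~ distance to stationary points
        step = delta_ode * min(2.0 * abs(dg(h))**2 / max(abs(ddg(h)), 1e-300),
                               abs(dg(h)) * d)         # adaptive step length
        p += step
        h = h + step * 1j / dg(h)                      # forward Euler for h'(p) = i/g'(h)
        for _ in range(50):                            # Newton corrector on g(h) = g(eta) + i*p
            dz = (g(h) - g(eta) - 1j * p) / dg(h)
            h -= dz
            if abs(dz) <= delta_coarse * d:
                break
        pts.append(h)
        for centre, radius in balls:
            if abs(h - centre) <= radius:
                return pts, ("entrance", h)
        hit, valley = in_no_return(h)
        if hit:
            return pts, ("valley", valley)
    raise RuntimeError("SD contour tracing did not terminate")
```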
Here the “region of no return” R_v is defined by R_v:={z∈: | z-v|_2π <π/(2J) and G(|z|,| z-v|_2π)>0}, where |θ|_2π:=min_m∈|θ-2π m|, and, for r>0 and θ∈(0,π/(2J)), G(r,θ) := J|α_J|r^J-1min(1/√(2),cosJθ) - ∑_j=1^J-1 j|α_j|r^j-1. For further explanation of the meaning of R_v see <ref> below. A necessary condition for a point z to lie in R_v is that |z|≥ r_*, where r_*>0 is the unique positive solution of the polynomial equation G(r_*,π/(4J))=0, i.e. the solution of J|α_J|r_*^J-1/√(2)= ∑_j=1^J-1 j|α_j|r_*^j-1. Having found r_* once and for all (using the Matlab commnad), to check that a point z lies in R_v we first check that |z|≥ r_*. If so, we then check that | z-v|_2π<π/(2J). If so, we then check that G(|z|,| z-v|_2π)>0. The point of introducing r_* is so that we don't compute G(|z|,| z-v|_2π) unless absolutely necessary. In either case, tracing of the SD contour stops and we set n_ max=n for this contour. §.§ Step 5 - Finding the shortest path The construction of the graph G requires no further explanation. To find the shortest path in G between the endpoints of the original contour Γ we apply the standard Dijkstra shortest path algorithm <cit.>. §.§ Step 6 - Evaluating the contour integrals The quasi-SD contour deformation corresponding to the graph-theoretic shortest path between the endpoints of G calculated during step 5 involves integrals over three types of contour: Type 1: Straight line contours between points in the non-oscillatory region; Type 2: Infinite SD contours from exits/endpoints to valleys; Type 3: Finite SD contours from exits/endpoints to entrances. Some of these contours will make a larger contribution to the value of the original integral (<ref>) than others. It is natural to neglect contours that make a negligibly small contribution. In our implementation, we only compute the contribution from a contour γ in the quasi-SD deformation if at least one of the finite endpoints η of γ satisfies |^ω g(η)|/M>δ_ quad, where δ_ quad≥ 0 is a small, user-specified parameter and M:=max |^ω g(ξ)|, where the maximum is taken over all ξ∈∪∪ appearing in the shortest path corresponding to the quasi-SD deformation. In our implementation, for Type 1 contours we use Gauss-Legendre quadrature, for Type 2 contours we use either Gauss-Laguerre quadrature (which is the default choice in PathFinder) or truncated Gauss-Legendre quadrature, and for Type 3 contours we use (possibly truncated) Gauss-Legendre quadrature, as detailed below. By default our implementation uses the same number N of quadrature points on each contour in the quasi-SD deformation whose contribution we compute, regardless of the type of integral (we comment on this in <ref>). Accordingly, if N_ cont is the number of these contours then the total number of quadrature points used in the algorithm, N_ tot, is given by N_ tot = N N_ cont. §.§.§ Evaluation of integrals over Type 1 contours Let z_0,z_1∈Ω, and let γ be the straight-line contour in starting at z_0 and ending at z_1, parametrized by z_[z_0,z_1](t) = 1/2((z_1-z_0)t + (z_0+z_1)), t∈[-1,1]. Given N∈, let t_m^ Leg and w_m^ Leg, for m=1,…,N, denote the nodes and weights for standard N-point Gauss-Legendre quadrature on [-1,1]. Our quadrature approximation to the integral over γ is then: ∫_γf(z)^ω g(z) z ≈z_1-z_0/2∑_m=1^N w_m^ Leg f(z_[z_0,z_1](t_m^ Leg))^ω g(z_[z_0,z_1](t_m^ Leg)). §.§.§ Evaluation of integrals over Type 2 contours Let η∈∪(∖Ω) be such that the SD contour γ from η leads to a valley. 
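To illustrate the quadrature rules named above (their precise forms are given in the subsections that follow), the sketch below evaluates a Type 1 contribution by Gauss–Legendre and a Type 2 contribution by Gauss–Laguerre. It is a schematic Python rendering rather than the Matlab implementation, and assumes vectorised callables for f, g and g', together with a routine h_eta returning the Newton-refined SD parametrisation h_η at the scaled nodes.

```python
import numpy as np

def type1_line_integral(f, g, omega, z0, z1, N):
    """Gauss-Legendre rule for a straight-line (Type 1) contour from z0 to z1."""
    t, w = np.polynomial.legendre.leggauss(N)          # nodes/weights on [-1, 1]
    z = 0.5 * ((z1 - z0) * t + (z0 + z1))
    return 0.5 * (z1 - z0) * np.sum(w * f(z) * np.exp(1j * omega * g(z)))

def type2_sd_integral(f, g, dg, h_eta, eta, omega, N):
    """Gauss-Laguerre rule for an infinite SD (Type 2) contour starting at eta,
    using the parametrization g(h(p)) = g(eta) + i*p with p = ptilde/omega."""
    t, w = np.polynomial.laguerre.laggauss(N)          # weight exp(-p) on [0, inf)
    h = h_eta(t / omega)                               # refined SD contour points
    ftilde = f(h) * 1j / dg(h)                         # f(h(p)) h'(p),  h'(p) = i/g'(h(p))
    return np.exp(1j * omega * g(eta)) / omega * np.sum(w * ftilde)
```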
Parametrizing γ by (<ref>), for p∈[0,∞), noting (<ref>), and rescaling p=p̃/ω, we have ∫_γf(z)^ω g(z) z = ^ω g(η)/ω∫_0^∞f̃(p̃)^-p̃ p̃, where f̃(p̃):=f(h_η(p̃/ω))h_η'(p̃/ω)=f(h_η(p̃/ω))/g'(h_η(p̃/ω)). By tracing contours outside of Ω, the contours remain a positive distance from . This ensures that f̃ is analytic in a complex neighbourhood of [0,∞). By default, PathFinder evaluates the integral on the right-hand side of (<ref>) by Gauss-Laguerre quadrature. Let t_m^ Lag and w_m^ Lag, for n=1,…,N, denote the standard Gauss-Laguerre nodes and weights on [0,∞). Our quadrature approximation to the integral over γ is then: ∫_γf(z)^ω g(z) z ≈^ω g(η)/ω∑_m=1^Nw_m^ Lagf̃(t_m^ Lag). To evaluate f̃(t_m^ Lag) we need accurate computations of h_η(t_m^ Lag/ω) for m=1,…,N. For this, for each m we run a Newton iteration on (<ref>) with p=t_m^ Lag/ω fixed, until the magnitude of the increment is smaller than a user-specified tolerance δ_ fine>0. Typically we take δ_ fine to be considerably smaller than the tolerance δ_ coarse used in the Newton iteration in step 4, since when carrying out quadrature we require higher accuracy in our approximation of the SD contour than is required for determining the global structure of the quasi-SD deformation in step 4. As the initial guess for the Newton method we use a piecewise linear interpolant of the points {(p_0,h_η^(0)),(p_1,h_η^(1)),…,(p_n_ max,h_η^(n_ max))} computed in step 4, where n_ max denotes the total number of steps taken in the ODE solve in step 4 before the contour tracing algorithm terminated. If p_n_ max<t_N^ Lag/ω then before running the Newton iteration we first need to restart the contour tracing algorithm of step 4 to extend the SD contour until p_n_ max≥ t_N^ Lag/ω, so that there are points to interpolate between. As an alternative, one can evaluate the integral over a Type 2 contour using truncated Gauss-Legendre quadrature, as suggested in <cit.>. To activate this alternative in PathFinder one should add the optional input . In this case we truncate the integral to ∫_γf(z)^ω g(z) z ≈^ω g(η)/ω∫_0^Pf̃(p̃)^-p̃ p̃, for some P>0, then apply Gauss-Legendre quadrature on [0,P], to obtain the approximation ∫_γf(z)^ω g(z) z ≈P^ω g(η)/2ω∑_m=1^Nw_m^ Legf̃(z_[0,P](t_m^ Leg))^-z_[0,P](t_m^ Leg), where we compute h_η(z_[0,P](t_m^ Leg/ω)) (which is required for the evaluation of f̃(z_[0,P](t_m^ Leg))) by the same Newton iteration discussed above for h_η(t_m^ Lag/ω). For the truncation point P we take P = L, where L := -log(δ_ quad M /|^ω g(η)|), which describes the point at which the magnitude of the exponential part of the integrand drops below δ_ quad times its maximum value M on the quasi-SD deformation. §.§.§ Evaluation of integrals over Type 3 contours Let η∈∪(∖Ω) be such that the SD contour γ from η leads to an entrance z∈. In this case we apply (possibly truncated) Gauss-Legendre quadrature as in formulas (<ref>) and (<ref>), but now with P = min(p_n_ max/ω,L), where p_n_ max is defined as in <ref> and L is defined as in <ref>. In the case where the minimum is attained by p_n_ max/ω, so that the whole contour is considered, a potential inconsistency arises, because the application of the higher accuracy Newton iteration described in <ref> for the calculation of h_η(z_[0,P](t_m^ Leg/ω)) corresponds implicitly to a slight shifting of the endpoint of the contour γ away from the entrance z=h_η^(n_ max) added to the graph G in step 5. 
To avoid this inconsistency, in our implementation, in step 4, whenever the contour tracing terminates in case (i), we run a Newton iteration on the final point h_η^(n_ max) with the high accuracy tolerance δ_ fine, before adding it to the list of entrances . Note that this may mean that h_η^(n_ max) lies very slightly outside Ω. § THEORETICAL RESULTS In this section we collect some theoretical results that motivate the design of our algorithm. §.§ Removal of stationary points In <ref> we described our algorithm for removing stationary points from the set when they are close. When removing stationary points and their associated non-oscillatory balls, we need to ensure that the removed stationary points still lie inside one of the remaining non-oscillatory balls, so that we don't encounter any stationary points along the trajectory in our ODE solve for the SD contour tracing (see the discussion in <ref> below). In this section we provide a sufficient condition on the parameter δ_ ball for this to be guaranteed. Suppose that in the removal algorithm of <ref>, n stationary points have been removed from . Then for any stationary point ξ that was removed, there exists ξ'∈ such that |ξ-ξ'|≤ nδ_ ball r_ξ'. We proceed by induction on n. The result is trivially true for n=0. Assume that it is true after the removal of n points, and suppose that the (n+1)st point is now to be removed. Let ξ_1,ξ_2 denote the pair of points selected as realising min_ξ_1,ξ_2 d_ξ_1,ξ_2, and without loss of generality suppose that ξ_2 is the point to be removed (so that r_ξ_1≥ r_ξ_2). Then |ξ_2-ξ_1|≤δ_ ballr_ξ_1≤ (n+1) δ_ ballr_ξ_1, so the claimed property holds for ξ_2. Furthermore, by the inductive hypothesis, for each point ξ previously removed, there exists ξ'∈ such that |ξ-ξ'|≤ nδ_ ball r_ξ'. If ξ'≠ξ_2 then ξ' will still be present in after the removal of ξ_2, and |ξ-ξ'|≤ nδ_ ball r_ξ'≤ (n+1)δ_ ball r_ξ'. On the other hand, if ξ'=ξ_2 then ξ' will not be present in after the removal of ξ_2, but ξ_1 will be, and by the triangle inequality |ξ-ξ_1|≤ |ξ-ξ_2|+ |ξ_2-ξ_1| ≤ nδ_ ball r_ξ_2 + δ_ ball r_ξ_1≤ (n+1)δ_ ball r_ξ_1, completing the inductive step. As a consequence, we obtain the following. If J>2 and 0<δ_ ball≤ 1/(2(J-2)) then, after the removal algorithm has run, for every stationary point ξ there exists ξ'∈ such that ξ∈Ω_ξ' and (ξ,∂Ω_ξ')≥ r_ξ'/2. §.§ Region of no return for SD contours The following result establishes a region of no return: once an SD contour enters this region, we can say with certainty which valley it will converge to. The idea behind this result is that in the region of no return the highest degree term α_J z^J of the polynomial g is sufficiently dominant over the lower degree terms that the SD contours inside the region converge to the same valley as those corresponding to the monomial phase α_J z^J. Let g, and R_v, for v∈, be as in (<ref>), (<ref>) and (<ref>). The regions R_v, v∈, contain no stationary points of g. Furthermore, if an SD contour enters R_v for some v∈, it never leaves R_v. That R_v contains no stationary points follows because if G(r,θ)>0 then J|α_j||z|^J-1>∑_j=1^J-1 j|α_j||z|^j-1≥| ∑_j=1^J-1 jα_jz^j-1|, so that g'(z)≠0. Now fix v∈. Given θ'∈(0,π/(2J)) and R>0 we define the sector S_v(R,θ'):={z∈: |z-v|_2π<θ' and |z|>R}, with |·|_2π defined as in (<ref>). We also define the function G̃(R,θ') := |J||α_J|R^J-1min(sinJ θ',cosJθ') - ∑_j=1^J-1 j|α_j|R^j-1, which for each fixed θ' is a polynomial in R of degree J-1. 
We claim that if θ'∈(0,π/(2J)) and G̃(R,θ')>0, then if an SD contour enters S_v(R,θ') it never leaves S_v(R,θ'). To prove this, we show that if an SD contour intersects ∂ S_v(R,θ') then the direction of descent always points into S_v(R,θ'). Since ∂ S_v(R,θ') is the union of the sets {z∈:| z-v|_2π≤θ' and |z|=R} and {z∈:| z-v|_2π=θ' and |z|∈[R,∞)}, it suffices to show that, in polar coordinates (r,θ), ∂ g/∂ r>0, for |θ-v|_2π≤θ' and r=R, ∓1/r∂ g/∂θ>0, for θ=v±θ' (mod 2π) and r≥ R. For (<ref>), let |θ-v|_2π≤θ'. Since ∂ g(r^θ)/∂ r =∑_j=1^J jα_j^ jθr^j-1 and [α_J ^ J θ] = |α_J|cos(J|θ-v|_2π) (using the definition of v) we have that ∂ g(r^θ)/∂ r ≥ J|α_J|r^J-1cos(J|θ-v|_2π) - ∑_j=1^J-1 j|α_j|r^j-1, so a sufficient condition for (<ref>) to hold is that J|α_J|R^J-1cos(Jθ') - ∑_j=1^J-1 j|α_j|R^j-1>0. For (<ref>), let θ=v±θ' (mod 2π). Since 1/r∂ g(r^θ)/∂θ =∑_j=1^J jα_j^ jθr^j-1, and [α_J ^ J θ]=[α_J ^ J (v±θ')] = ∓|α_J|sin(Jθ') we have that .∓1/r∂ g(r^θ)/∂θ|_θ=v±θ' ≥ J|α_J|r^J-1sin(Jθ') - ∑_j=1^J-1 j|α_j|r^j-1=:ϕ(r). The function ϕ(r) has the property that if R>0 and ϕ(R)>0 then ϕ(r)>0 for all r≥ R. To see this, note that ϕ(r) = r^J-1(J|α_J|sin(Jθ') - ∑_j=1^J-1 j|α_j|r^j-J), and that the term in brackets is a strictly decreasing function of r, which tends to -∞ as r→ 0 and to J|α_J|sin(Jθ')>0 as r→∞. Hence a sufficient condition for (<ref>) is that ϕ(R)>0, i.e. J|α_J|R^J-1cos(Jθ') - ∑_j=1^J-1 j|α_j|R^j-1>0. Since the assumption G̃(R,θ')>0 implies both (<ref>) and (<ref>), our claim is proved. The statement of the theorem then follows by noting that the region R_v is the union of all the sectors S_v(R,θ') such that 0<θ'≤π/(2J) and G̃(R,θ')>0. We note that if 0<θ'<π/(4J) then sinJθ'<sinπ/4, so that if G̃(R,θ')>0 then G̃(R,π/(4J))>0. This implies that the union can actually be taken over π/(4J)≤θ'<π/(2J) only, justifying the definition of the function G in (<ref>). §.§ Quadrature error In <ref> we defined the non-oscillatory region as a union of balls on which the exponential ^ω g(z) undergoes a bounded number of oscillations. Here we show that the definition (<ref>) strikes a balance between the accuracy of our quadrature approximations to the integrals outside and inside this region. §.§.§ Quadrature in the non-oscillatory region The Type 1 straight line contour integrals between points in the non-oscillatory region are evaluated using Gauss-Legendre quadrature, as detailed in <ref>. To assess the accuracy of this we note the following theorem, which is a simple consequence of the standard error analysis presented in <cit.>. Let z_0,z_1∈. Suppose that γ is a straight-line contour in starting at z_0 and ending at z_1 and that there exists ρ>0, C>0 and ξ_*∈ such that f is analytic and bounded in z_[z_0,z_1](B_ρ), where B_ρ is a standard Bernstein ellipse (relative to [-1,1]) and z_[z_0,z_1] is defined as in (<ref>), and ω|g(ξ_*)-g(z)|≤ C, z∈ z_[z_0,z_1](B_ρ). Let I and Q denote the left- and right-hand sides of (<ref>), respectively. Then, for some C̃>0, depending only on ρ, | I-Q | ≤C̃|z_1-z_0|f_L^∞(z_[z_0,z_1](B_ρ))^-ω[g(ξ_*)]^Cρ^-2N. Noting that I = ^ω g(ξ_*)∫_γf(z)^ω (g(z)-g(ξ_*)) z and that |f(z)^ω (g(z)-g(ξ_*))|≤f_L^∞(z_[z_0,z_1](B_ρ))^C, z∈ z_[z_0,z_1](B_ρ), the result follows from <cit.>. Theorem <ref> motivates the definition of the non-oscillatory region in (<ref>). Indeed, if the assumptions of Theorem <ref> hold with ρ and C independent of ω then the bound (<ref>) guarantees ω-independent exponential convergence for ω bounded away from zero. 
However, even when (<ref>) is satisfied, the relationship between ξ_*, ρ, C and ω is beyond our control in general because the ellipse may extend beyond the non-oscillatory region, so that C>C_ ball. Thus we cannot control the factor ^C entirely based on condition (<ref>). Still, the bound (<ref>) shows that the quadrature error decreases with increasing N. The precise rate of decrease depends on a balance between the decay of ρ^-2N and the growth of ^C and f_L^∞(z_[z_0,z_1](B_ρ)) for increasing ρ. We quantify this in the special case of monomial phase in <ref>. §.§.§ Quadrature for the SD contours For Type 2 or Type 3 integrals along SD contours we use either Gauss-Laguerre or (possibly truncated) Gauss-Legendre quadrature, as detailed in <ref> and <ref>. We expect these rules to converge rapidly to the true value of the integral as the number of quadrature points N tends to infinity, provided that the integrand is analytic and bounded in a suitable region of the complex p̃ plane. For Gauss-Laguerre the following result appeared recently in <cit.>. Suppose that f̃ is analytic inside and on the parabola P_ρ:={z∈:√(-z)=ρ} for some ρ>0, where the branch cut is along the positive real axis and √(-z) is real and positive on the negative real axis, that f̃ grows at most algebraically as z→∞ inside the parabola, and that the integral 𝒦_ρ:=∫_P_ρ|^-z√(-z)f̃(z)| z is finite. Let I and Q denote the left- and right-hand sides of (<ref>), respectively. Then | I-Q | ≤𝒦_ρ^-ω[ g(η)]/ω^-4ρ√(N). This result implies that our Gauss-Laguerre quadrature approximation should converge root-exponentially as N→∞, provided that f is sufficiently well-behaved at infinity. The presence of singularities in the complex p̃-plane limits the size of ρ, and hence the convergence rate. We know from (<ref>) that our integrand is singular at points p̃∈ where g'(h_η(p̃/ω))=0, i.e. where h_η(p̃/ω)=ξ for some stationary point ξ. Since we only trace SD contours outside the non-oscillatory region (which contains the stationary points), we know that there cannot be singularities on the SD contour itself. If the start point η lies on an SD contour emanating from a stationary point ξ then we expect there to be a singularity in the p̃-plane at p̃=ω[g(ξ)-g(η)]<0. We show in <ref> that in the special case of monomial phase this singularity lies at p̃=-C_ ball, which implies root-exponential convergence independent of ω for ω bounded away from zero. Determining the locations of the other possible singularities in the complex p̃-plane is more challenging, since it involves study of the (multivalued) inverse of g. We leave further theoretical investigation of this to future work. §.§.§ Results for monomial phase It is instructive to consider the special case of a monomial phase g(z)=z^J for some J∈. In this case there is a single stationary point of order J-1 at ξ=0, and g(0)=0. Following the prescription (<ref>), we obtain a ball radius r_0=(C_ ball/ω)^1/J. We first consider a Type 1 integral in the non-oscillatory region. For simplicity we choose f(z)≡ 1. Specifically, we consider the evaluation of the integral ∫_0^r_0^θ^ω g(z) z, for some θ∈[0,2π]. Taking ξ_*=0, we can apply Theorem <ref> with any ρ>1, and the resulting scaled and translated Bernstein ellipse surrounding [0,r_0^θ] is tightly contained in the disc |z|≤ sr_0, where ρ and s are related by ρ = 2s-1+√((2s-1)^2-1) =2s-1+2√(s^2-s). Hence condition (<ref>) is satisfied, independently of θ, with C = C_ ball s^J, which is independent of ω but dependent on J. 
When s is large, we have ρ≈ 4s, and in this regime the error bound provided by (<ref>) for Gauss-Legendre quadrature is approximately proportional to (C_ ball/ω)^1/J^C_ ball s^J(4s)^-2N. As a function of s, with J and N fixed, this quantity is minimised where its s-derivative vanishes, which occurs where C_ ballJs^J -2N=0, i.e. where s = (2N/C_ ballJ)^1/J. Accordingly, the error bound is approximately proportional to (C_ ball/ω)^1/J 16^J (8eN/C_ ballJ)^-2N/J. Thus we expect super-exponential convergence as N→∞ for fixed J. However, we expect the convergence to be slower the larger J is. Next we consider a Type 2 integral over an SD contour, again with f(z)≡ 1. Specifically, we consider the evaluation of the integral ∫_r_0^ v^∞^ v^ω g(z) z, where v=((2j+1/2)π)/J for some j∈{1,…,J}. Following our method, the contour is parametrized by h_η(p) = (r_0^J + p)^1/J^ v, p∈ [0,∞), and, recalling (<ref>) and (<ref>), after rescaling p=p̃/ω the integral becomes ^-C_ ball^ v/ω^1/J J∫_0^∞ (C_ ball + p̃)^1/J - 1^-p̃ p̃. The integrand has a branch point at p̃ = -C_ ball, but we note that the distance between the branch point and the positive real p̃-axis equals C_ ball, which is independent of both ω and J. For truncated Gauss-Legendre the relevant theory can be found in <cit.> (and see also <cit.>). Due to the branch point at p̃=-C_ ball, as N→∞ we obtain exponential convergence to the integral over the interval [0,P], where P is given by either (<ref>) or (<ref>). In the case where P=L, by the definition of L in (<ref>), we expect the truncation error to have relative order δ_ quad. §.§.§ Number and distribution of quadrature points PathFinder uses a fixed number N of quadrature points on each contributing contour, and that number is the same both for integrals within and outside the non-oscillatory region, i.e., for Gauss–Legendre and Gauss–Laguerre quadrature. Thus, increasing the single parameter N provides a way of uniformly improving accuracy. The theoretical results in this section (specifically, Theorems <ref> and <ref>) imply that the precise rate of improvement with respect to N depends on the type of integral being approximated. They suggest even that a different strategy for the distribution of quadrature points may be superior. Indeed, exponential convergence of Gauss–Legendre for Type 1 integrals in the non-oscillatory region is not balanced with root-exponential convergence of Gauss–Laguerre for Type 2 integrals outside. Similarly, convergence rates of Gauss-Laguerre and truncated Gauss-Legendre outside the non-oscillatory region are different. Our choice of a fixed parameter N is inspired on the one hand by simplicity, and on the other hand by the lack of robust methods to optimize parameters in alternative schemes. For example, we have shown in <ref> that the convergence rate of Gauss-Legendre for Type 1 integrals may depend on the order of nearby stationary points. While this can be quantified precisely for the case of monomial phase, it is not at all clear how to generalise this analysis when a cluster of multiple stationary points is present. Hence, stationary point order is a quantity that we deliberately do not explicitly compute, estimate or rely on in any way. Implicitly, of course, it plays a big role, and it does so mainly via the definition of the ball of the radius in (<ref>). The main practical benefit of the theoretical analysis of quadrature error in this section is the guarantee that N is a robust parameter for improving accuracy. 
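The monomial-phase model integrals analysed above can be evaluated directly with a few lines of code, which also makes the difference between the two quadrature rules easy to observe numerically. The sketch below (illustrative Python with assumed values of J, ω and C_ball; not PathFinder itself) treats the Type 1 integral with Gauss-Legendre and the rescaled Type 2 integral with Gauss-Laguerre, using adaptive quadrature along the relevant rays as the reference; on the ray arg z = v the integrand reduces to e^{-ω t^J}, so the Type 2 reference is straightforward.

import numpy as np
from numpy.polynomial.legendre import leggauss
from numpy.polynomial.laguerre import laggauss
from scipy.integrate import quad

J, omega, C_ball = 5, 50.0, 2.0 * np.pi        # illustrative values
r0 = (C_ball / omega) ** (1.0 / J)             # ball radius for the monomial phase z^J

# Type 1: int_0^{r0 e^{i theta}} e^{i omega z^J} dz along a straight line, with f = 1
theta = 0.3
seg = lambda t: np.exp(1j * omega * (t * np.exp(1j * theta)) ** J) * np.exp(1j * theta)
ref1 = quad(lambda t: seg(t).real, 0, r0)[0] + 1j * quad(lambda t: seg(t).imag, 0, r0)[0]

# Type 2: SD contour from r0 e^{iv} towards the valley at angle v; on this ray the
# integrand equals e^{-omega t^J}, which gives an easy reference value
v = 0.5 * np.pi / J
ref2 = np.exp(1j * v) * quad(lambda t: np.exp(-omega * t**J), r0, np.inf)[0]

for N in (4, 8, 16, 32):
    x, w = leggauss(N)                         # Type 1: Gauss-Legendre on [0, r0]
    Q1 = 0.5 * r0 * np.sum(w * seg(0.5 * r0 * (x + 1.0)))
    p, wl = laggauss(N)                        # Type 2: Gauss-Laguerre on the rescaled integral
    Q2 = np.exp(-C_ball) * np.exp(1j * v) / (J * omega ** (1.0 / J)) \
         * np.sum(wl * (C_ball + p) ** (1.0 / J - 1.0))
    print(N, abs(Q1 - ref1), abs(Q2 - ref2))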
Concerning possible future improvements, rather than attempting to optimize the quadrature point distribution a priori, we believe a more promising development would be the ability to invoke standard adaptive quadrature schemes along the contours for a given function f. However, it should be borne in mind that quadrature forms just one step in our algorithm, and that the other steps (particularly the SD path tracing) incur a non-negligible cost overhead, that should also be considered when trying to further optimize performance. § FURTHER IMPLEMENTATION ASPECTS In this section we discuss some additional aspects of the implementation of our algorithm in PathFinder. §.§ Default parameter values In Table <ref> we list the user-specified parameters in our algorithm, along with the default values used in all our numerical results in <ref>. These were determined as the result of extensive numerical experiments on a range of examples, not detailed here. Instructions on how to adjust these parameters away from their default values can be found at . §.§ Small ω While our algorithm is geared towards the case where ω is moderate or large, we make a brief comment on the case where ω is small. If Γ is infinite then the integral (<ref>) typically diverges for ω= 0. However, if Γ is finite then the integral converges for ω=0 and for small enough ω it is non-oscillatory. In PathFinder we detect and deal with this case in the following way. If both endpoints are finite, then before starting step 1 of the algorithm we construct non-oscillatory balls around the endpoints (using the process in <ref>) and check whether the balls intersect non-trivially. If so, we apply standard Gauss-Legendre quadrature to evaluate (<ref>); if not, the balls are discarded and we proceed with the rest of the algorithm. §.§ The case J=1 In the case J=1 (linear phase) there are no stationary points, and our algorithm simplifies dramatically. Furthermore, the SD contours are simply parallel straight lines in the direction of the single valley at angle π/2-(α_1), and there is no need to trace them numerically. Hence when J=1 PathFinder skips the ODE contour tracing step and exploits the exact characterization of the SD contours mentioned above. §.§ Specifying infinite endpoints In the description of our algorithm in <ref> we made the assumption that any infinite endpoint of the contour Γ should be at a valley v∈. PathFinder is actually more flexible than this. The user is permitted to specify an infinite endpoint at any θ∈ [v-π/(2J),v+π/(2J)] and the code will automatically adjust this to equal v. The case θ = v±π/(2J) is delicate because the highest order term in the phase does not provide exponential decay along the contour. Nonetheless, we include it, because in applications one often encounters this case, with the integral converging conditionally (under appropriate assumptions on f) and the contour deformation to v being justified by Jordan's Lemma. § NUMERICAL RESULTS In this section we present numerical results illustrating the performance of our algorithm and its implementation in PathFinder. All results in this section were produced using PathFinder Version 1.0 <cit.>. 
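Before turning to the examples, the exact characterisation of the J=1 case described above is easy to verify independently of PathFinder: for g(z)=α_1 z the SD contour from a point x is simply h_x(p)=x+ip/α_1, and each endpoint contribution reduces to a single Gauss-Laguerre sum. The following Python sketch does this for an entire amplitude; the choices of f, α_1 and ω are illustrative, not taken from the paper.

import numpy as np
from numpy.polynomial.laguerre import laggauss
from scipy.integrate import quad

alpha1, omega = 1.0, 40.0
a, b = -1.0, 1.0
f = lambda z: z**2 + 1.0                       # entire amplitude, so the deformation is valid

p, w = laggauss(20)
def endpoint(x):
    # contribution of the SD contour from x to the valley, after rescaling p = ptilde/omega:
    #   (i/(alpha1*omega)) e^{i omega alpha1 x} int_0^inf f(x + i p/(omega alpha1)) e^{-p} dp
    return (1j / (alpha1 * omega)) * np.exp(1j * omega * alpha1 * x) \
           * np.sum(w * f(x + 1j * p / (omega * alpha1)))

I_sd = endpoint(a) - endpoint(b)               # deformed contour: up from a, minus up from b

# reference: direct (oscillatory) quadrature on [a, b]
direct = lambda t: f(t) * np.exp(1j * omega * alpha1 * t)
I_ref = quad(lambda t: direct(t).real, a, b, limit=400)[0] \
        + 1j * quad(lambda t: direct(t).imag, a, b, limit=400)[0]
print(abs(I_sd - I_ref))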
§.§ A “generic” example We begin by illustrating the performance of PathFinder on the integral

I = ∫_{-1}^{1} (2z^4+7z^3+z^2+8z+2) e^{iω(3z^9 + z^8 + 4z^7 + z^6 + 5z^5 + 9z^4 + 2z^3 + 6z^2 + 5z + 3)} dz,

where, to convey the message that our approach is applicable to truly “generic” amplitudes and polynomial phase functions, the coefficients of f and g are chosen to be the first 5 digits of e and the first 10 digits of π, respectively. This can be approximated by PathFinder via the Matlab code (cf. (<ref>))

PathFinder(-1,1,@(z) 2*z.^4+7*z.^3+z.^2+8*z+2,[3 1 4 1 5 9 2 6 5 3],omega,N)

In Figure <ref> we plot the quasi-SD deformations and quadrature point distributions (using the PathFinder option) for (<ref>) for ω∈{0.01,1,5,50} and N=10. As explained in <ref>, for smaller ω the non-oscillatory balls are larger, and can overlap, while for larger ω they shrink around the stationary points. In more detail, in Figure <ref>(a) (ω=0.01), ω is small enough that both endpoints are inside the same non-oscillatory ball. Hence the integral is treated as non-oscillatory and is approximated by Gauss-Legendre quadrature along a single straight-line contour. In Figure <ref>(b) (ω=1), ω is still small enough that many of the balls overlap, and the quasi-SD deformation comprises two SD contours (one from an exit and one from an endpoint) plus four straight-line contours in the non-oscillatory region. In Figure <ref>(c) (ω=5), ω is large enough that only two balls overlap, and the quasi-SD deformation comprises five SD contours (two from endpoints, two from exits to valleys, and one from an exit to an entrance), plus four straight-line contours in the non-oscillatory region. Finally, in Figure <ref>(d) (ω=50), ω is so large that none of the balls overlap, and the quasi-SD deformation comprises eight contributing SD contours (two from endpoints and six from exits to valleys), plus three straight-line contours in the non-oscillatory region. However, in this case the two SD contours and one straight-line contour associated with the stationary point near 0.2+0.5i are judged to make a negligible contribution to the integral, so are not assigned any quadrature points. We emphasize that this intricate behaviour is fully automated, with no expert input required from the user. In Figure <ref>(a) we plot the error in the PathFinder approximation of (<ref>), compared to reference values computed using the Julia package when ω<500, and using PathFinder with N=500 when ω≥ 500. For fixed ω we observe rapid convergence as N→∞, at a rate that appears independent of ω. In Figure <ref>(b) we show the associated computation times, which remain bounded as ω increases. §.§ Coalescence and the Airy function The canonical example of an integral with two coalescing stationary points is provided by the integral representation for the Airy function, viz. (see <cit.>)

Ai(x) = ∫_{∞ e^{-iπ/3}}^{∞ e^{iπ/3}} e^{z^3/3-xz} dz = ∫_{∞ e^{-iπ/3}}^{∞ e^{iπ/3}} e^{i(-i)(z^3/3-xz)} dz, x∈ℝ,

which is of the form (<ref>) with f≡ 1, ω=1 and g(z;x)=-i(z^3/3-xz). Up to a change of variable this is the same example for which, as mentioned in <ref>, a bespoke, complex Gaussian quadrature rule was developed in <cit.>. Ai(x) can be approximated by PathFinder via Matlab code analogous to that above, with the empty amplitude input [] indicating that f≡ 1. In Figure <ref> we plot the quasi-SD deformations, along with the distribution of quadrature points for N=20, for the evaluation of Ai(x) at x∈{-5,-1,-0.5,0,5}.
Here one observes in detail how our algorithm deals with stationary point coalescence, as the non-oscillatory balls overlap, merge, then split. In Figure <ref>(a) the quasi-SD deformation comprises four SD contours from exits, plus two straight-line contours inside balls (which do not go via stationary points). In Figure <ref>(b) the balls overlap and this changes to two SD contours from exits plus three straight-line contours inside balls (which go via both stationary points). In Figure <ref>(c) the balls overlap enough that both stationary points are contained in both balls, so we get two SD contours from exits plus just two straight-line contours inside balls (which go via only one of the stationary points). In Figure <ref>(d) the balls have merged completely and in addition to the two SD contours from exits there is just one straight-line contour inside a ball (which does not go via the stationary point). In Figure <ref>(e) the balls have split again, but we see the same deformation structure as in Figure <ref>(d). Again, we emphasize that these calculations are fully automated. In Figure <ref> we show the accuracy of the PathFinder approximation for this example as a function of x∈[-10,4], for different N. Our reference is the built-in Matlab command for the Airy function. We note that between x=-3 and x=0 the error for the smaller values of N undergoes some jumps. These are due to the fact that near stationary point coalescence the topology of the quasi-SD deformation, the number of contours constituting it, and hence the total number of quadrature points along it (recall (<ref>)), all change discontinuously as a function of x (as illustrated in Figure <ref>). However, as N increases we see a clear, approximately exponential decrease in the error, and, although the rate of decrease depends slightly on x (because of the factors mentioned above), for N=30 we achieve approximately 10^-13 error uniformly across the interval. §.§ A high order stationary point - comparison with Mathematica's implementation of Levin quadrature We now consider the integral

I = ∫_{-1}^{1} sin(z) e^{iω z^9} dz,

which has a stationary point of order 8 at the origin. The integral (<ref>) can be approximated by PathFinder via an analogous command. Figure <ref> shows the quasi-SD deformation and quadrature point distribution obtained by PathFinder for ω=100,000 and N=50. There are small contributions from the endpoints, but the main contribution comes from the ball containing the stationary point. In the Mathematica documentation <cit.>, it is stated that oscillatory integrals with monomial phase functions such as (<ref>) can be evaluated efficiently using the built-in Mathematica function NIntegrate, via its implementation of Levin quadrature (which is described, e.g., in <cit.>). To do this one can use the Mathematica command:

NIntegrate[Sin[x] Exp[omega*I*x^9], {x, -1, 1}, Method -> {"LevinRule", "Kernel" -> Exp[omega*I*x^9]}]

In Figure <ref>(a) we show a plot of the relative accuracy of our PathFinder approximation, compared to the Mathematica approximation (using the default settings), as a function of ω, for different N values. For all three N values the accuracy of our approximation is approximately uniform in ω, and for N=50 our approximation agrees with Mathematica's to approximately 13 digits. In Figure <ref>(b) we report the corresponding computation times (averaged over 100 identical runs) for the Mathematica routine and for the PathFinder approximation with N=50.
These results were obtained on a laptop (i7-1185G7, 32GB RAM) running Mathematica v13.0 and Matlab v2021b. The results suggest that PathFinder is highly competitive with Mathematica for this problem, especially for large ω. §.§ Coalescence to a high order stationary point We now investigate the robustness of our algorithm in the presence of a large number of coalescing stationary points. Specifically, we consider the integral

∫_{-1}^{1} e^{iω(z^7/7 - r^6 z)} dz,

where r≥ 0 is a parameter controlling the coalescence. For r>0 there are 6 stationary points with |ξ|=r, namely the solutions of ξ^6 = r^6, and for r=0 there is a single stationary point of order 6. To evaluate this integral in PathFinder for a given r, one can use the command

PathFinder(-1,1,[],[1/7 0 0 0 0 0 -r^6 0],omega,N)

In Figure <ref> we plot the quasi-SD deformations and quadrature point distributions for several different r values, showing how the balls first intersect and then merge as r→0. In Figure <ref> we show convergence (with respect to a PathFinder reference with N=500) and CPU times (averaged over 100 runs) for fixed r=0.01. We see that both the error and the CPU time are essentially independent of ω in this case. In Figure <ref> we plot errors for two fixed N values, N=10 and N=50, as a function of r. We observe that as r→0, the error stays bounded. For N=10 the error jumps up between r=10^-3 and r=10^-2, at a point depending on ω. This represents the point at which the balls around the stationary points merge, resulting in a reduction of N_tot, and hence a reduction in accuracy. But after this point we observe no further reduction in accuracy as r→0. We remark that for sufficiently small r>0 the six stationary points are numerically indistinguishable, but this isn't a problem for our algorithm because in that case the problem will be treated identically to that of a monomial phase.
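For moderate ω the phase here produces only a handful of oscillations on [-1,1], so the integral can also be brute-forced with standard adaptive quadrature. The short Python check below (ω=50 is an illustrative choice, not a value used above) shows the value varying smoothly as r→0, consistent with the observation that nothing is lost as the stationary points coalesce.

import numpy as np
from scipy.integrate import quad

omega = 50.0

def coalescence_integral(r):
    f = lambda z: np.exp(1j * omega * (z**7 / 7.0 - r**6 * z))
    return quad(lambda z: f(z).real, -1, 1, limit=400)[0] \
           + 1j * quad(lambda z: f(z).imag, -1, 1, limit=400)[0]

for r in (0.5, 0.1, 0.01, 0.0):                # r = 0 is the pure monomial phase z^7/7
    print(r, coalescence_integral(r))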
In the notation of <cit.>, we consider the cusp catastrophe integral Ψ_2(x,y) = P(y,x) = ∫_-∞^∞^ (t^4+yt^2 + x t) t, where P is the Pearcey function, and the swallowtail catastrophe integral Ψ_3(x,y,z) = ∫_-∞^∞^ (t^5+zt^3 + yt^2 + x t) t. Both exhibit coalescence of stationary points on certain algebraic varieties (see <cit.>) on which both the first and second derivatives of the phase function vanish. In the case of (<ref>) this occurs when y = -3/2|x|^2/3, and for (<ref>) this occurs when 400x^3-360x^2z^2-135y^4-27y^2z^3 + 540xy^2z + 81xz^4 = 0. The integrals (<ref>) and (<ref>) can be computed in Pathfinder via the commands Figure <ref> shows plots of the magnitude of (<ref>) and (<ref>) (the latter over the plane z=-7.5), computed using PathFinder with the default settings and N=50. The plots agree qualitatively with those presented in <cit.>, and, for (<ref>), agree quantitatively (to all five decimal places presented) with the values presented in <cit.>. Computation times on a small desktop computer (Intel i7-4790, 32GB RAM) were less than a minute for the cusp (which required the computation of 10000 instances of (<ref>), averaging 0.005s per instance) and less than an hour for the swallowtail (which required 250000 instances of (<ref>), averaging 0.01s per instance). §.§.§ Generalisations In <cit.> the authors considered a family of generalisations of certain canonical cuspoid integrals, with integration no longer over the real line, but rather over a complex contour starting and ending at valleys at infinity, and possibly with a non-unit amplitude function. A specific aim of <cit.> was to investigate the relevance of such integrals to the study of the so-called “inflection point problem”, a canonical problem in wave scattering originally introduced over 50 years ago by Popov in <cit.>. This problem, which remains unsolved in closed form, concerns two-dimensional time-harmonic wave propagation near a boundary with an inflection point, and seeks a solution for the wave field near the inflection point that describes the transition from an incoming “whispering gallery wave” supported on the concave portion of the boundary, to outgoing “creeping waves” along the convex portion of the boundary, along with a scattered “searchlight” beam (for details and further references see <cit.>). In this context, in <cit.> the authors studied the family of integrals A_ij(x,y) = ∫_Γ_ij f(t) ^(2t^5/5 - xt^4/2 -yt^2) t, where f(t) is some amplitude to be specified, and Γ_ij denotes any contour from valley v_i to valley v_j, where v_j:=(2(j-1)+1/2)π/5, j=1,…,5. These integrals have stationary point coalescence on the cubic curve y+4x^3/27=0, which suggests that, by appropriately choosing f and Γ_ij, they might exhibit certain features of the solution of the inflection point problem. Indeed, in <cit.> it was shown that as x→-∞ near the cubic curve, the integral A_32 has the character of an incoming whispering gallery type wave, and that, as x→+∞ near the cubic curve, the integral A_52 has the character of an outgoing creeping wave. However, plots of the resulting fields could not be presented in <cit.> due to the lack of a suitable numerical evaluation method and implementation. Using PathFinder we are able to remedy this. In Figures <ref>(a) and <ref>(b) we provide plots of the magnitude of A_32 and A_52 with f≡ 1. To evaluate the integrals we used the PathFinder code We only plot A_52 above the cubic curve y+4x^3/27=0, because below this curve A_52 becomes exponentially large (cf. <cit.>). 
In Figures <ref>(c) and <ref>(d) we present corresponding plots of the modulated plane wave u(x_0,y_0) = A_ij(x,y) ^ k x_0, where (x_0,y_0) are outer variables, related to the inner variables (x,y) by x=k^1/5x_0, y=k^3/5y_0, which is an asymptotic solution of the Helmholtz equation Δ u + k^2 u = 0 as k→∞ in the region x_0=O(k^-1/5), y_0=O(k^-3/5) <cit.>. Here one observes the predicted incoming whispering gallery type behaviour of A_32 near the top of Figure <ref>(c) between x_0=-2 and x_0=-1, with oscillations giving way to an exponentially small field in the caustic shadow, and the predicted creeping wave type behaviour of A_52 near the bottom of Figure <ref>(d) between x_0=1 and x_0=2, with waves propagating along the cubic curve, shedding rays tangentially. In ongoing and future studies we plan to use PathFinder to further investigate the properties of integrals of the form (<ref>), and generalisations involving different choices of f and higher degree phase functions (see <cit.>), which we hope may shed new light on the inflection point problem and related problems in high frequency wave propagation. § ACKNOWLEDGEMENTS DPH and AG were supported by EPSRC grants EP/S01375X/1 and EP/V053868/1. DPH and AG would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme Mathematical Theory and Applications of Multiple Wave Scattering when work on this paper was undertaken. This work was supported by EPSRC grant number EP/R014604/1. Additionally, AG acknowledges support from KU Leuven project C14/15/05, and DH acknowledges support FWO-Flanders project G.088.622N. siam
http://arxiv.org/abs/2307.04010v1
20230708164045
Understanding the Efficacy of U-Net & Vision Transformer for Groundwater Numerical Modelling
[ "Maria Luisa Taccari", "Oded Ovadia", "He Wang", "Adar Kahana", "Xiaohui Chen", "Peter K. Jimack" ]
physics.flu-dyn
[ "physics.flu-dyn", "cs.CE", "cs.LG" ]
School of Civil Engineering, University of Leeds, Leeds, UK, Email: [email protected]. Department of Applied Mathematics, Tel-Aviv University, Tel-Aviv, Israel. School of Computing, University of Leeds, Leeds, UK. Department of Applied Mathematics, Tel-Aviv University, Tel-Aviv, Israel. School of Civil Engineering, University of Leeds, Leeds, UK. School of Computing, University of Leeds, Leeds, UK. This paper presents a comprehensive comparison of various machine learning models, namely U-Net <cit.>, U-Net integrated with Vision Transformers (ViT) <cit.>, and Fourier Neural Operator (FNO) <cit.>, for time-dependent forward modelling in groundwater systems. Through testing on synthetic datasets, it is demonstrated that U-Net and U-Net + ViT models outperform FNO in accuracy and efficiency, especially in sparse data scenarios. These findings underscore the potential of U-Net-based models for groundwater modelling in real-world applications where data scarcity is prevalent. § INTRODUCTION Groundwater numerical models, such as MODFLOW <cit.>, are crucial for water resource management, although they are computationally demanding. To alleviate this, surrogate modelling through data-driven methods offers efficient approximations of these complex numerical techniques. Neural Operators <cit.>, particularly the Fourier Neural Operator (FNO) <cit.>, have been at the forefront of recent advances, having shown potential to approximate arbitrary continuous functions. However, the computational demand of FNO is particularly high during the training phase, and these neural operators require architectural enhancements to deliver promising results in subsurface problems <cit.>. This is evident in the work of Wen et al. <cit.>, where the integration of FNO with the U-Net architecture showed improved accuracy, speed, and data efficiency in multiphase flow problems. However, Gupta and Brandstetter's work <cit.>, showing that U-Net outperforms FNOs across various fluid mechanics problems, raises a question about the necessity of neural operators when the vanilla U-Net architecture already exhibits remarkable performance. Recently, transformers <cit.> have seen considerable success in various fields, including physical systems <cit.>, for which the datasets are typically smaller compared to other domains. Only one study explores the use of transformers in groundwater modeling <cit.>, demonstrating that the transformer models were outperformed by both GRU and LSTM models at predicting groundwater levels across various stations in France from meteorological and hydrological data. Finally, the integration of U-Net with Transformers, as exemplified in studies like TransUNet <cit.> and ViTO <cit.>, has demonstrated its utility across a broad range of applications, particularly in the field of medical image segmentation and operator learning for inverse PDE problems. Yet, the applicability of these combinations to time-dependent forward problems, real-world data scenarios, and situations with sparse data remains an area yet to be fully explored. Several studies, such as the one by Brakenhoff et al. <cit.>, primarily focus on individual time series when analysing the impact of various hydrological stressors, including pumping rates, precipitation excess, and river stage variations, on groundwater levels of individual monitoring wells.
While this approach provides valuable insights, it does not account for spatial correlations, thereby limiting its use to existing time series or monitoring wells. Similarly, previous comparisons have been predominantly limited to specific models like LSTM, CNNs and NARX in the context of groundwater level forecasting <cit.>, leaving room for broader explorations. In this paper, we present a comprehensive comparison among models—specifically U-Net, U-Net integrated with Vision Transformers (U-Net+ViT), and Fourier Neural Operator (FNO)—for their efficacy in modeling time-dependent forward and inverse problems in groundwater systems. We test our model extensively on synthetic datasets, simulating conditions from the Overbetuwe region in the Netherlands, including sparse data scenarios. We show that both U-Net and U-Net+ViT are particularly well-suited to these important sparse data scenarios, with the addition of the Transformer providing enhanced predictive capability in many cases. § METHODOLOGY §.§ Example of study site and data This subsection provides context and rationale for our study via an example case study based upon the polder region of Overbetuwe in the Netherlands (Figure <ref>). This region showcases the characteristic Dutch system of water management where the area is divided into several polders in a mix of agriculture, nature, and urban environments. Alongside its sparse data and heterogeneous soil, these unique characteristics underscore the inherent complexities of water management in similar settings, making this dataset a suitable choice for our research. The subsoil is primarily composed of clay and sandy clay, with soil properties being determined via borehole and cone penetration tests. The study area features numerous observation wells for monitoring groundwater heads while well fields (indicated as groundwater usage facilities in the figure) are utilized for the extraction of drinking water. The work of Brakenhoff et al. <cit.> considers a dataset consisting of 250 head time series, with daily recordings starting from the year 1990 and drawdown attributed to the extraction from up to four well fields. For the purposes of this study, we employ synthetic data to validate the proposed methodology, with the intention to subsequently apply the validated method to the real-world data of the Overbetuwe region. Figure <ref> represents a sample of the high-fidelity labeled dataset, which is constructed using the U.S. Geological Survey (USGS) finite-difference flow model, MODFLOW. The model is composed of a single-layer representation of a confined aquifer with a 128×128 grid. The aquifer's heterogeneity is reflected through varying horizontal hydraulic conductivity within the bounds k ∈[0.1, 0.5] m/d. The hydraulic conductivity fields in our study are created using random fields which are then thresholded to delineate different classes. A maximum of ten pumping wells are extracting water with variable rates in the range Q ∈[0, 30] m^3/d over a simulation period of T = 10 days. The pumping wells are located in random locations which vary for each sample. The boundary conditions are delineated as Dirichlet, with the head equal to zero, mimicking a polder encircled by ditches where a stable water level is maintained through a comprehensive network of pumping stations. The datasets consist of N_train = 5000 training instances and N_test = 1000 testing instances. To mirror the inherent sparsity of real-world data, a data selection strategy is adopted for the test dataset. 
The locations of the boreholes for estimating the hydraulic conductivity are chosen following a radial distribution pattern, and a helical pattern is used for the wells monitoring hydraulic head (Figure  <ref>). §.§ Architectures The architectures of the three models under comparison in this study encompass the U-Net structure, a U-Net with attention mechanism in the bottleneck, and the Fourier neural operator (FNO). The U-Net architecture is designed with an encoder-decoder structure where the decoder receives the upsampled feature map, which is then concatenated with the corresponding feature map from the encoder through a skip connection. Detailed diagrams of the the U-Net encoder and decoder can be found in Figures <ref> and <ref> in Appendix A. The encoder consists of three bottleneck blocks, where each block utilizes three layers of Conv2d, Instance Normalization, and GELU activation to extract spatial features. These blocks increase the number of channels by a factor of 2 and perform downsampling with a stride of 2. The decoder is composed of a series of upsampling blocks, where each block consists of a bilinear upsampling operation (Upsample), followed by a double convolution operation. Each convolution within the decoderis followed by Instance Normalization and GELU activation function. The bottleneck consists on a single convolutional layer. In the time-dependent scenario, the time series data of the historical pumping rates is processed through two layers of feed-forward neural network (FNN) prior to being concatenated to the input for the latent space representation (Figure <ref>). The second model, here called UNet+ViT, employs the Vision Transformer (ViT) <cit.>, in the latent space representation of the U-Net, as per implementation of TransUNet <cit.> and ViTO <cit.>. The input is tokenized into a sequence of flattened 2D patches, each of size 1×1. Positional information is retained by employing trainable convolutional projection to learn and add specific position embeddings to the patch embeddings. The structure of the Transformer includes L blocks, with each block comprising Multi-Head Attention (MSA) and FNN. This configuration involves the use of 2 blocks, each with 2 Multihead Self-Attentions, and a FNN composed of 128 neurons. For a more detailed visualization of the Vision Transformer, attention block, and multihead attention, please refer to Appendix A, Figure <ref>. The Fourier neural operator (FNO) <cit.> model leverages the fast Fourier Transform to parameterize the integral kernel directly in the Fourier space. The implementation of FNO for the 2D Darcy Flow problem as presented in <cit.> is followed in this study. The total amount of parameters of FNO corresponds to 2.38 million, that is 15 times more than UNet+ViT (151k) and 17 times more than UNet (137k). § RESULTS §.§ Forward problem with sparse observations This section presents the prediction of the hydraulic head at sparse monitoring wells after a constant 10-day pumping period under two different training conditions. We employ distinct sampling strategies for both input and output data in our methodology. Our training data is sampled from a regular quadratic grid, while for testing we have explored other arrangements, such as radial and helical, to understand their potential impact on the prediction performance. In the first scenario, training is conducted using sparse data, with a spacing of 20 grid points for the input hydraulic conductivity field and a spacing of 8 for the output hydraulic head. 
Testing is then carried out on sparse data points, following the radial and helical patterns delineated in subsection <ref>. The resulting root mean square error (RMSE) is found to be 5.2 × 10^-2, 3.5 × 10^-2 and 8.1 × 10^-2 for the vanilla U-Net, the UNet+ViT models and FNO respectively. These results underline the superior performance of the UNet+ViT model in handling sparse data, exhibiting a lower RMSE compared to both the vanilla U-Net and the FNO models. In contrast, when training is performed using the entire field and testing on the same sparse dataset, the error marginally escalates to 3.9 × 10^-1 for FNO, 3.8 × 10^-1 for UNet and 3.6 × 10^-1 for UNet+ViT model. This outcome is anticipated considering the training set exhibits sparsity in the first scenario, but not in the latter. Additionally, Figure <ref> displays the prediction over the entire domain, resulting in a lower RMSE of 1.0 × 10^-2 for FNO, 1.7 × 10^-2 and 1.9 × 10^-2 for the vanilla U-Net and UNet+ViT models, respectively. The FNO model, while superior when dealing with full data, exhibits the highest predictive error under sparse data observations. These results highlight the practical advantages of the U-Net and especially UNet+ViT model in real-world scenarios for which data sparsity is common. It should be noted that traditional simpler neural networks and other machine learning techniques may not provide adequate solutions for this specific problem. This assertion is backed by a comparison of the results from a fully connected neural network, a linear regression model and a random forest, detailed in Appendix <ref>. Despite the substantial number of trainable parameters, reaching 51.17 million, inherent to the fully connected neural network and the application of linear regression and random forest, these methods significantly underperform compared to the U-Net, the UNet+ViT models, and FNO. §.§ Identification of pumping wells In this section, we focus on an inverse problem: specifically the identification of pumping wells. This task requires determining the locations and rates of pumping wells based on the observed hydraulic heads. Throughout these experiments, we employ a single hydraulic conductivity field, which, while spatially varying, remains identical across all samples within the dataset. In evaluating the performance of our models, we use both RMSE and accuracy. The RMSE calculates the average difference between the true and the predicted value for each pump location in the test dataset, giving a quantitative measure of the prediction error. Complementing this, the accuracy was determined by counting the proportion of correct pump predictions, where a prediction is considered correct if the predicted and actual pump locations align. This gives a sense of how often the model correctly identifies the location of pumps. The U-Net model performs optimally, achieving an RMSE of 5.6 × 10^-2. Interestingly, the integration of the Vision Transformer with the U-Net model does not confer any additional precision in this scenario, yielding a near RMSE of 6.1 × 10^-2. The FNO model exhibits a higher RMSE of 1.1 × 10^-1, indicating a somewhat lower accuracy in identifying the pumping well locations. To visually illustrate these results, Figure <ref> presents a test sample using the U-Net + ViT model. It demonstrates an accuracy of 93% in locating the pumps, calculated across the entire test dataset. The figure visualizes the model's ability to accurately identify the positions and the pumping rate of the wells. 
In comparison, the FNO model achieved a notably lower detection accuracy of 79% in the same task. §.§ Example results for time series data This section unveils the results achieved from the analysis of time series data, starting with a simplified scenario, for which the inputs are the varying hydraulic conductivity field and the pumping rate of a single pump which varies over a 10-day simulation period. Results are evaluated in terms of root mean square error (RMSE) with a focus on the comparison of different configurations of the U-Net architecture with transformers. Figure <ref> presents a comparison of results over 5 time frames for the U-Net with the Vision Transformer under autoregressive testing conditions. The RMSE for each method was calculated to quantify the models' performance. The U-Net architecture alone yielded an RMSE of 1.79 × 10^-2. When supplemented with a Vision Transformer, consisting of 2 attention blocks and 2 heads, the performance improves, registering an RMSE of 1.67 × 10^-2. However, increasing the complexity of the Vision Transformer to 8 blocks and 8 heads did not further improve the performance, instead, it led to a slight degradation in the RMSE (1.77 × 10^-2). Adding an Axial Transformer <cit.> to the U-Net architecture also did not enhance the performance, yielding an RMSE of 1.83 × 10^-2. These results suggest that while adding a Vision Transformer to the U-Net architecture leads to performance improvement, increasing the complexity of the latent space does not necessarily do so. § CONCLUSION This paper explores and evaluates the capabilities of different machine learning models, with a particular focus on U-Net, U-Net integrated with Vision Transformers (ViT), and Fourier Neural Operator (FNO), in the context of predicting hydraulic head in groundwater studies. Our analysis and testing, conducted on synthetic datasets designed to simulate the conditions from the Overbetuwe region in the Netherlands and including scenarios with sparse data, firmly establish that both U-Net and U-Net + ViT models are particularly adept at dealing with such tasks. Importantly, these models are also preferred due to their fewer requisite parameters. Specifically, in the case of sparse observation scenarios, the vanilla U-Net and the U-Net + ViT models outperformed the FNO model. In particular the performance of the UNet+ViT model was superior when handling sparse data, highlighting the potential of the model in real-world applications, where data scarcity is a common issue. The U-Net model demonstrated optimal performance in identifying pumping wells. Interestingly, the integration of the Vision Transformer with the U-Net model did not confer any additional accuracy in this scenario. As for the analysis of time series data, supplementing the U-Net architecture with a Vision Transformer improved the model performance, recording an RMSE of 1.67 × 10^-2 compare to 1.79 × 10^-2 of the vanilla U-Net. However, increasing the complexity of the Vision Transformer did not further enhance the model performance, indicating that a more complex architecture does not necessarily yield better results. Future research will involve applying this validated methodology to real-world data, beginning with the Overbetuwe region in the Netherlands. This will offer an opportunity to further validate and refine the model, accounting for the sparsity and uncertainties inherent in real-world data. 
§ BROADER IMPACT The implications of this research span a wide range of potential societal impacts, with a primary focus on improving the efficiency and reliability of groundwater level forecasting. Given that groundwater is a crucial resource for approximately 2.5 billion people worldwide, fulfilling their daily water needs, and a significant source of global irrigation water, the importance of reliable forecasts cannot be overstated. Our work, through enhancing the performance of groundwater numerical models, offers an opportunity to revolutionize the management and distribution of this vital resource. By providing more accurate and data-efficient predictions, we can aid in the formulation of informed and sustainable water management strategies. This is particularly crucial considering the pressing challenges of population growth and climate change. § ACKNOWLEDGEMENTS This work was carried out with support of the Leeds-York-Hull Natural Environment Research Council (NERC) Doctoral Training Partnership (DTP) Panorama under grant NE/S007458/1. Our sincere appreciation is extended to Professor Karniadakis of Brown University. The financial assistance provided by the Leeds Institute of Fluid Dynamics and Deltares, which made possible the research visit to Brown University, is also gratefully acknowledged. Lastly, we would like to express our gratitude to the reviewers. Their critiques and suggestions have greatly enhanced the overall clarity of our work. 99 brakenhoff Brakenhoff, D. A., Vonk, M. A., Collenteur, R. A., Van Baar, M., & Bakker, M. (2022). Application of Time Series Analysis to Estimate Drawdown From Multiple Well Fields. Frontiers in Earth Science, 10. modflow Hughes, J. D., Russcher, M. J., Langevin, C. D., Morway, E. D., & McDonald, R. R. (2022). The MODFLOW Application Programming Interface for simulation control and software interoperability. Environmental Modelling & Software, 148. gupta2022multispatiotemporalscale Gupta, J. K., & Brandstetter, J. (2022). Towards Multi-spatiotemporal-scale Generalized PDE Modeling. arXiv preprint arXiv:2209.15616. li2020fourier Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., & Anandkumar, A. (2020). Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895. vito Ovadia, O., Kahana, A., Stinis, P., Turkel, E., & Karniadakis, G. E. (2023). ViTO: Vision Transformer-Operator. arXiv preprint arXiv:2303.08891. transunet Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., ... & Zhou, Y. (2021). TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. arXiv preprint arXiv:2102.04306. WEN2022104180 Wen, G., Li, Z., Azizzadenesheli, K., Anandkumar, A., & Benson, S. M. (2022). U-FNO—An enhanced Fourier neural operator-based deep-learning model for multiphase flow. Advances in Water Resources, 163. DINO_loket DINO loket. (2023). Retrieved from https://www.dinoloket.nl/en/subsurface-data li2023transformer Li, Z., Meidani, K., & Farimani, A. B. (2023). Transformer for Partial Differential Equations' Operator Learning. arXiv preprint arXiv:2205.13671. cao2021choose Cao, S. (2021). Choose a Transformer: Fourier or Galerkin. arXiv preprint arXiv:2105.14995. dosovitskiy2021image Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., ... & Houlsby, N. (2021). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv preprint arXiv:2010.11929. 
ronneberger2015unet Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv preprint arXiv:1505.04597. wen2022ufno Wen, G., Li, Z., Azizzadenesheli, K., Anandkumar, A., & Benson, S. M. (2022). U-FNO – An enhanced Fourier neural operator-based deep-learning model for multiphase flow. arXiv preprint arXiv:2109.03697. francestudy Mellouli, N., Rabah, M. L., & Farah, I. R. (2022). Transformers-based time series forecasting for piezometric level prediction. In 2022 IEEE International Conference on Evolving and Adaptive Intelligent Systems (EAIS). Wunsch_comparison Wunsch, A., Liesch, T., & Broda, S. (2021). Groundwater level forecasting with artificial neural networks: a comparison of long short-term memory (LSTM), convolutional neural networks (CNNs), and non-linear autoregressive networks with exogenous input (NARX). Hydrology and Earth System Sciences, 25(3), 1671-1687. jiang2023fouriermionet Jiang, Z., Zhu, M., Li, D., Li, Q., Yuan, Y. O., & Lu, L. (2023). Fourier-MIONet: Fourier-enhanced multiple-input neural operators for multiphase modeling of geological carbon sequestration. arXiv preprint arXiv:2303.04778. ho2019axial Ho, J., Kalchbrenner, N., Weissenborn, D., & Salimans, T. (2019). Axial Attention in Multidimensional Transformers. arXiv preprint arXiv:1912.12180. seidman2022nomad Seidman, J. H., Kissas, G., Perdikaris, P., & Pappas, G. J. (2022). NOMAD: Nonlinear Manifold Decoders for Operator Learning. arXiv preprint arXiv:2206.03551. deeponet Lu, L., Jin, P., Pang, G., Zhang, Z., & Karniadakis, G. E. (2021). Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3), 218-229. vaswani2017attention Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention Is All You Need. arXiv preprint arXiv:1706.03762. § APPENDIX A This appendix provides detailed diagrams of the model structures. § APPENDIX B This appendix sets out to examine whether simpler machine learning models, specifically a fully connected neural network, a linear regression model, and a Random Forest model, can achieve the same level of accuracy as more advanced models like the U-Net, the UNet+ViT models, and FNO in predicting groundwater levels. The particular Random Forest model tested here used 30 estimators. The fully connected neural network, employed for this comparison, comprises three hidden layers, each containing 1000 nodes and using ReLU activation functions. The model holds an impressive count of 51.17 million trainable parameters. Unfortunately, none of the models was able to predict accurately the groundwater levels neither capturing the location of the wells. Specifically, the fully connected neural network and the linear regression model yielded high RMSEs of 1.17 × 10^-1 and 1.24 × 10^-1, respectively. The Random Forest model fared slightly better, achieving a lower RMSE of 1.02 × 10^-1, but it still fell short of the U-Net, the UNet+ViT models, and FNO. Figure <ref> visually contrasts the predictions of these simpler models gainst the ground truth. Their significant underperformance becomes evident when compared to more sophisticated models. For a comparison of these results with accurate outcomes produced by the UNet+ViT model, the reader is directed to Figure <ref>.
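For readers who prefer code to diagrams, the block below gives a minimal PyTorch sketch of the U-Net+ViT structure described in the Architectures subsection: three stride-2 encoder blocks of Conv2d, Instance Normalization and GELU, a Vision Transformer bottleneck with 2 attention blocks of 2 heads and a 128-unit feed-forward layer, and a bilinear-upsampling decoder with skip connections. It is an illustrative reconstruction rather than the authors' code; the channel widths, the placement of the stride-2 convolution, the simple additive positional embedding and the output head are assumptions.

import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Three Conv2d + InstanceNorm + GELU layers; doubles channels, downsamples by 2."""
    def __init__(self, c_in, c_out):
        super().__init__()
        layers = []
        for i in range(3):
            stride = 2 if i == 2 else 1          # stride-2 in the last conv is an assumption
            layers += [nn.Conv2d(c_in if i == 0 else c_out, c_out, 3, stride=stride, padding=1),
                       nn.InstanceNorm2d(c_out), nn.GELU()]
        self.block = nn.Sequential(*layers)
    def forward(self, x):
        return self.block(x)

class ViTBottleneck(nn.Module):
    """Tokenise the latent map into 1x1 patches and apply a small Transformer encoder."""
    def __init__(self, channels, n_tokens, depth=2, heads=2, ffn=128):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, channels))   # learned position embedding
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=heads,
                                           dim_feedforward=ffn, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
    def forward(self, x):                        # x: (B, C, H, W)
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (B, H*W, C)
        tokens = self.transformer(tokens + self.pos)
        return tokens.transpose(1, 2).reshape(B, C, H, W)

class DecoderBlock(nn.Module):
    """Bilinear upsampling followed by a double convolution; consumes a skip connection."""
    def __init__(self, c_in, c_skip, c_out):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Sequential(
            nn.Conv2d(c_in + c_skip, c_out, 3, padding=1), nn.InstanceNorm2d(c_out), nn.GELU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.InstanceNorm2d(c_out), nn.GELU())
    def forward(self, x, skip):
        return self.conv(torch.cat([self.up(x), skip], dim=1))

class UNetViT(nn.Module):
    def __init__(self, c_in=1, c_out=1, base=16, grid=128):
        super().__init__()
        self.e1 = EncoderBlock(c_in, base)          # 128 -> 64
        self.e2 = EncoderBlock(base, 2 * base)      # 64 -> 32
        self.e3 = EncoderBlock(2 * base, 4 * base)  # 32 -> 16
        self.vit = ViTBottleneck(4 * base, (grid // 8) ** 2)
        self.d3 = DecoderBlock(4 * base, 2 * base, 2 * base)
        self.d2 = DecoderBlock(2 * base, base, base)
        self.head = nn.Sequential(nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                                  nn.Conv2d(base, c_out, 3, padding=1))
    def forward(self, x):
        s1 = self.e1(x)
        s2 = self.e2(s1)
        s3 = self.e3(s2)
        z = self.vit(s3)
        z = self.d3(z, s2)
        z = self.d2(z, s1)
        return self.head(z)

model = UNetViT()
print(model(torch.randn(2, 1, 128, 128)).shape)     # torch.Size([2, 1, 128, 128])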
http://arxiv.org/abs/2307.04755v1
20230710175732
Information decomposition to identify relevant variation in complex systems with machine learning
[ "Kieran A. Murphy", "Dani S. Bassett" ]
cs.LG
[ "cs.LG", "cond-mat.soft", "cs.IT", "math.IT", "physics.data-an" ]
Dept. of Bioengineering, School of Engineering & Applied Science, Dept. of Bioengineering, School of Engineering & Applied Science, Dept. of Electrical & Systems Engineering, School of Engineering & Applied Science, Dept. of Neurology, Perelman School of Medicine, Dept. of Psychiatry, Perelman School of Medicine, Dept. of Physics & Astronomy, College of Arts & Sciences, University of Pennsylvania, Philadelphia, PA 19104, USA The Santa Fe Institute, Santa Fe, NM 87501, USA To whom correspondence should be addressed: [email protected] One of the fundamental steps toward understanding a complex system is identifying variation at the scale of the system's components that is most relevant to behavior on a macroscopic scale. Mutual information is a natural means of linking variation across scales of a system due to its independence of the particular functional relationship between variables. However, estimating mutual information given high-dimensional, continuous-valued data is notoriously difficult, and the desideratum—to reveal important variation in a comprehensible manner—is only readily achieved through exhaustive search. Here we propose a practical, efficient, and broadly applicable methodology to decompose the information contained in a set of measurements by lossily compressing each measurement with machine learning. Guided by the distributed information bottleneck as a learning objective, the information decomposition sorts variation in the measurements of the system state by relevance to specified macroscale behavior, revealing the most important subsets of measurements for different amounts of predictive information. Additional granularity is achieved by inspection of the learned compression schemes: the variation transmitted during compression is composed of distinctions among measurement values that are most relevant to the macroscale behavior. We focus our analysis on two paradigmatic complex systems: a Boolean circuit and an amorphous material undergoing plastic deformation. In both examples, specific bits of entropy are identified out of the high entropy of the system state as most related to macroscale behavior for insight about the connection between micro- and macro- in the complex system. The identification of meaningful variation in data, with the full generality brought by information theory, is made practical for the study of complex systems. Information decomposition to identify relevant variation in complex systems with machine learning Dani S. Bassett Version of June 20, 2023 =================================================================================================== A complex system is a system of interacting components where some sense of order present at the scale of the system is not apparent, or even conceivable, from the observations of single components <cit.>. A broad categorization, it includes many systems of relevance to our daily lives, from the economy to the internet and from the human brain to artificial neural networks <cit.>. Before attempting a reductionist description of a complex system, one must first identify variation in the system that is most relevant to emergent order at larger scales. The notion of relevance can be formalized with information theory, wherein mutual information serves as a general measure of statistical dependence to connect variation across different scales of system behavior <cit.>. 
Information theory and complexity science have a rich history; information theory commonly forms the foundation of definitions of what it means to be complex <cit.>. Machine learning is well-suited for the analysis of complex systems, grounded in its natural capacity to identify patterns in high dimensional data <cit.>. However, distilling insight from a successfully trained model is often infeasible due to a characteristic lack of interpretability of machine learning models <cit.>. Restricting to simpler classes of models, for example linear combinations of observables, recovers a degree of interpretability at the expense of functional expressivity <cit.>. For the study of complex systems, such a trade-off is unacceptable if the complexity of the system is no longer faithfully represented. In this work, we do not attempt to explain the relationship between microscale and macroscale, and are instead interested in identifying the information contained in microscale observables that is most predictive of macroscale behavior—independent of functional relationship. We employ a recent method from interpretable machine learning that identifies the most relevant information in a set of measurements <cit.>. Based on the distributed information bottleneck <cit.>, a variant of the information bottleneck (IB) <cit.>, the method lossily compresses a set of measurements while preserving information about a relevance quantity. Optimization serves to decompose the information present in the measurements, providing a general-purpose method to identify the important variation in composite measurements of complex systems. Identifying important variation is a powerful means of analysis of complex systems, as we demonstrate on two paradigmatic examples. First we study a Boolean circuit, whose fully-specified joint distribution and intuitive interactions between variables facilitate understanding of the information decomposition found by the distributed IB. Boolean circuits are networks of binary variables that interact through logic functions, serving as the building blocks of computation <cit.> and as elementary models of gene control networks <cit.>. Second, we decompose the information contained in the local structure of an amorphous material subjected to global deformation. Amorphous materials are condensed matter systems composed of simple elements (e.g., atoms or grains) that interact via volume exclusion and whose disorder gives rise to a host of complex macroscale phenomena, such as collective rearrangement events spanning a wide range of magnitudes <cit.> and nontrivial phase transitions <cit.>. Although the state space that describes all of the degrees of freedom is large, as is generally true of complex systems, the proposed method is able to identify important bits of variation by partitioning entropy and leveraging machine learning to process the high dimensional data. § METHODS Mutual information is a measure of statistical dependence between two random variables X and Y that is independent of the functional transformation that relates X and Y (in contrast to linear correlation, for example, which measures the degree to which two variables are linearly related). Mutual information is defined as the entropy reduction in one variable after learning the value of the other <cit.>, I(X;Y) = H(Y) - H(Y|X), with H(X)=𝔼_x∼ p(x)[-log p(x)] Shannon's entropy <cit.>. 
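For discrete variables the definition above is a short computation once the joint distribution is available. The following NumPy sketch (entropies in bits; the joint distribution is a toy AND gate, not one of the systems studied here) makes the decomposition I(X;Y)=H(Y)-H(Y|X) concrete.

import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(p_xy):
    """I(X;Y) = H(Y) - H(Y|X) in bits, for a joint distribution p_xy[x, y]."""
    p_x = p_xy.sum(axis=1)
    p_y = p_xy.sum(axis=0)
    H_y = entropy(p_y)
    # H(Y|X) = sum_x p(x) H(Y | X = x)
    H_y_given_x = sum(p_x[i] * entropy(p_xy[i] / p_x[i]) for i in range(len(p_x)) if p_x[i] > 0)
    return H_y - H_y_given_x

# example: Y = X1 AND X2 with X1, X2 uniform; I((X1, X2); Y) = H(Y) ~ 0.811 bits
p_xy = np.zeros((4, 2))                 # rows enumerate (x1, x2); columns are y
for x1 in (0, 1):
    for x2 in (0, 1):
        p_xy[2 * x1 + x2, x1 & x2] = 0.25
print(mutual_information(p_xy))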
The distributed information bottleneck is an optimization objective to extract the information most relevant to a variable Y from a composite measurement: a random vector X = (X_1, ..., X_N) <cit.>. Each component X_i undergoes lossy compression to an auxiliary variable U_i=f(X_i), and then the compressed variables U=(U_1, ..., U_N) are used to predict the output Y. Minimization of the distributed IB Lagrangian, ℒ_DIB = β∑_i=1^N I(U_i;X_i) - I(U;Y), extracts the entropy (or information) in X that is most descriptive of Y. By sweeping over the magnitude of the bottleneck strength β, a continuous spectrum of approximations to the relationship between X and Y is found. The optimized compression schemes for each component of X reveal the amount of relevant information and the specific entropy selected for every level of approximation. In place of Eqn. <ref>, variational bounds on mutual information have been developed that are amenable to data and machine learning <cit.>. The lossy compression schemes are parameterized by neural networks that encode data points to probability distributions in a continuous latent space. Transmitted information is upper bounded by the expectation of the Kullback-Leibler divergence <cit.> between the encoded distributions and an arbitrary prior distribution, identical to the process of information restriction in a variational autoencoder <cit.>. Over the course of training, the amount of information conveyed by each compression scheme I(U_i;X_i) is estimated using bounds derived in Ref. <cit.>. Although mutual information is generally difficult to estimate from data <cit.>, compressing the partial measurements X_i separately isolates the information such that the amount of mutual information is small enough to allow precise estimates, with the interval between bounds on the order of 0.01 bits. Details about mutual information estimation are in Appendix A. § RESULTS Boolean circuit. A Boolean circuit (Fig. <ref>a) was constructed with ten binary inputs X=(X_1,...,X_10) and a binary output Y. Assuming a uniform distribution over inputs, the truth table specifies the joint distribution p(x_1,...,x_10,y), and the interactions between inputs are prescribed by a wiring of logical , , and gates. An information bottleneck was distributed to every input X_i to monitor from where the predictive information originated via compressed variables U_i (Fig. <ref>b). We trained a multilayer perceptron (MLP) to learn the relationship between the lossy compressions U and Y. Over the course of a single training run, the coefficient of the information bottleneck strength β was swept to obtain a spectrum of predictive models. The distributed information plane (Fig. <ref>c) <cit.> displays the predictive power as a function of the total information about the inputs ∑ I(U_i;X_i). The predictive performance ranged from zero predictive information without any information about the inputs (Fig. <ref>c, lower left) to all entropy H(Y) accounted for by utilizing all ten bits of input information (Fig. <ref>c, upper right). For every point on the spectrum there was an allocation of information over the inputs; the distributed IB objective identified the information across all inputs that was most predictive. The most predictive information about Y was found to reside in X_3—the input that routes through the fewest gates to Y—and then in the pair X_3,X_10, and so on. 
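To make the objective concrete, the following TensorFlow sketch assembles a distributed-IB-style loss: one small encoder per measurement X_i outputs a Gaussian in latent space, the analytic KL divergence to a standard normal prior serves as the proxy for I(U_i;X_i), and the concatenated samples are decoded to predict a binary Y. This is a minimal sketch under our own assumptions (layer sizes, ReLU activations, and the binary cross-entropy readout are illustrative), not the authors' released implementation; their implementation details are given in Appendix B.

import tensorflow as tf

N_FEATURES, LATENT_DIM = 10, 32   # e.g., ten Boolean-circuit inputs (assumed sizes)

# One encoder per measurement X_i, producing the mean and log-variance of U_i.
encoders = [tf.keras.layers.Dense(2 * LATENT_DIM) for _ in range(N_FEATURES)]
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1),                      # logit for a binary Y
])
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def distributed_ib_loss(x, y, beta):
    """x: (batch, N_FEATURES); y: (batch, 1). Returns beta * sum_i KL_i + prediction loss."""
    kl_total, codes = 0.0, []
    for i, enc in enumerate(encoders):
        stats = enc(x[:, i:i + 1])
        mu, logvar = tf.split(stats, 2, axis=-1)
        # Analytic KL( N(mu, sigma^2) || N(0, 1) ): the variational proxy for I(U_i; X_i).
        kl = 0.5 * tf.reduce_sum(tf.square(mu) + tf.exp(logvar) - logvar - 1.0, axis=-1)
        kl_total += tf.reduce_mean(kl)
        # Reparameterized sample u_i ~ N(mu, sigma^2).
        codes.append(mu + tf.exp(0.5 * logvar) * tf.random.normal(tf.shape(mu)))
    y_logit = decoder(tf.concat(codes, axis=-1))
    return beta * kl_total + bce(y, y_logit)

In the experiments of this paper the bottleneck strength β is swept over several orders of magnitude during a single training run; in this sketch it is simply an argument to the loss.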
Powered by machine learning, we traversed the space of lossy compression schemes of X_i, decomposing the information contained in the circuit inputs about the output. Included in the space of compression schemes is information transmitted about each of the 2^10 discrete subsets of the inputs. To be concrete, there are ten subsets of a single input, 45 pairs of inputs, and so on, with each subset sharing mutual information with Y based on the role of the specific inputs inside the circuit. Fig. <ref>d displays the information contained in every discrete subset of inputs (black points) along with the continuous trajectory found by optimization of the distributed IB (gray curve). The distributed IB, maximizing predictive information while minimizing information taken from the inputs, closely traced the upper boundary of discrete information allocations and identified a majority of the most informative subsets of inputs. To decompose the information in the circuit's inputs required only a single sweep with the distributed IB, not an exhaustive search through all subsets of inputs. We note that the product of the distributed IB is not an ordering of single variable mutual information terms I(X_i;Y), which would be straightforward to calculate, but instead the ordering of information selected from all of X that is maximally informative about Y. Decomposing structural information in a physical system. Linking structure and dynamics in amorphous materials—complex systems consisting of particles that interact primarily through volume exclusion—has been a longstanding challenge in physics <cit.>. Searching for signatures of collective behavior in the multitude of microscopic degrees of freedom is an endeavor emblematic of complex systems more generally and one well-suited for machine learning and information theory. We accept that the functional relationship between the micro- and macroscale variation is potentially incomprehensible, and are instead interested in the information at the microscale that is maximally predictive of behavior at the macroscale. While prior work has analyzed the information content of hand-crafted structural descriptors individually <cit.>, the distributed IB searches through the space of information from many structural measurements in combination. Two-dimensional simulated glasses, prepared by either rapid or gradual quenching and composed of small (type A) and large (type B) particles that interact with a Lennard-Jones potential, were subjected to global shear deformation <cit.>. Local regions were identified as the origins of imminent rearrangement and paired with negative samples from elsewhere in the system to create a binary classification dataset. We first considered a scheme of measurements of the microscale structure that has been associated with plastic rearrangement in a variety of amorphous systems: the densities of radial bands around the center of a region <cit.>. By training a support vector machine (SVM) to predict rearrangement based on the radial density measurements, a linear combination of the values is learned. In the literature, that combination is commonly referred to as softness, and has proven to be a useful local order parameter <cit.>. We approached the same prediction task from an information theoretic perspective, seeking the specific bits of variation in the density measurements that are most predictive of collective rearrangement. 
Each radial density measurement underwent lossy compression by its own neural network before all compressions were concatenated and used as input to an MLP to predict rearrangement. By sweeping β, a single optimization recovered a sequence of approximations, each allocating a limited amount of information across the 100 density measurements to be most predictive of imminent rearrangement (Fig. <ref>). The trajectories in the distributed information plane, for both gradually and rapidly quenched glasses, reflect the growth of predictive information and prediction accuracy given maximally predictive information about the radial densities (Fig. <ref>a,c). With only one bit of information from the density measurements, 71.8% predictive accuracy was achieved for the gradually quenched glass and 69.5% was achieved for the rapidly quenched glass; with twenty bits, the accuracy jumped to 91.3% and 85.4%, respectively. Beyond twenty bits of density information, the predictive accuracy became comparable to that of the support vector machine, which can utilize all of the continuous-valued density measurements for prediction with a linear relationship. For every point along the trajectory, information was identified from the density measurements that, together, formed the combination of bits that were most predictive of rearrangement (Fig. <ref>b,d). The majority of the information was selected from smaller radii (close to the center of the region), which can be expected given the localized nature of rearrangement events <cit.>. Less intuitive is the information decomposition as it relates to the radial distribution functions g_AA(r) and g_AB(r), the system-averaged radial densities of type A and B particles in regions with a type A particle at the center. For both glasses, the most predictive bits originated in the low density radial bands nearest the center. As more information was incorporated into the prediction, the additional bits came from radial bands that corresponded to particular features of g_AA(r) and g_AB(r). Outside of the first low density trough, the selected information often came from the high density radii of type A particles and the low density radii of type B particles; this trend held true for both glasses. While the information decomposition highlighted similar features in both glasses, the more pronounced structure of selected information out to larger radii for the gradually quenched glass is indicative of its higher structural regularity, which is also seen in the pronounced features of its radial distribution functions g_AA(r) and g_AB(r). The amount of information utilized from each density measurement was predominantly a single bit or less. Of the ways to compress the infinite entropy of a continuous-valued density to a single bit, what was the specific variation extracted from each density measurement? Through inspection of the learned compression schemes, the extracted information can be further decomposed by the degree of distinctions between density values that were preserved for the predictive model (Fig. <ref>a) <cit.>. The single most important bit of information for the gradually quenched glass was a composition of partial bits from multiple density measurements, mostly arising from the first low-density shell of each type of particle (Fig. <ref>b). 
For both measurements, the compression scheme acted as a threshold on the range of possible density values: values less than a cutoff ρ^' were indistinguishable from each other for the purposes of prediction and were partially distinguishable from density values above the cutoff. By examining the distribution of density values in these radial shells, we see that the cutoff values leverage the separability of the density distributions when conditioned on rearrangement. With more information utilized for prediction, some of the compression schemes differed from simple thresholds (shown for the rapidly quenched glass in Fig. <ref>c). For the predictive model operating with a total of twenty bits of density information, two density measurements contributed more than a bit each. The learned compression of the first high-density shell of type A particles essentially counted the number of particles in the shell, with distinguishability between densities as if there were several thresholds over the range of the values that act to roughly discretize the density measurement. Information decomposition with the distributed IB depends upon the particular scheme used to measure the system <cit.>. In the study of complex systems, there can be multiple `natural' schemes of measuring a system state. Density measurements of radial bands lead to an essentially linear relationship between structure and rearrangement <cit.>; what if we had not inherited such a fortuitous measurement scheme? Another natural basis of measurements is the position of all of the particles (Fig. <ref>a). In contrast to radial density measurements, per-particle measurements lack a canonical ordering; accordingly, we used a permutation-invariant transformer architecture for the predictive model <cit.>. Every particle position was transmitted in parallel through a single compression channel, rather than through a uniquely learned compression scheme per measurement as before. An analogue of the distributed IB task is to write a note for each particle in the region with the goal to predict whether the region will rearrange. Under a constraint on time or effort, more careful notes would be taken for the informative particles, while less careful notes would be taken for the rest. The per-particle measurement scheme imposed no structure on the selection of configurational information. Nevertheless, we found that the information cost per particle as a function of the position in the neighborhood had a radial structure (Fig. <ref>b). The information per particle was highest in the low density radial bands near the center of the region (Fig. <ref>c), and inspection of the compression scheme indicated that negligible azimuthal information was transmitted (Fig. <ref>d). The information decomposition allowed for similar insights to be derived as in the radial density measurement scheme, even though the nature of the predictive model in the two cases was substantially different. Additionally, because the distributed IB operates entirely on the input side of an arbitrary predictive model, the information analysis was agnostic to whether the model was a simple fully connected network or a more complicated transformer architecture. § DISCUSSION A universal challenge faced when studying complex systems, fundamental to what makes a system complex, is the abundance of entropy from the perspective of the microscale that obscures relevant information about macroscale behavior. 
The generality of mutual information as a measure of statistical relatedness, and the expressivity of deep learning when handling high-dimensional data, allow the distributed IB to be as readily utilized to identify structural defects relevant to a given material property as it is to reveal gene variation relevant to a given affliction. Tens, hundreds, and potentially thousands of measurements of a complex system are handled simultaneously, rendering practical analyses that would have previously been infeasible through exhaustive search or severely limited by constraints on functional relationships between variables. Information theory has long held appeal for the analysis of complex systems owing to the generality of mutual information <cit.>. However, the estimation of mutual information from data is fraught with difficulties <cit.>, which have hindered information theoretic analyses of data from complex systems. By distributing information bottlenecks across multiple partial measurements of a complex system, entropy is partitioned to a degree that makes precise estimation of mutual information possible while simultaneously revealing the most important combinations of bits for insight about the system. Machine learning navigates the space of lossy compression schemes for each variable and allows the identification of meaningful variation without consideration of the black box functional relationship found by the predictive model. Instead of compressing partial measurements in parallel, the information bottleneck <cit.> extracts the relevant information from one random variable in its entirety about another, and is foundational to many works in representation learning <cit.>. In the physical sciences, the IB has been used to extract relevant degrees of freedom with a theoretical equivalence to coarse-graining in the renormalization group <cit.>, and to identify useful reaction coordinates in biomolecular reactions <cit.>. However, the IB has limited capacity to find useful approximations, particularly when the relationship between X and Y is deterministic (or nearly so) <cit.>. Much of the spectrum of learned approximations is the trivial noisy rendition of a high-fidelity reconstruction <cit.>. Additionally, compression schemes found by IB are rarely interpretable because the singular bottleneck occurs after processing the complete input, allowing the compression scheme to involve arbitrarily complex relationships between components of the input without penalty. The distribution of information bottlenecks is critical to an interpretable information decomposition, and to accurately estimating the necessary mutual information terms. A growing body of literature focuses on a fundamentally different route to decompose the information contained in multiple random variables {X_i} about a relevant random variable Y; that alternative route is partial information decomposition (PID) <cit.>. Although there is no consensus on how to achieve PID in practice, its goal is to account for the mutual information between {X_i} and Y in terms of subsets of {X_i}, by analogy to set theory <cit.>. PID allocates information to the input variables in their entirety, whereas the distributed IB selects partial entropy from the input variables in the form of lossy compression schemes, with one scheme per variable. 
While PID has been proposed as an information theoretic route to study complex systems <cit.> and to quantify complexity <cit.>, the super-exponential growth of PID terms renders the methodology rather impractical. There are 5 × 10^22 PID terms for a Boolean circuit with 8 inputs <cit.>, and the number of terms for the simple 10-input circuit from Fig. <ref> is not known <cit.>. By contrast, the distributed IB offers a pragmatic route to the decomposition of information in a complex system: it is amenable to machine learning and data, and can readily process one hundred (continuous) input variables, as in the amorphous plasticity experiments.

§ ACKNOWLEDGEMENTS

We gratefully acknowledge Sam Dillavou and Zhuowen Yin for helpful discussions and comments on the manuscript, and Sylvain Patinet for the amorphous plasticity data.

§ CODE AVAILABILITY

The full code base has been released on GitHub and may be found at https://distributed-information-bottleneck.github.io. Every analysis included in this work can be repeated from scratch with the corresponding Google Colab iPython notebook in https://github.com/distributed-information-bottleneck/distributed-information-bottleneck.github.io/tree/main/colab

§ DATA AVAILABILITY

The train and validation splits of the amorphous plasticity data, consisting of local neighborhoods that were subsequently "measured" as radial densities (Figs. <ref>, <ref>) or as per-particle descriptors (Fig. <ref>), can be found through the project page and can be downloaded here: https://drive.google.com/drive/folders/1vzWSv_4dE4VyjAXbLrZtcbuV1R6igFEE. The full dataset with all particle locations before and after all events is available with the permission of the authors of Ref. <cit.>.

§ APPENDIX A: MUTUAL INFORMATION BOUNDS

The full method presented in this work requires us to bound the mutual information for high-dimensional data; obtaining such bounds is notoriously difficult <cit.>. Fortunately, several factors work in our favor to facilitate optimization with machine learning and the recovery of tight bounds on the information transmitted by the compression channels U_i. Optimizing the distributed information bottleneck objective (Eqn. <ref>) requires an upper bound on I(U_i;X_i) and a lower bound on I(U;Y). The (distributed) variational information bottleneck objective <cit.> upper bounds I(U_i;X_i) with the expectation of the Kullback-Leibler (KL) divergence between the encoded distributions p(u_i|x_i) and an arbitrary prior distribution r(u_i) in latent space, I(U_i;X_i) ≤ 𝔼_{x_i ∼ p(x_i)}[D_KL(p(u_i|x_i) || r(u_i))]. Normal distributions are used for both the encoded distribution, p(u_i|x_i) = 𝒩(μ=f_μ(x_i), σ=f_σ(x_i)), and the prior, r(u_i)=𝒩(0, 1), so that the KL divergence has a simple analytic form. Over the course of training, the KL divergence is computed for each channel U_i, thereby providing a proxy quantity for the amount of information that is contained in the compression scheme. Although the KL divergence can be used for a qualitative sense of information allocation to features <cit.>, it is a rather poor estimate of the mutual information. Because the encoded distributions p(u_i|x_i) have a known form, we can use the noise contrastive estimation (InfoNCE) lower bound and the "leave one out" upper bound from Ref. <cit.> with a large number of samples to obtain tight bounds on the amount of mutual information in the learned compression schemes.
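As an illustration of how these sample-based bounds can be evaluated (the precise expressions are written out in the equations that follow), the numpy sketch below assumes each data point is encoded to an isotropic Gaussian p(u|x) = N(mu, sigma^2 I), as in the bound test described later in this appendix. It is our own illustrative reimplementation; function and variable names are assumptions and the released code base should be consulted for the version used in the paper.

import numpy as np
from scipy.special import logsumexp

def gaussian_log_prob(u, mu, sigma):
    """log p(u_i | x_j) for all pairs: entry [i, j] of the returned (K, K) array."""
    d = u.shape[-1]
    sq = np.sum((u[:, None, :] - mu[None, :, :]) ** 2, axis=-1)
    return -0.5 * sq / sigma**2 - 0.5 * d * np.log(2 * np.pi * sigma**2)

def infonce_and_leave_one_out(mu, sigma=1.0, seed=0):
    """Return (lower, upper) bounds on I(U;X) in bits for one batch of K encoded points."""
    rng = np.random.default_rng(seed)
    K, d = mu.shape
    u = mu + sigma * rng.standard_normal((K, d))      # u_i ~ p(u | x_i)
    log_p = gaussian_log_prob(u, mu, sigma)
    diag = np.diag(log_p)
    # InfoNCE lower bound: denominator averages over all K candidates.
    lower = np.mean(diag - logsumexp(log_p, axis=1) + np.log(K))
    # "Leave one out" upper bound: denominator excludes the matching candidate.
    mask = ~np.eye(K, dtype=bool)
    loo = np.array([logsumexp(log_p[i][mask[i]]) for i in range(K)])
    upper = np.mean(diag - loo + np.log(K - 1))
    return lower / np.log(2), upper / np.log(2)

Averaging the returned values over many sampled batches reduces the variance of both bounds, as is done for the estimates reported in this work.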
The lower and upper bounds on I(U_i;X_i) are based on likelihood ratios at points sampled from the dataset, x_i ∼ p(x_i), and from the corresponding conditional distributions, u_i ∼ p(u_i|x_i). To be specific, the mutual information for each channel U = f(X) (dropping channel indices for simplicity) is lower bounded by

I(U;X) \geq \mathbb{E}\left[ \frac{1}{K} \sum_{i=1}^{K} \log \frac{p(u_i|x_i)}{\frac{1}{K}\sum_{j=1}^{K} p(u_i|x_j)} \right]

and upper bounded by

I(U;X) \leq \mathbb{E}\left[ \frac{1}{K} \sum_{i=1}^{K} \log \frac{p(u_i|x_i)}{\frac{1}{K-1}\sum_{j \neq i} p(u_i|x_j)} \right].

The expectation values in both equations are taken over samples {u_i, x_i}_{i=1}^{K} of size K extracted repeatedly from the joint distribution p(u,x) = p(x)p(u|x). We estimated with as large an evaluation batch size K as feasible given memory and time considerations, and then averaged over multiple batches to reduce the variance of the bound. Evaluation with a batch size of 1024, averaged over 8 draws, yielded an interval between bounds on the order of 0.01 bits for the Boolean circuit and glass data. The size of the validation dataset for the glass and the size of the truth table of the Boolean circuit were both on the order of one thousand points. Hence, the benefit of averaging comes from repeated sampling of the latent representations.

We show in Fig. <ref> the performance of the mutual information bounds for compression schemes that encode up to several bits of information. X is a discrete random variable that is uniformly distributed over its support and has one to six bits of entropy; for each X a fixed dataset of size 1024 was sampled for mutual information estimation according to the following method of compression. Each outcome x was encoded to a normal distribution with unit variance in 32-dimensional space, p(u|x) = 𝒩(μ, 1). The encoded distributions were placed along orthogonal axes a distance d from the origin; in the limits d = 0 and d ≫ 1, the information transmitted by the compression scheme is 0 and H(X), respectively. A Monte Carlo estimate of the mutual information sampled 2 × 10^5 points from p(u,x) to compute 𝔼_{p(u,x)}[log p(u|x)/p(u)]. The "leave one out" upper and InfoNCE lower bounds were computed with different evaluation batch sizes K, and averaged over 4096 sampled batches. The standard deviation of the bounds is displayed as the shaded region around each trace, and is left out of the plots for the residual (the difference between the bound and the Monte Carlo estimate) for all but the evaluation batch size of 1024. When the information contained in the compression is less than about two bits, as was the case for the majority of the experiments of the main text, the bounds are tight in expectation for even the smallest evaluation batch size. The variance is reducible by averaging over multiple batches. As the transmitted information grows, the benefit of increasing the evaluation batch size grows more pronounced, though bounds with a range of less than 0.1 bits can still be achieved for up to six bits of transmitted information.

§.§ Information transmitted per particle

For the per-particle measurement scheme on the amorphous plasticity data, a single compression channel U was used for all particles. The information conveyed by the channel, I(U;X), may be estimated as above, with X being the particle position and type. Note that we are particularly interested in the information cost for specific particle positions and for each particle type. The outer summation of the bounds (Eqns. <ref> and <ref>) serves to average over the measurement outcomes x_i in a random sample; we use the summand corresponding to {x_i, u_i} as the information contribution for the specific outcome x_i. To generate the information heatmaps of Fig. <ref>b, we randomly sampled 512 neighborhoods from the dataset, corresponding to an evaluation batch size K = 512 neighborhoods × 50 particles/neighborhood = 25,600 particles (data points), and averaged over 100 such batches. A probe particle with specified particle type and position (one for each point in the grid) was inserted into the batch, and then the corresponding summand for the lower and upper information bounds served to quantify the information transmitted per particle. To be specific,

I(X = x; U) \geq \mathbb{E}\left[ \log \frac{p(u|x)}{\frac{1}{K}\sum_{j=1}^{K} p(u|x_j)} \right],

with the expectation taken over u ∼ p(u|x) and samples {x_i}_{i=1}^{K} ∼ ∏_{i}^{K} p(x). The upper bound differed only by the inclusion of the distribution p(u|x) corresponding to the probe point in the denominator's sum.

§ APPENDIX B: IMPLEMENTATION SPECIFICS

All experiments were implemented in TensorFlow and run on a single computer with a 12 GB GeForce RTX 3060 GPU. Computing mutual information bounds repeatedly throughout an optimization run contributed the most to running time. Including the information estimation, the Boolean circuit optimization took about half an hour, and the glass experiments took several hours.

§.§ Boolean circuit

Each input may take only one of two values (0 or 1), allowing the encoders to be extremely simple. Trainable scalars (μ⃗_i, log σ⃗_i^2) were used to encode p(u_i|x_i) = 𝒩((2x_i - 1) × μ⃗_i, σ⃗_i^2). The decoder was a multilayer perceptron (MLP) consisting of three fully connected layers with 256 units (α = 0.3) each. We increased the value of β logarithmically from 5 × 10^-4 to 5 in 5 × 10^4 steps, with a batch size of 512 input-output pairs sampled randomly from the entire 1024-element truth table. The Adam optimizer was used with a learning rate of 10^-3.

§.§ Amorphous plasticity

The simulated glass data come from Ref. <cit.>: 10,000 particles in a two-dimensional cell with Lees-Edwards boundary conditions interact via a Lennard-Jones potential, slightly modified to be twice differentiable <cit.>. Simple shear was applied with energy minimization after each step of applied strain. The critical mode was identified as the eigenvector, existing in the 2N-dimensional configuration space of all the particles' positions, of the Hessian whose eigenvalue crossed zero at the onset of global shear stress decrease. The particle that was identified as the locus of the rearrangement event had the largest contribution to the critical mode <cit.>. We used data from the gradual quench ("GQ") and rapid quench (high temperature liquid, "HTL") protocols. Following Ref. <cit.>, we considered only neighborhoods with type A particles (the smaller particles) at the center. We used all of the events in the dataset: 7,255 for the gradually quenched and 10,178 for the rapidly quenched glasses. For each rearrangement event with a type A particle as the locus, we selected at random another region from the same system state with a type A particle at the center to serve as a negative example. 90% of all rearrangement events with type A particles as the locus were used for the training set and the remaining 10% were used as the validation set; the regions and specific training and validation splits used in this work can be found on the project webpage.
§.§.§ Radial density measurement scheme

For the radial density measurements (Figs. <ref>, <ref>), the local neighborhood of each sample was processed into 50 radial density structure functions for each particle type, with radii evenly spaced over the interval r = [0.5, 4]. Specifically, for particle i at the center and the set of neighboring particles 𝒮_A of type A, G_A(i; r, δ) = ∑_{j∈𝒮_A} exp(-(R_ij - r)^2 / (2δ^2)), where R_ij is the distance between particles i and j. The same expression was used to compute G_B, the structure functions for the type B particles in the local neighborhood. The width parameter δ was equal to 50% of each radius interval. After computing the 100 values summarizing each local neighborhood, the training and validation sets were normalized with the mean and standard deviation of each structure function across the training set. The best validation result from a logarithmic scan over values of the C parameter was used as the SVM accuracy in Fig. <ref>. For the distributed IB, each of the 100 scalar structure-function values was input to its own MLP consisting of 2 layers of 128 units with activation. The embedding dimension of each U_i was 32. Then the 100 embeddings were concatenated for input to the predictive model, which was an MLP consisting of 3 layers of 256 units with activation. The output was a single logit to classify whether the particle at the center is the locus of imminent rearrangement. We increased β in equally spaced logarithmic steps from 10^-6 to 1 over 250 epochs (an epoch is one pass through the training data). The batch size was 256. The Adam optimizer was used with a learning rate of 10^-4.

§.§.§ Per-particle measurement scheme

For the per-particle measurements, the nearest 50 particles to the center of each region were compressed by the same encoder, an MLP with two layers of 128 units with activation (α = 0.1), to a 32-dimensional latent space. The only information available to the encoder was the particle's position and type, though the values were preprocessed before input to the encoder to help with optimization: for each particle position r⃗ = (x, y), we concatenated x^2, y^2, r = |r⃗|, log r, log x^2, log y^2, and r⃗/r. All were positionally encoded (i.e., before being passed to the MLP, inputs were mapped to x ← (x, sin ω_1 x, sin ω_2 x, ...)) with frequencies ω_k = 2^k, with k ∈ {1, 2, 3, 4, 5} <cit.>. After compression, the 50 representations (one for each particle) were input to a set transformer <cit.>, a permutation-invariant architecture that is free to learn how to relate different particles via self-attention. We used 6 multi-head attention (MHA) blocks with 12 heads each, and a key dimension of 128. Following Ref. <cit.>, each MHA block adds the output of multi-head attention to a skip connection of the block's input, and applies layer normalization to the sum. This intermediate output is passed through an MLP (a single layer with 128 units, in our case) and added to itself (another skip connection) before a second round of layer normalization. After the MHA blocks, the 50 particle representations were mean-pooled and passed through a final fully connected layer of 256 units with activation (α = 0.1) before outputting a logit for prediction. Training proceeded for 25,000 training steps, and the learning rate was ramped linearly from zero to 10^-4 over the first 10% of training. Over the duration of training, β increased logarithmically from 3 × 10^-8 to 3 × 10^-3. The batch size was 64.
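For readers who wish to reproduce the radial density measurement scheme defined at the start of this appendix section, the following numpy sketch computes G_A and G_B for a single neighborhood. It is our own illustrative implementation (array layouts and names are assumptions, and a single scalar δ corresponding to half the spacing between consecutive radii is assumed), not code from the released repository.

import numpy as np

def radial_structure_functions(center, positions, types, r_values, delta):
    """Radial density structure functions for one neighborhood.

    center:    (2,) position of the central particle
    positions: (n, 2) positions of neighboring particles
    types:     (n,) array of 'A'/'B' labels
    r_values:  (m,) band radii, e.g. 50 values evenly spaced over [0.5, 4]
    delta:     width parameter of each band
    """
    dist = np.linalg.norm(positions - center, axis=1)                 # R_ij
    weights = np.exp(-(dist[:, None] - r_values[None, :])**2 / (2 * delta**2))
    g_a = weights[types == "A"].sum(axis=0)                           # G_A(i; r, delta)
    g_b = weights[types == "B"].sum(axis=0)                           # G_B(i; r, delta)
    return np.concatenate([g_a, g_b])                                 # 100 values when m = 50

Normalizing each of the resulting structure functions by its mean and standard deviation over the training set, as described above, completes the preprocessing for both the SVM and the distributed IB experiments.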
§ APPENDIX D: CITATION DIVERSITY STATEMENT

Science is a human endeavour and consequently vulnerable to many forms of bias; the responsible scientist identifies and mitigates such bias wherever possible. Meta-analyses of research in multiple fields have measured significant bias in how research works are cited, to the detriment of scholars in minority groups <cit.>. We use this space to amplify studies, perspectives, and tools that we found influential during the execution of this research <cit.>.

§ REFERENCES

M. Mitchell, Complex systems: Network thinking, Artificial Intelligence 170, 1194 (2006).
J. Kwapień and S. Drożdż, Physical approach to complex systems, Physics Reports 515, 115 (2012).
M. Mitchell, Complexity: A Guided Tour (Oxford University Press, 2009).
M. E. Newman, Complex systems: A survey, arXiv preprint arXiv:1112.1440 (2011).
H. Matsuda, Physical nature of higher-order mutual information: Intrinsic correlations and frustration, Physical Review E 62, 3096 (2000).
M. Koch-Janusz and Z. Ringel, Mutual information, neural networks and the renormalization group, Nature Physics 14, 578 (2018).
P. Grassberger, Toward a quantitative theory of self-generated complexity, International Journal of Theoretical Physics 25, 907 (1986).
G. Tononi, O. Sporns, and G. M. Edelman, A measure for brain complexity: relating functional segregation and integration in the nervous system, Proceedings of the National Academy of Sciences 91, 5033 (1994).
M. Gell-Mann and S. Lloyd, Information measures, effective complexity, and total information, Complexity 2, 44 (1996).
A. Golan and J. Harte, Information theory: A foundation for complexity science, Proceedings of the National Academy of Sciences 119, e2119089119 (2022).
Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature 521, 436 (2015).
C. Rudin, Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, Nature Machine Intelligence 1, 206 (2019).
C. Rudin, C. Chen, Z. Chen, H. Huang, L. Semenova, and C. Zhong, Interpretable machine learning: Fundamental principles and 10 grand challenges, Statistics Surveys 16, 1 (2022).
C. Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable (2022).
K. A. Murphy and D. S. Bassett, Interpretability with full complexity by constraining feature information, in The Eleventh International Conference on Learning Representations (2023).
I. Estella Aguerri and A. Zaidi, Distributed information bottleneck method for discrete and Gaussian sources, in International Zurich Seminar on Information and Communication (IZS 2018) Proceedings (ETH Zurich, 2018), pp. 35-39.
I. E. Aguerri and A. Zaidi, Distributed variational representation learning, IEEE Transactions on Pattern Analysis and Machine Intelligence 43, 120 (2021).
N. Tishby, F. C. Pereira, and W. Bialek, The information bottleneck method, arXiv preprint physics/0004057 (2000).
J. E. Savage, Models of Computation, Vol. 136 (Addison-Wesley, Reading, MA, 1998).
M. Chaves, E. D. Sontag, and R. Albert, Methods of robustness analysis for Boolean models of gene control networks, IEE Proceedings-Systems Biology 153, 154 (2006).
V. A. Huynh-Thu and G. Sanguinetti, Gene regulatory network inference: an introductory survey, in Gene Regulatory Networks (Springer, 2019), pp. 1-23.
E. D. Cubuk et al., Structure-property relationships from universal signatures of plasticity in disordered solids, Science 358, 1033 (2017).
K. A. Murphy, K. A. Dahmen, and H. M. Jaeger, Transforming mesoscale granular plasticity through particle shape, Physical Review X 9, 011014 (2019).
H. M. Jaeger and S. R. Nagel, Physics of the granular state, Science 255, 1523 (1992).
A. J. Liu and S. R. Nagel, Jamming is not just cool any more, Nature 396, 21 (1998).
D. Bi, J. Zhang, B. Chakraborty, and R. P. Behringer, Jamming by shear, Nature 480, 355 (2011).
L. Berthier, G. Biroli, P. Charbonneau, E. I. Corwin, S. Franz, and F. Zamponi, Gardner physics in amorphous solids and beyond, The Journal of Chemical Physics 151, 010901 (2019).
T. M. Cover and J. A. Thomas, Elements of Information Theory (John Wiley & Sons, 1999).
C. E. Shannon, A mathematical theory of communication, The Bell System Technical Journal 27, 379 (1948).
S. Steiner and V. Kuehn, Distributed compression using the information bottleneck principle, in ICC 2021 - IEEE International Conference on Communications (2021), pp. 1-6.
A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy, Deep variational information bottleneck, arXiv preprint arXiv:1612.00410 (2016).
D. P. Kingma and M. Welling, Auto-encoding variational Bayes, in International Conference on Learning Representations (ICLR) (2014).
B. Poole, S. Ozair, A. Van Den Oord, A. Alemi, and G. Tucker, On variational bounds of mutual information, in International Conference on Machine Learning (PMLR, 2019), pp. 5171-5180.
D. McAllester and K. Stratos, Formal limitations on the measurement of mutual information, in International Conference on Artificial Intelligence and Statistics (PMLR, 2020), pp. 875-884.
A. S. Argon and H. Y. Kuo, Plastic flow in a disordered bubble raft (an analog of a metallic glass), Materials Science and Engineering 39, 101 (1979).
M. L. Falk and J. S. Langer, Dynamics of viscoplastic deformation in amorphous solids, Physical Review E 57, 7192 (1998).
M. L. Manning and A. J. Liu, Vibrational modes identify soft spots in a sheared disordered packing, Physical Review Letters 107, 108302 (2011).
D. Richard et al., Predicting plasticity in disordered solids from structural indicators, Physical Review Materials 4, 113609 (2020).
A. J. Dunleavy, K. Wiesner, and C. P. Royall, Using mutual information to measure order in model glass formers, Physical Review E 86, 041505 (2012).
R. L. Jack, A. J. Dunleavy, and C. P. Royall, Information-theoretic measurements of coupling between structure and dynamics in glass formers, Physical Review Letters 113, 095703 (2014).
A. J. Dunleavy, K. Wiesner, R. Yamamoto, and C. P. Royall, Mutual information reveals multiple structural relaxation mechanisms in a model glass former, Nature Communications 6, 6089 (2015).
J. Behler and M. Parrinello, Generalized neural-network representation of high-dimensional potential-energy surfaces, Physical Review Letters 98, 146401 (2007).
E. D. Cubuk, S. S. Schoenholz, J. M. Rieser, B. D. Malone, J. Rottler, D. J. Durian, E. Kaxiras, and A. J. Liu, Identifying structural flow defects in disordered solids using machine-learning methods, Physical Review Letters 114, 108001 (2015).
S. S. Schoenholz, E. D. Cubuk, D. M. Sussman, E. Kaxiras, and A. J. Liu, A structural approach to relaxation in glassy liquids, Nature Physics 12, 469 (2016).
T. A. Sharp, S. L. Thomas, E. D. Cubuk, S. S. Schoenholz, D. J. Srolovitz, and A. J. Liu, Machine learning determination of atomic dynamics at grain boundaries, Proceedings of the National Academy of Sciences 115, 10943 (2018).
D. M. Sussman, S. S. Schoenholz, E. D. Cubuk, and A. J. Liu, Disconnecting structure and dynamics in glassy thin films, Proceedings of the National Academy of Sciences 114, 10601 (2017).
G. Zhang, S. A. Ridout, and A. J. Liu, Interplay of rearrangements, strain, and local structure during avalanche propagation, Physical Review X 11, 041019 (2021).
K. A. Murphy and D. S. Bassett, The distributed information bottleneck reveals the explanatory structure of complex systems, arXiv preprint arXiv:2204.07576 (2022).
J. Lee, Y. Lee, J. Kim, A. Kosiorek, S. Choi, and Y. W. Teh, Set transformer: A framework for attention-based permutation-invariant neural networks, in International Conference on Machine Learning (PMLR, 2019), pp. 3744-3753.
A. M. Saxe, Y. Bansal, J. Dapello, M. Advani, A. Kolchinsky, B. D. Tracey, and D. D. Cox, On the information bottleneck theory of deep learning, Journal of Statistical Mechanics: Theory and Experiment 2019, 124020 (2019).
A. Zaidi, I. Estella-Aguerri, and S. Shamai (Shitz), On the information bottleneck problems: Models, connections, applications and information theoretic views, Entropy 22, 151 (2020).
Z. Goldfeld and Y. Polyanskiy, The information bottleneck problem and its applications in machine learning, IEEE Journal on Selected Areas in Information Theory 1, 19 (2020).
A. Gordon, A. Banerjee, M. Koch-Janusz, and Z. Ringel, Relevance in the renormalization group and in information theory, Physical Review Letters 126, 240601 (2021).
A. G. Kline and S. Palmer, Gaussian information bottleneck and the non-perturbative renormalization group, New Journal of Physics (2021).
Y. Wang, J. M. L. Ribeiro, and P. Tiwary, Past-future information bottleneck for sampling molecular reaction coordinate simultaneously with thermodynamics and kinetics, Nature Communications 10, 1 (2019).
A. Kolchinsky, B. D. Tracey, and S. Van Kuyk, Caveats for information bottleneck in deterministic scenarios, in International Conference on Learning Representations (ICLR) (2019).
A. Kolchinsky, B. D. Tracey, and D. H. Wolpert, Nonlinear information bottleneck, Entropy 21, 1181 (2019).
P. L. Williams and R. D. Beer, Nonnegative decomposition of multivariate information, arXiv preprint arXiv:1004.2515 (2010).
A. J. Gutknecht, M. Wibral, and A. Makkeh, Bits and pieces: Understanding information decomposition from part-whole relationships and formal logic, Proceedings of the Royal Society A 477, 20210110 (2021).
A. Kolchinsky, A novel approach to the partial information decomposition, Entropy 24, 403 (2022).
T. F. Varley and E. Hoel, Emergence as the conversion of information: A unifying theory, Philosophical Transactions of the Royal Society A 380, 20210150 (2022).
D. A. Ehrlich, A. C. Schneider, M. Wibral, V. Priesemann, and A. Makkeh, Partial information decomposition reveals the structure of neural representations, arXiv preprint arXiv:2209.10438 (2022).
A. Barbot, M. Lerbinger, A. Hernandez-Garcia, R. García-García, M. L. Falk, D. Vandembroucq, and S. Patinet, Local yield stress statistics in model amorphous solids, Physical Review E 97, 033001 (2018).
M. Tancik, P. Srinivasan, B. Mildenhall, S. Fridovich-Keil, N. Raghavan, U. Singhal, R. Ramamoorthi, J. Barron, and R. Ng, Fourier features let networks learn high frequency functions in low dimensional domains, Advances in Neural Information Processing Systems 33, 7537 (2020).
D. Maliniak, R. Powers, and B. F. Walter, The gender citation gap in international relations, International Organization 67, 889 (2013).
N. Caplar, S. Tacchella, and S. Birrer, Quantitative evaluation of gender bias in astronomical publications from citation counts, Nature Astronomy 1, 1 (2017).
P. Chakravartty, R. Kuo, V. Grubbs, and C. McIlwain, #CommunicationSoWhite, Journal of Communication 68, 254 (2018).
M. L. Dion, J. L. Sumner, and S. M. Mitchell, Gendered citation patterns across political science and social science methodology fields, Political Analysis 26, 312 (2018).
J. D. Dworkin, K. A. Linn, E. G. Teich, P. Zurn, R. T. Shinohara, and D. S. Bassett, The extent and drivers of gender imbalance in neuroscience reference lists, Nature Neuroscience 23, 918 (2020).
P. Zurn, D. S. Bassett, and N. C. Rust, The citation diversity statement: a practice of transparency, a way of life, Trends in Cognitive Sciences 24, 669 (2020).
J. Dworkin, P. Zurn, and D. S. Bassett, (In)citing action to realize an equitable future, Neuron 106, 890 (2020).
D. Zhou, E. J. Cornblath, J. Stiso, E. G. Teich, J. D. Dworkin, A. S. Blevins, and D. S. Bassett, Gender diversity statement and code notebook v1.0, Zenodo (2020).
Z. Budrikis, Growing citation gender gap, Nature Reviews Physics 2, 346 (2020).
http://arxiv.org/abs/2307.04941v1
20230711000328
MG3MConv: Multi-Grained Matrix-Multiplication-Mapping Convolution Algorithm toward the SW26010 Processor
[ "Zheng Wu" ]
cs.DC
[ "cs.DC" ]
Zheng Wu (corresponding author), [email protected], University of Science and Technology of China, Shushan District, Hefei, China

As the core computation of artificial intelligence applications, convolution has become a hot research topic in high performance computing. With the rapid adoption of the emerging SW26010 processor in artificial intelligence, there is an urgent need for high-performance convolution algorithms on this processor. However, the current support for convolution on SW26010 is still rudimentary. The few existing studies deliver adequate peak runtime performance but lack adaptability to various convolution scenes. To perfect convolution algorithms on SW26010, we propose a multi-grained matrix-multiplication-mapping convolution algorithm called MG3MConv, which targets the architectural features of SW26010. MG3MConv supports diversified mapping schemes of convolution tasks based on the concept of the thread block proposed in this paper. All the architecture-oriented optimization methods are carefully designed at four levels to fully exploit the hardware efficiency of SW26010. Experiments show that the hardware efficiency of MG3MConv reaches up to 84.78%, which is 1.75 times that of cuDNN on an NVIDIA K80m GPU. Moreover, MG3MConv outperforms cuDNN in most convolution scenes. We also use six representative CNNs as real-world cases; the hardware efficiency of MG3MConv reaches up to 67.04% on the VGG network model, which is 1.37 times and 1.96 times that of cuDNN and swDNN, respectively.

§ INTRODUCTION

Deep learning has greatly advanced the development of artificial intelligence. As one of the most successful neural network models in deep learning, CNNs (convolutional neural networks) are widely used in numerous fields <cit.> such as computer vision, speech recognition, natural language processing, autonomous driving, and intelligent healthcare. The execution time of CNNs has become unacceptably long as larger datasets and more complex network models emerge. Because convolution accounts for more than 90% of the total computation in CNNs <cit.>, highly efficient convolution algorithms on many-core processors have become a popular research direction in academia and industry. Nowadays, GPUs and CPUs are the most mature many-core processor platforms for CNNs. Many studies are devoted to improving the performance of convolution on GPUs <cit.> and CPUs <cit.>, which has driven the maturation of deep neural network libraries such as NVIDIA cuDNN <cit.> and Intel MKL-DNN <cit.>. Furthermore, accelerating convolution on other hardware platforms, such as Cambricon's DianNao series <cit.>, Google's TPU <cit.>, and SW26010 <cit.>, also attracts researchers. As the main contributor to the computational power of the world-class Sunway TaihuLight supercomputer, SW26010 <cit.> has several special architectural features such as a user-controllable memory hierarchy, asynchronous direct memory access (DMA), on-chip register communication, and double-pipeline instruction execution. These features provide great potential for running artificial intelligence applications based on CNNs. However, the current support for convolution on SW26010 is still rudimentary. The existing studies <cit.> deploy optimization methods by simply mapping convolution tasks onto the whole CG (core group).
They continually enhance the peak runtime performance of the algorithms but rarely consider adaptability to changing convolution scenes. The situation is worst when the batch number and channel number are small. Moreover, due to some limitations of SW26010 <cit.>, research on the commonly used single-precision convolution has been scarce. In this paper, we propose a multi-grained matrix-multiplication-mapping convolution algorithm called MG3MConv. Unlike the existing studies, MG3MConv employs diversified mapping schemes of convolution tasks instead of a single fixed mapping onto the whole CG, which copes more effectively with different convolution scenes. This paper mainly aims at optimizing and implementing single-precision convolution on SW26010 to make up for the lack of relevant work. Drawing on existing parallel optimization methods for SW26010 <cit.>, we develop more comprehensive and fine-grained designs for MG3MConv. The main contributions of our work can be summarized as follows: * We propose MG3MConv, which employs a multi-grained mapping scheme of convolution tasks to deal with various convolution scenes. * We introduce a new concept, the TB (thread block), between the CG and the thread, realized by software simulation, and manually divide one CG into multiple TBs to support the multi-grained mapping scheme of MG3MConv. Moreover, this paper integrates many architecture-oriented optimization techniques at four levels (CG-level, TB-level, thread-level, and instruction-level), such as double buffering, on-chip data sharing, and instruction reordering. * This paper conducts experiments from two perspectives: (1) adaptability; (2) practicality. The hardware efficiency of MG3MConv reaches up to 84.78%, which is 1.75 times that of cuDNN on an NVIDIA K80m GPU. Moreover, in most convolution scenes, MG3MConv performs better than cuDNN. For representative CNNs, MG3MConv achieves a hardware efficiency of 67.04% on VGG, which is 1.37 times and 1.96 times that of cuDNN and swDNN, respectively. Finally, we conduct an additional experiment to demonstrate the superiority of the multi-grained mapping scheme of MG3MConv. The rest of this paper is organized as follows. Section 2 presents the background of CNNs and the SW26010 architecture. Section 3 discusses the related work. Section 4 presents the details of implementing MG3MConv. Section 5 evaluates the proposed convolution algorithm. Section 6 concludes the paper. § BACKGROUND §.§ Convolutional Neural Networks Because of benefits such as weight sharing, sparse interaction, and equivariant representation, CNNs stand out from many deep neural network models and promote the rapid development of computer vision, speech recognition, natural language understanding, and other fields <cit.>. Convolutional layers are central to CNNs, and their huge computation cost creates strong demand for highly optimized convolution algorithms. <Ref> defines the convolutional parameters symbolically to facilitate the subsequent description. The input, filter, and output are denoted as 𝐈𝐍, 𝐅𝐋𝐓, and 𝐎𝐔𝐓, respectively. The training process of CNNs iterates over batch after batch of sample data, continuously improving model quality; we denote the batch number as B. Moreover, 𝐈𝐍 has IC channels, each of which can be viewed as an input feature map of size inH × inW. Similarly, 𝐎𝐔𝐓 consists of OC channels, each corresponding to an output feature map of size outH × outW.
𝐅𝐋𝐓 has OC × IC filters, and the size of each filter is fltH × fltW. In addition, we denote the height and width of padding by padH and padW, respectively. Similarly, the height and width of stride are denoted as stdH and stdW. For the elementary convolutional computation, IC filters convolution IC input feature maps one to one, and then an output feature map can be obtained by accumulating the IC partial results. Therefore, one convolution requires OC × IC filters. The overall process of convolution can be simplified to a tensor multiplication and accumulation routine about 𝐈𝐍, 𝐅𝐋𝐓, and 𝐎𝐔𝐓 as <Ref>. As shown in <Ref>, the most simple convolution algorithm <cit.>, called direct convolution, is based on seven nested loops. §.§ SW26010 Architecture SW26010 <cit.> is a heterogeneous many-core processor independently developed by the Shanghai National High Performance Integrated Circuit Design Center. <Ref> shows its detailed architecture. The processor adopts Shenwei-64 Instruction Set, which integrates 260 cores operating at 1.45GHz. SW26010 is able to provide the theoretical peak performance of 3.06TFlops. All the cores are uniformly distributed across four equivalent CGs. Each CG consists of one MPE (management processing element) and 64 CPEs (computing processing elements). The 64 CPEs are organized as an 8x8 grid called the CPE cluster. The four CGs are interconnected via a NoC (network on chip) and support 32GB DDR3 memory. Each CG is directly connected to 8GB memory via a private MC (memory controller). The MPE handles management and communication functions, while the CPE is mainly used to process computational tasks. An MPE has two levels of private cache, including a 32KB L1 instruction cache, a 32KB L1 data cache, and a 256KB L2 cache. Similarly, a CPE has a 16KB L1 instruction cache and a 64KB SPM (Scratchpad Memory) called LDM (Local Device Memory). The LDM can be regarded as a user-controllable fast buffer, and different LDM usage strategies will lead to different DMA efficiency. The 64 CPEs of one CG share a direct-mapped L2 instruction cache of 64KB. SW26010 has many unique features in computation and data access. From the perspective of computation, both the MPE and the CPE support 4-channel floating-point vector computations and fused-multiply-add instructions. However, the MPE has two floating-point units and an instruction pipeline, while the CPE has one floating-point unit and two pipelines, P0 and P1. The P0 is used for scalar/vector computational operations of both floating-point and integer, while the P1 is for data transfer, comparison, jump, and integer scalar operations. From the perspective of data access, two key technologies are adopted to relieve the pressure of off-chip data access on SW26010. One is two kinds of data access from the main memory to the LDM, gld/gst discrete memory access and DMA batched memory access. The former can directly read and write the main memory, while the latter employs the LDM as a bridge to indirectly access the main memory. Stream Triad Test <cit.> shows that both bandwidths can reach up to 1.48GB/s and 22.6GB/s, respectively. The other is register communication, which enables data sharing among the 64 CPEs of one CG. Each CPE is equipped with a sending buffer, a row receiving buffer, and a column receiving buffer, which can contain 6, 4, and 4 register messages, respectively. 
There are three attention points about the register communication mechanism: (1) the data size is fixed at 256-bits each communication; (2) each CPE only communicates with CPEs of the same row or the same column; (3) the communication is anonymous, and target CPEs receive messages based on the FCFS (first-come-first-serve) principle. § RELATED WORK There are four mainstream convolution algorithms, called direct, GEMM-based, FFT-based, and Winograd-based convolutions. As described in Section 2.1, direct convolution is easy to implement but is difficult to optimize because of its poor data locality. Due to the successful matrix multiplication libraries on many hardware platforms, GEMM-based convolution has become a popular method to accelerate the convolutional process, divided into explicit <cit.> and implicit ones <cit.>. Explicit GEMM-based convolution needs to extract the input, and then fill input matrices with size ( IC× fltH× fltW ) ×( outH× outW ) according to filter matrices with size OC×( IC× fltH× fltW ). The algorithm maximizes the performance of matrix multiplications in convolution, but at the cost of abundantly extra memory and data access. Implicit GEMM-based convolution converts direct convolution into multiple small matrix multiplications by exploiting the potential matrix multiplication relation based on B, IC, and OC. Small matrices can be loaded directly into on-chip storage to avoid unnecessary off-chip memory occupation. Moreover, a suitable data format <cit.> can even put the cost of extra data access zero. Unlike GEMM-based convolution, both Winograd-based and FFT-based convolution reduce the computation complexity of convolution. FFT-based convolution <cit.> converts the input and filter into the frequency domain space, completes those matrix multiplications <cit.>, and converts the result back into the time domain space to get the final convolutional result. The algorithm can reduce the computation complexity of convolution from O( outH^2× fltH^2 ) to O( outH^2×log outH ) <cit.>. However, the process requires expanding the filter size to the size of input feature maps, which is highly unfriendly for CNNs with small-filter convolution. Winograd-based convolution <cit.> can reduce the computation complexity to O( ( outH+fltH-1 ) ^2 ). The disadvantage is too inflexible. Its data transformation process changes with the filter size and strictly restricts the stride size. In addition, FFT-based and Winograd-based convolution will consume amounts of memory to store intermediate data. There are many excellent studies on optimizing convolution algorithms. Li et al. <cit.> optimized direct convolution by register partitioning, and the performance in large-filter cases was improved by 33% compared with cuDNN. Park et al. <cit.> proposed ZeroSkip and AddOpt to optimize convolution. The experiments show that the enhanced Winograd-based convolution using ZeroSkip has a performance improvement of 51.8% compared with the basic Winograd-based one. Vasudevan et al. <cit.> presented a GEMM-based convolution without im2col operations, eliminating the input replication. In most selected layers of GoogLeNet, VGG-16 and AlexNet, the result is evaluated on Intel® Core™ i5-4570 and is better than MKL-DNN. Wang et al. <cit.> proposed a novel implicit im2bcol+IMM convolution to fuse im2col into matrix multiplication, which dedicated the effort to alleviate extra memory consumption and data access consumption. Li et al. 
<cit.> proposed a coordination tiling and batching framework for efficient batchedGEMM on GPUs. The framework is mainly composed of a tiling engine and a batching engine. Using GoogleNet as a real-world scene, the test achieved x1.24 speedup. Kasagi et al. <cit.> substituted a single layer for a pair formed by a convolutional layer and the following average-pooling layer. The forward performance of ResNet-34 has x17.1 speedup on Intel Core i7-6700k, while the backward x9.17. Kateoka et al. <cit.> presented the convolution-pooling computation technique using the direct sum computation instead of the SATs of Kasagi et al. <cit.>, considering the small pooling size is used in CNNs. Except for NVIDIA GPUs and Intel CPUs, the emerging many-core SW26010 processor has also attracted researchers, but a few studies have been done for convolution on SW26010. Among the existing studies <cit.>, Fang et al. <cit.> rescheduled and mapped the seven nested loops of direct convolution to four CGs. The performance of double-precision convolution is up to 54% of the theoretical peak performance. Zhao et al. <cit.> introduced the support of single-precision convolution based on the study of Fang et al. <cit.>, but the performance is far lower than of double-precision convolution. Reordering the kernel instruction queue and reducing the data access cost of DMA, Zhang et al. <cit.> further optimized the double-precision convolution implementation on SW26010 and achieved 81% of the theoretical peak performance on the best case. However, the current support for convolution on SW26010 is still rudimentary. These efforts excessively focus on maximizing the peak performance of double-precision convolution while ignoring commonly used single-precision one and changeable convolution scenes in CNNs, which is contrary to real-world applications. This paper will solve the shortages of performance and adaptability for single-precision convolution to satisfy applications using CNNs on SW26010. § IMPLEMENTATION AND OPTIMIZATION OF CONVOLUTION Given the following two points: (1) SW26010 has limited main memory capacity and high-overhead memory access; (2) the support of convolution on SW26010 is still rudimentary, we choose the implicit GEMM-based convolution as the basis of our work. The values of B, IC, and OC are often small in CNNs, so convolution implementations that directly call matrix multiplication interfaces are inefficient according to the research <cit.>. Therefore, we design a novel parallel convolution algorithm called MG3MConv. Unlike the traditional optimization methods on SW26010, this paper puts forward the concept of the thread block, called TB, between the CG and the thread. We realize TB by software simulation to assist the implementation of MG3MConv. Therefore, the guiding ideology of this paper is divided into four levels: CG-level, TB-level, thread-level, and instruction-level optimization. §.§ CG-level Optimization CG-level optimization aims to efficiently organize and map convolution tasks in MG3MConv. §.§.§ Matrix-multiplication convolution A three-layer nested cycle of B, IC, and OC remains after hiding fltH, fltW, outH, and outW in direct convolution. Further, we place B, IC, and OC in low dimensions to improve the data locality. Therefore, this paper designs the data layout of 𝐈𝐍 as [inH,inW,IC,B], 𝐅𝐋𝐓 as [fltH,fltW,IC,OC], and 𝐎𝐔𝐓 as [outH,outW,OC,B]. The default data type is single precision, commonly applied to real-world CNNs <cit.>. 
With the fltH, fltW, outH, and outW loops fixed, the convolutional process reduces to <Ref>: 𝐎𝐔𝐓[oc,b] += ∑_ic=0^IC-1 𝐅𝐋𝐓[ic,oc] × 𝐈𝐍[ic,b] <Ref> can be regarded as a matrix multiplication with transposition. We mark it as MM_unit to distinguish it from the matrix multiplication in BLAS. Thus, the following <Ref> can be obtained. <Ref> views 𝐈𝐍 as an array of size inH × inW, where each element, marked 𝐈𝐍_mtx, has size IC × B. Similarly, 𝐅𝐋𝐓 is an array of size fltH × fltW, where each element has size IC × OC, and 𝐎𝐔𝐓 is an array of size outH × outW, where each element has size OC × B. The elements of 𝐅𝐋𝐓 and 𝐎𝐔𝐓 are marked as 𝐅𝐋𝐓_mtx and 𝐎𝐔𝐓_mtx, respectively. Therefore, by using matrix multiplications as convolution task units, we redesign the seven-layer loop of direct convolution into a four-layer loop. The redesigned algorithm has more complex computational processes and data relationships than plain matrix multiplication. By exploring these computational processes and data relationships, we implement the highly optimized convolution algorithm MG3MConv. §.§.§ Multi-grained mapping For the general GEMM-based convolution algorithm, M of the matrix multiplication refers to the number of output channels, N to the product of the output feature map size and the batch number, and K to the product of the filter size and the number of input channels. Overall, M, N, and K are less than 1000, and in half of the cases M is even less than 100 <cit.>. In <Ref>, the parameters of the matrix multiplications become smaller still, where N is only the batch number and K is only the number of input channels. Taking the convolution in inception3a/5x5 of GoogleNet as an example <cit.>, after transforming it to GEMM, M, N, and K are 32, 128, and 16, respectively. For this case, the FP32 performance of the matrix multiplication is 0.408GFlops on SW26010, which achieves only 0.055% of the theoretical peak performance. The plain matrix-multiplication mapping onto the whole CG struggles to perform well when the matrix multiplication scale is small. Therefore, we present the concept of the TB. By zoning the CPE cluster in software, we can partition one CG into multiple TBs. Each TB works independently while the CPEs within one TB work cooperatively, thereby improving the utilization of hardware resources. Eventually, we designed an original convolution algorithm, MG3MConv, toward the SW26010 processor. When the matrix multiplication scale is small, forcing the 8x8 mapping results in each CPE receiving tasks that are too small, and hence poor performance <cit.>. Accordingly, a simple convolution algorithm based on that scheme is also inefficient. <Ref> solves the problem well. MG3MConv distinguishes three different scales of MM_unit, shown in <Ref>, namely small-scale, medium-scale, and large-scale MM_unit, corresponding to TBs of different granularity: TB(1,1), TB(1,8), and TB(8,8), respectively. For TB(1,1), a single MM_unit is mapped to one CPE, and the CPE cluster can perform 64 independent tasks simultaneously. Similarly, TB(1,8) maps a single MM_unit to a row of CPEs, performing 8 independent tasks in parallel. TB(8,8) maps a single MM_unit to the whole CG, similar to matrix multiplication algorithms on SW26010. However, the design ideas of matrix multiplication exploit the convolutional potential of SW26010 insufficiently, so we introduce further optimization methods in this paper; a sketch of the four-layer MM_unit loop that all three granularities share is given below.
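To make that four-layer loop concrete, the following NumPy sketch walks over every output position and filter position and applies one MM_unit, i.e., it accumulates 𝐅𝐋𝐓_mtx^T × 𝐈𝐍_mtx into 𝐎𝐔𝐓_mtx. It is a minimal reference formulation under the data layouts defined above, not the SW26010 implementation; the function name and the assumption that the input is already padded are ours.

```python
import numpy as np

def mg3m_reference(IN, FLT, OUT, stdH=1, stdW=1):
    """Four-layer MM_unit loop of the implicit GEMM formulation.

    Layouts follow the paper: IN[inH, inW, IC, B], FLT[fltH, fltW, IC, OC],
    OUT[outH, outW, OC, B]. IN is assumed to be already padded.
    """
    outH, outW, OC, B = OUT.shape
    fltH, fltW, IC, _ = FLT.shape
    for oh in range(outH):
        for ow in range(outW):
            for fh in range(fltH):
                for fw in range(fltW):
                    in_mtx = IN[oh * stdH + fh, ow * stdW + fw]   # IC x B tile
                    flt_mtx = FLT[fh, fw]                          # IC x OC tile
                    # MM_unit: OUT_mtx += FLT_mtx^T @ IN_mtx  (OC x B result)
                    OUT[oh, ow] += flt_mtx.T @ in_mtx
    return OUT

# Toy sizes only, to check shapes.
B, IC, OC = 4, 8, 16
inH = inW = 6
fltH = fltW = 3
outH, outW = inH - fltH + 1, inW - fltW + 1
IN = np.random.rand(inH, inW, IC, B).astype(np.float32)
FLT = np.random.rand(fltH, fltW, IC, OC).astype(np.float32)
OUT = np.zeros((outH, outW, OC, B), dtype=np.float32)
mg3m_reference(IN, FLT, OUT)
```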
We can convert one task of TB(1,8) into multiple tasks of TB(1,1) by tiling MM_unit; similarly, one task of TB(8,8) can be converted into multiple tasks of TB(1,8). The specific tiling method can be found in <cit.>, and we do not repeat it here. Besides the above three TB division schemes, there are others such as TB(1,2), TB(2,2), and TB(2,4). However, this paper aims to propose and verify the feasibility of the above idea, so we only discuss and implement MG3MConv based on TB(1,1), TB(1,8), and TB(8,8). §.§ TB-level Optimization TB-level optimization focuses on task collaboration among the multiple CPEs within a single TB. §.§.§ Multi-mode on-chip data sharing SW26010 provides a low-latency on-chip register communication mechanism. For TBs with more than one CPE, designing algorithms that increase on-chip data reuse within every TB can significantly reduce the pressure on memory access. Therefore, we propose two modes of on-chip data sharing for TB(1,8) and TB(8,8): single-broadcast on-chip data sharing and double-broadcast on-chip data sharing. The dimensions of the 𝐅𝐋𝐓_mtx, 𝐈𝐍_mtx, and 𝐎𝐔𝐓_mtx matrices in MG3MConv are IC × OC, IC × B, and OC × B, respectively. For TB(1,8), shown in <Ref>, one MM_unit is mapped to a row of CPEs, and 𝐎𝐔𝐓_mtx is divided into 8 equal parts along the OC dimension. Each CPE is responsible for one OC/8 × B submatrix of 𝐎𝐔𝐓_mtx, which requires one IC × OC/8 submatrix of 𝐅𝐋𝐓_mtx and the full IC × B 𝐈𝐍_mtx. In this case, 𝐈𝐍_mtx is loaded 8 times, resulting in a high memory access cost. We propose single-broadcast on-chip data sharing to solve this problem: 𝐈𝐍_mtx is further divided into 8 equal parts along the IC dimension, and the submatrices of 𝐈𝐍_mtx are row-broadcast in turn to finish the following computation: 𝐎𝐔𝐓_mtx[i] = ∑_k=0^7 𝐅𝐋𝐓_mtx[i,k] × 𝐈𝐍_mtx[k] As illustrated in <Ref>, CPE[i] represents the i-th CPE in one row, corresponding to the submatrices 𝐅𝐋𝐓_mtx[i], 𝐈𝐍_mtx[i], and 𝐎𝐔𝐓_mtx[i] (i ∈ {0,1,...,7}). Furthermore, 𝐅𝐋𝐓_mtx[i] is divided into 8 equal parts labeled 𝐅𝐋𝐓_mtx[i,0] ∼ 𝐅𝐋𝐓_mtx[i,7]. First, CPE[0] broadcasts 𝐈𝐍_mtx[0] to the other CPEs in the same row, which receive the row-broadcast data via register communication. The 8 CPEs in the row then perform 𝐎𝐔𝐓_mtx[i] += 𝐅𝐋𝐓_mtx[i,0] × 𝐈𝐍_mtx[0] separately. Similarly, we can finish the remaining operations from CPE[1] to CPE[7]. For TB(8,8), the CPE cluster processes one MM_unit at a time. Intuitively, we partition 𝐎𝐔𝐓_mtx by an 8×8 mesh. Each CPE is responsible for one OC/8 × B/8 submatrix of 𝐎𝐔𝐓_mtx, which requires one IC × OC/8 submatrix of 𝐅𝐋𝐓_mtx and one IC × B/8 submatrix of 𝐈𝐍_mtx. Both 𝐅𝐋𝐓_mtx and 𝐈𝐍_mtx are thus loaded 8 times each. We propose double-broadcast on-chip data sharing to eliminate this repeated memory access: we further partition 𝐅𝐋𝐓_mtx and 𝐈𝐍_mtx by an 8×8 mesh, then perform the row broadcast of the submatrices of 𝐅𝐋𝐓_mtx and the column broadcast of the submatrices of 𝐈𝐍_mtx. The specific computational process is as follows: 𝐎𝐔𝐓_mtx[i,j] = ∑_k=0^7 𝐅𝐋𝐓_mtx[k,i] × 𝐈𝐍_mtx[k,j] Similarly, CPE[i,j] represents the CPE in the i-th row and j-th column (i,j ∈ {0,1,...,7}), shown in <Ref>, corresponding to 𝐅𝐋𝐓_mtx[j,i], 𝐈𝐍_mtx[i,j], and 𝐎𝐔𝐓_mtx[i,j]. 𝐅𝐋𝐓_mtx[j,i] is mapped to CPE[i,j] with the row and column indexes inverted; the purpose is to avoid idling the row receiving buffer on the CPE and to promote the efficiency of register communication. First, CPE[i,0] broadcasts 𝐅𝐋𝐓_mtx[0,i] to the other CPEs in the same row, and CPE[0,j] broadcasts 𝐈𝐍_mtx[0,j] to the other CPEs in the same column.
At this moment, all the CPEs perform 𝐎𝐔𝐓_mtx[i,j] += 𝐅𝐋𝐓_mtx[0,i] × 𝐈𝐍_mtx[0,j]. Similarly, we can finish the remaining operations from CPE[i,1], CPE[1,j] up to CPE[i,7], CPE[7,j]. §.§ Thread-level Optimization Thread-level optimization concentrates on optimizing data access within a single CPE. Following <cit.>, MG3MConv performs all DMA operations on single-precision data while the assembly kernel computes on double-precision data. Therefore, the additional LDM occupation caused by data type conversion becomes a non-negligible problem. §.§.§ Enhanced data reuse within the CPE The filter is used repeatedly during convolution execution because of convolutional weight sharing in CNNs. In <Ref>, each 𝐅𝐋𝐓_matrix of size IC × OC is used about outH × outW times. The stride size is generally smaller than the filter size, so the input is also used repeatedly: <Ref> uses each 𝐈𝐍_matrix of size IC × B about (fltH/stdH) × (fltW/stdW) times. As shown in <Ref>, each 𝐅𝐋𝐓_matrix is loaded 36 times, while each 𝐈𝐍_matrix is only loaded 9 times. Because outH × outW is usually larger than (fltH/stdH) × (fltW/stdW) in real-world CNNs, we focus on optimizing the data access of the more frequently used 𝐅𝐋𝐓_matrix. By exploring the data reuse of 𝐅𝐋𝐓_matrix, we present <Ref> to reduce or even eliminate the repeated data access cost of 𝐅𝐋𝐓_matrix. In <Ref>, we allocate the LDM space 𝐥𝐝𝐦𝐎𝐔𝐓_S[outLen] for outLen 𝐎𝐔𝐓_matrix. Similarly, 𝐥𝐝𝐦𝐈𝐍_S and 𝐥𝐝𝐦𝐅𝐋𝐓_S are for one 𝐈𝐍_matrix and one 𝐅𝐋𝐓_matrix. Given the on-chip data type conversion, we deploy 𝐥𝐝𝐦𝐎𝐔𝐓_D[outLen], 𝐥𝐝𝐦𝐈𝐍_D, and 𝐥𝐝𝐦𝐅𝐋𝐓_D for the double-precision data used by the assembly kernel. Computing on 𝐥𝐝𝐦𝐎𝐔𝐓_D[outLen] in the innermost loop realizes outLen-fold data reuse of 𝐥𝐝𝐦𝐅𝐋𝐓_S. Correspondingly, the total amount of 𝐅𝐋𝐓_matrix traffic from the main memory to the LDM is reduced by a factor of outLen. <Ref> shows that the frequency of loading the same 𝐅𝐋𝐓_matrix drops to 12 with outLen=3. Without considering the LDM capacity, we could set outLen to the extreme value outH × outW, in which case MG3MConv eliminates repeated data access of 𝐅𝐋𝐓_matrix entirely. §.§.§ Enhanced asynchronous DMA within the CPE SW26010 supports asynchronous data access between the main memory and the LDM via DMA, making it possible to hide the cost of data access behind the assembly kernel. Therefore, we employ the double buffering method shown in <Ref> to hide the DMA data access cost in MG3MConv. We double buffer 𝐅𝐋𝐓_mtx based on 𝐥𝐝𝐦𝐅𝐋𝐓_S[ldst] and 𝐥𝐝𝐦𝐅𝐋𝐓_S[cmpt], where ldst indicates the LDM space for the data required by the next computation of the assembly kernel, and cmpt the space for the current computation. Similarly, we use 𝐥𝐝𝐦𝐈𝐍_S[ldst] and 𝐥𝐝𝐦𝐈𝐍_S[cmpt] to double buffer 𝐈𝐍_mtx. Because of the data type conversion in MG3MConv, we also set the corresponding 𝐥𝐝𝐦𝐅𝐋𝐓_D[ldst], 𝐥𝐝𝐦𝐅𝐋𝐓_D[cmpt], 𝐥𝐝𝐦𝐈𝐍_D[ldst], and 𝐥𝐝𝐦𝐈𝐍_D[cmpt] for the double-precision data of the assembly kernel in <Ref>. With the above preparations, we prefetch 𝐥𝐝𝐦𝐅𝐋𝐓_S[cmpt] and 𝐥𝐝𝐦𝐈𝐍_S[cmpt], and guarantee that loading 𝐥𝐝𝐦𝐅𝐋𝐓_S[ldst] and 𝐥𝐝𝐦𝐈𝐍_S[ldst] and computing the assembly kernel execute in parallel without data dependence. The essence of double buffering is to overlap independent computation and data access, thereby hiding the shorter of the two costs; a schematic sketch is given below.
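The following Python sketch illustrates the double-buffering pattern schematically: while the current buffer is consumed by the compute step, the next tile is loaded by a background worker that stands in for the asynchronous DMA engine. The helper names (dma_load, compute) and the FP32-to-FP64 conversion in the toy usage are illustrative assumptions, not the SW26010 API.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def double_buffered_loop(tiles, dma_load, compute):
    """Schematic double buffering: overlap the (emulated) asynchronous load of
    tile i+1 with the computation on tile i. dma_load stands in for a DMA
    transfer (plus FP32->FP64 conversion) and compute for the assembly kernel."""
    if not tiles:
        return
    with ThreadPoolExecutor(max_workers=1) as dma_engine:
        cmpt = dma_load(tiles[0])                    # prefetch into the "cmpt" buffer
        for i in range(len(tiles)):
            ldst = (dma_engine.submit(dma_load, tiles[i + 1])
                    if i + 1 < len(tiles) else None)  # async load of the next tile
            compute(cmpt)                            # overlaps with the pending load
            if ldst is not None:
                cmpt = ldst.result()                 # swap: the ldst buffer becomes cmpt

# Toy usage: "loading" converts FP32 tiles to FP64, "computing" just reduces them.
tiles = [np.random.rand(64, 4).astype(np.float32) for _ in range(8)]
double_buffered_loop(tiles,
                     dma_load=lambda t: t.astype(np.float64),
                     compute=lambda buf: buf.sum())
```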
If both costs are significantly unbalanced, immoderate double buffering will waste limited on-chip storage resources and hurt performance. To improve the effect of double buffering and save the LDM, we design four double buffering methods based on <Ref>: (1) zero-matrix double buffering; (2) one-matrix double buffering of 𝐈𝐍_mtx or 𝐅𝐋𝐓_mtx; (3) two-matrices double buffering of 𝐈𝐍_mtx and 𝐅𝐋𝐓_mtx; (4) three-matrices double buffering of 𝐈𝐍_mtx, 𝐅𝐋𝐓_mtx, and 𝐎𝐔𝐓_mtx. §.§.§ Enhanced LDM usage with the CPE In enhanced asynchronous DMA within the CPE, there are two types of LDM data: ordinary data and double-buffering data. Because of the data type conversion in MG3MConv, DMA operations depend on SPD (single-precision data), and the assembly kernel depends on DPD (double-precision data). The left of <Ref> shows the simple LDM usage, where each SPD matches a double-sized DPD in pairs. Compared with the ideal LDM usage in the right of <Ref>, we can see that the simple usage will cause additional LDM consumption in 2 times, which is unacceptable for limited LDM with only 64KB on one CPE. As shown in the middle of <Ref>, we propose a nested usage for ordinary data and a fixed usage for double-buffering data to solve the above problem. The nested usage places the LDM space of SPD in the first half of the corresponding DPD, which realizes the physical share and logical separation of LDM between the SPD and the DPD. At the time, we need to guarantee the result accuracy of the algorithm. For the SPD loaded by DMA, follow the end of the DMA loading closely and convert the SPD into the DPD in reverse order. For the SPD stored by DMA, follow the beginning of the DMA storing closely and convert the DPD into the SPD in sequential order. The fixed usage specifies the SPD as the LDM space indexed by ldst and the corresponding DPD as the LDM space indexed by cmpt. To guarantee the result accuracy of the algorithm, for the SPD loaded by DMA, the conversion from the SPD to the DPD follows the end of the DMA loading closely. At this moment, the freed SPD space prepares for the next DMA loading. Similarly, the conversion from the DPD to the SPD follows the beginning of the DMA storing closely. Then, the freed DPD space prepares for the next computation of the assembly kernel. As shown in the middle of <Ref>, the enhanced usage only requires about 66.7% LDM extra compared with the ideal usage, which significantly relieves the pressure of limited LDM. §.§ Instruction-level Optimization Instruction-level optimization mainly addresses the highly optimized implementation of the assembly kernel in MG3MConv. Although it is similar to the work in <cit.>, there still exist two differences: (1) FLT_mtx requires data transposition; (2) the values of B, IC, and OC are small. §.§.§ Register computation without data transposition We must firstly solve two problems with the high-performance implementation of the assembly kernel. How to effectively organize and map scalar computation to vector computation? How to allocate limited vector registers? Ordinary multiply-add operations, such as 𝐂[ 0:N ] +=𝐀[ 0:N ] ×𝐁[ 0:N ], can be directly converted into vector operations by segmentation based on vector length. However, the assembly kernel of MG3MConv is complicated, as shown in <Ref>. 𝐥𝐝𝐦𝐎𝐔𝐓_D[ k,n ] +=∑_c=0^C-1𝐥𝐝𝐦𝐅𝐋𝐓_D[ c,k ] ×𝐥𝐝𝐦𝐈𝐍_D[ c,n ] k∈[ 0,K ) ,n∈[ 0,N ) The direct vectorization method is not suitable for the assembly kernel. 
We could also change the layout of 𝐥𝐝𝐦𝐅𝐋𝐓_D from C × K to K × C by data transposition and then directly reuse the work in <cit.>. However, we prefer to avoid the cost of data transposition, so we design the vectorization mapping in <Ref>. Taking MG3MConv based on TB(1,1) as an example, the details are as follows: * The vldd instruction loads four elements of 𝐥𝐝𝐦𝐎𝐔𝐓_D at a time, because the vector length of SW26010 is four. We mark the result as the vector array 𝐥𝐝𝐦𝐎𝐔𝐓_D^V with size K × N/4. * The ldde instruction loads one element of 𝐥𝐝𝐦𝐅𝐋𝐓_D at a time and performs vector expansion. We mark the result as the vector array 𝐥𝐝𝐦𝐅𝐋𝐓_D^V with size C × K. * The vldd instruction loads four elements of 𝐥𝐝𝐦𝐈𝐍_D at a time. We mark the result as the vector array 𝐥𝐝𝐦𝐈𝐍_D^V with size C × N/4. * The vmad instruction performs C multiply-add operations over 𝐥𝐝𝐦𝐎𝐔𝐓_D^V, 𝐥𝐝𝐦𝐅𝐋𝐓_D^V, and 𝐥𝐝𝐦𝐈𝐍_D^V. * The vstd instruction stores the final values back to the original positions of 𝐥𝐝𝐦𝐎𝐔𝐓_D. Each CPE of SW26010 has 32 vector registers, including the zero register and the SP (stack pointer) register, so no more than 30 vector registers can be used freely. As shown in <Ref>, we assume that one stage of the assembly kernel is responsible for one 𝐥𝐝𝐦𝐎𝐔𝐓_D^V block of size K_r × N_r/4, which requires one 𝐥𝐝𝐦𝐅𝐋𝐓_D^V block of size C_r × K_r and one 𝐥𝐝𝐦𝐈𝐍_D^V block of size C_r × N_r/4. To guarantee the efficiency of the vectorized computation, we impose the following restrictions: (1) there is no data dependence within a stage; (2) there is no register reuse within a stage. Setting C_r = 1 satisfies restriction (1). In addition, restriction (2) gives K_r + N_r/4 + K_r × N_r/4 < 30. To maximize the computation-to-data-access ratio in <Ref>, we minimize 4/N_r + 1/K_r, which is attained at K_r = N_r/4 = 4. 2KNC / ( 4KCN/N_r + CNK/K_r + 2KN ) ≈ 2 / ( 4/N_r + 1/K_r ) §.§.§ Fine-grained instruction reordering Each CPE of SW26010 has two pipelines, P0 and P1. P0 mainly supports floating-point operations, and P1 mainly supports data transfers; both pipelines can run integer scalar operations. According to the conclusions of Section 4.4.1, we can derive the ideal allocation of vector registers. We load 𝐥𝐝𝐦𝐅𝐋𝐓_D^V into four vector registers marked 𝐅𝐋𝐓_r[0] ∼ 𝐅𝐋𝐓_r[3], load 𝐥𝐝𝐦𝐈𝐍_D^V into four vector registers marked 𝐈𝐍_r[0] ∼ 𝐈𝐍_r[3], and keep the computational results of 𝐥𝐝𝐦𝐎𝐔𝐓_D^V in 16 vector registers marked 𝐎𝐔𝐓_r[0,0] ∼ 𝐎𝐔𝐓_r[3,3]. Taking MG3MConv based on TB(1,1) as an example, the left of <Ref> shows the elementary instruction sequence of the innermost loop of the assembly kernel. The parallelism of this sequence is so low that its execution cost is up to 25 cycles. Many excellent studies <cit.> have proved the importance and effectiveness of manual instruction reordering. Therefore, we realize highly effective instruction-level parallelism by manually reordering the instruction sequence. Before entering the innermost loop, we prefetch 𝐅𝐋𝐓_r[0] ∼ 𝐅𝐋𝐓_r[3] and 𝐈𝐍_r[0] ∼ 𝐈𝐍_r[3] required by the first computation, and then rearrange the instruction sequence following two fundamental principles. Principle 1 overlaps the front part of the computation with the rear part of the data access in the current iteration. Principle 2 overlaps the rear part of the computation in the current iteration with the front part of the data access in the next iteration. As shown in the right of <Ref>, the optimized instruction sequence requires only 17 cycles, and the performance is improved by about 47.1%. A register-blocking sketch of this micro-kernel is given below.
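As a plain-Python illustration of that register blocking (K_r = 4, N_r = 16, vector length 4), the sketch below emulates the vector registers with NumPy slices: ldde becomes a scalar broadcast of one 𝐥𝐝𝐦𝐅𝐋𝐓_D element, vldd a contiguous four-element load, and vmad a fused multiply-accumulate into 16 accumulator registers. It reproduces only the arithmetic of the micro-kernel, not the instruction scheduling or the real SW26010 intrinsics; all names are ours.

```python
import numpy as np

VL = 4             # vector length of one SW26010 vector register
K_R, N_R = 4, 16   # register-blocking factors chosen in the text

def micro_kernel(flt, inn, out):
    """Register-blocked micro-kernel: out[k, n] += sum_c flt[c, k] * inn[c, n].

    flt: (C, K_R) block of ldmFLT_D, inn: (C, N_R) block of ldmIN_D,
    out: (K_R, N_R) block of ldmOUT_D. No transposition of flt is needed.
    """
    C = flt.shape[0]
    # 16 accumulator "registers" OUT_r[k][j], each holding VL lanes.
    out_r = [[out[k, j * VL:(j + 1) * VL].copy() for j in range(N_R // VL)]
             for k in range(K_R)]
    for c in range(C):
        flt_r = [np.full(VL, flt[c, k]) for k in range(K_R)]            # ldde: scalar broadcast
        in_r = [inn[c, j * VL:(j + 1) * VL] for j in range(N_R // VL)]  # vldd
        for k in range(K_R):
            for j in range(N_R // VL):
                out_r[k][j] += flt_r[k] * in_r[j]                        # vmad
    for k in range(K_R):                                                 # vstd
        for j in range(N_R // VL):
            out[k, j * VL:(j + 1) * VL] = out_r[k][j]

# Check against a plain matrix product on a toy block.
C = 8
flt = np.random.rand(C, K_R)
inn = np.random.rand(C, N_R)
out = np.zeros((K_R, N_R))
micro_kernel(flt, inn, out)
assert np.allclose(out, flt.T @ inn)
```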
Although the rearranged instruction sequence significantly improves performance, the limit of K_r=4 and N_r=16 can not be ignored. K_r=4 requires K to be a multiple of 4, and N_r=16 requires N to be a multiple of 16. If the above conditions are not satisfied, we have to pad data to run the assembly kernel correctly, which will enormously impair the performance. For example, the correct execution will cause 16.4% of extra computation and 12.5% of extra data access when K=30, N=44, and C=16. We take multiple possible cases into account based on the values of K_r and N_r to solve the problem. Given 16 cases consisting of K_r∈{ 1,2,3,4 } and N_r∈{ 4,8,12,16 }, we rearrange 16 kinds of instruction sequences. <Ref> shows the case of K_r=2 and N_r=12. Based on the above implementations, we can divide the assembly kernel into four parts: (1) k∈[ 0,K-mod( K,4 ) ) and n∈[ 0,N-mod( N,16 ) ); (2) k∈[ 0,K-mod( K,4 ) ) and n∈[ N-mod( N,16 ) ,N ); (3) k∈[ K-mod( K,4 ) ,K ) and n∈[ 0,N-mod( N,16 ) ); (4) k∈[ K-mod( K,4 ) ,K ) and n∈[ N-mod( N,16 ) ,N ). At this time, the assembly kernel can be performed without any extra cost when K=30, N=44, and C=16. For MG3MConv based on TB(1,8), we can implement its instruction-level optimization by mainly replacing vldd of 𝐥𝐝𝐦𝐈𝐍_D with vldr. Moreover, MG3MConv based on TB(8,8) mainly uses vldc and ldder instead of vldd of 𝐥𝐝𝐦𝐈𝐍_D and ldde of 𝐥𝐝𝐦𝐅𝐋𝐓_D, respectively. Because TB(1,8) and TB(8,8) will introduce additional instructions to compute the addresses of data broadcasted, we only refer to the rearranged instruction sequences of TB(1,1) and then design the rest of the 32 cases. § EXPERIMENTAL RESULTS To verify the work of this paper synthetically, we evaluate the superiority of the proposed MG3MConv algorithm from three aspects. We first evaluate the algorithm’s adaptability with different convolution scenes. Then, test the performance of several representative CNNs to verify the practicability of MG3MConv. Lastly, we demonstrate the superiority of the multi-grained mapping scheme of MG3MConv. This paper chooses the NVIDIA K80m GPU in the same period to compare the runtime performance of cuDNN. The theoretical peak performance of FP of K80m GPU is 8.74TFlops. Considering different theoretical peak performances of SW26010 and K80m GPU, we use hardware efficiency (%) in experiments instead of the general performance metric (GFlops). The computation of hardware efficiency is runtime performance/theoretical peak performance, which indicates the utilization degree of processors during the convolution execution. We can intuitively spot the superiority of convolution algorithms of different hardware platforms according to hardware efficiency. §.§ Evaluating the Adaptability Current CNNs have a variety of convolution layers, and convolution parameters change with convolution layers irregularly. Therefore, it is unnecessary to try to cover all possible convolution scenes. This paper generates four sets of experiments targeting different values of convolution parameters: (1) channel number (IC,OC), (2) batch number (B), (3) filter size (fltH,fltW), (4) padding size (padH,padW) and stride size (stdH,stdW). §.§.§ Convolution scenes with different channel numbers We generate three sets of convolutions corresponding to three channel scales: small-scale, medium-scale, and big-scale channels. 
For small-scale channel convolutions, the channel numbers are 16, 32, 48, and 64; for medium-scale channel convolutions, they are 64, 128, 192, and 256; for big-scale channel convolutions, they are 256, 512, 768, and 1024. Each set contains 16 convolution scenes based on IC and OC. <Ref> shows the hardware efficiency of MG3MConv and cuDNN for convolution scenes with various channel numbers. MG3MConv outperforms cuDNN in 97.8% of the scenes, and its average hardware efficiency is 1.77 times that of cuDNN. Comparing <Ref>, <Ref>, and <Ref>, we can draw an important conclusion: the larger the channel number, the better the performance. This is mainly because MM_unit, built on B, IC, and OC, is the core of MG3MConv. When B is fixed, larger IC and OC efficiently improve the performance of the matrix multiplications. The big-scale channel scenes achieve the best performance, where the hardware efficiency of MG3MConv reaches up to 84.78%, while that of cuDNN is only 48.36%. However, the performance of convolution declines as the channel number decreases. The average hardware efficiency of MG3MConv is 55.86% on the medium-scale channel scenes, which is 1.49 times that of cuDNN. For the small-scale channel scenes, the hardware efficiency of MG3MConv drops to 36.87% on average but is still significantly higher than that of cuDNN. §.§.§ Convolution scenes with different batch numbers B is not as unpredictable as IC and OC in CNNs, so we set B = 64, 128, and 256 as representatives. The three values of B are then matched with IC=OC in Section 5.1.1 to test convolution scenes with different batch numbers. <Ref> shows the hardware efficiency of MG3MConv for convolution scenes with the three representative batch numbers. We have two important observations from the results in <Ref>. Firstly, the larger B is, the higher the performance of MG3MConv. The average hardware efficiency is 40.39%, 59.07%, and 62.54% for the three values of B from smallest to largest, respectively. This is because a larger B yields higher DMA bandwidth and better instruction-level parallelism. Secondly, the performance gap between different values of B narrows as the channel number increases. This is because larger IC improves the instruction-level parallelism, while larger OC improves the DMA bandwidth. Therefore, B influences the performance of MG3MConv less as IC and OC increase. Increasing B is thus beneficial, but the benefit is not unlimited. §.§.§ Convolution scenes with different filter sizes Generally, the filter size is odd and no more than 11, so we select fltH=fltW= 3, 5, 7, 9, and 11. Similarly, the five values of fltH=fltW are matched with IC=OC in Section 5.1.1 to complete the experiments. <Ref> shows the hardware efficiency of MG3MConv for convolution scenes with different filter sizes. We draw two important observations from these results. Firstly, the filter size has little effect on the performance of MG3MConv when the other convolution parameters are fixed; the average performance fluctuation is only 1.65%. Secondly, the performance slightly increases for small channel numbers as the filter size grows. However, the performance remains highly stable when the channel number is more than 256. This is because the impact of data access on MG3MConv gradually decreases as the channel number increases. Larger filter sizes mean better optimization of data locality.
When MG3MConv is compute-bound, the performance is determined by B, IC, and OC and is not affected by the filter size. §.§.§ Convolution scenes with different padding sizes and stride sizes Padding and stride are the most neglected parameters in convolution, but they are important to the convergence of CNNs. Besides two common configurations, (1) padH=padW=0 and stdH=stdW=1 and (2) padH=padW=1 and stdH=stdW=1, we add two additional configurations: (3) padH=padW=0 and stdH=stdW=2 and (4) padH=padW=1 and stdH=stdW=2. Finally, we match the four configurations with IC=OC in Section 5.1.1 to run the experiments. <Ref> shows the hardware efficiency of MG3MConv for convolution scenes with various padding sizes and stride sizes. Taken together, the results in <Ref> show that the performance of MG3MConv is almost stable when only the padding and stride sizes change, and the performance fluctuation is only 0.65% on average. There are two main causes of the fluctuation: (1) a padding size greater than 0 leads to a more complex execution process of MG3MConv; (2) a larger stride size lessens the potential data locality of the algorithm. Even so, we can still consider that MG3MConv has excellent adaptability to padding and stride sizes. §.§ Evaluating the Practicability To verify the practicability of MG3MConv in the real world, we select six representative CNNs as experiment objects: AlexNet, VGG, GoogLeNet, ResNet, SqueezeNet, and YOLO. We test and record the hardware efficiency of MG3MConv on all convolution layers of the six CNNs, and then compare it with that of cuDNN. <Ref> shows the hardware efficiency of MG3MConv and cuDNN for the different CNNs. Overall, MG3MConv outperforms cuDNN on all six CNNs. Compared with cuDNN, the improvement in the hardware efficiency of MG3MConv ranges from 2.4% to 77.9%, with an average of 43.85%. As shown in <Ref>, the hardware efficiency of MG3MConv on VGG is the highest at 67.04%, a 37.21% and 96.61% improvement over cuDNN and swDNN <cit.>, respectively. In summary, <Ref> demonstrates that MG3MConv has better practicability than cuDNN and swDNN. §.§ Evaluating the Multi-grained Mapping Scheme The core idea of the MG3MConv algorithm proposed in this paper is the multi-grained mapping scheme. The scheme is directly affected by B, IC, and OC. Referring to Section 5.1, we select various convolution scenes to verify the superiority of the multi-grained mapping scheme. These convolution scenes are artificially built from the three representative batch numbers and different channel numbers ranging from 16 to 1024. <Ref> shows the best-grained mapping scheme of MG3MConv for the different convolution scenes. The X-axis indicates the value of IC, the Y-axis indicates the value of OC, and the yellow, green, and purple squares represent TB(1,1), TB(1,8), and TB(8,8), respectively. We have two important observations from the results in <Ref>. Firstly, when B is fixed, the granularity of the mapping scheme increases as IC and OC increase. Secondly, TB(1,8) and TB(8,8) tend to extend toward the upper left corner of <Ref> as B increases. This is mainly because MG3MConv assigns MM_unit as the convolution task of one TB. A large-grained mapping scheme causes a lack of workload on a single CPE for small B, IC, and OC. Conversely, for big B, IC, and OC, a small-grained mapping scheme leads to repeated data access between the main memory and the LDM.
Therefore, partitioning one CG into multiple thread blocks is beneficial when B, IC, and OC are small. We manually implement a simple convolution algorithm based solely on TB(8,8) to verify the superiority of MG3MConv. As shown in <Ref>, the coverage of TB(1,1) plus TB(1,8) is 100%, 68%, and 60% at B=64, B=128, and B=256, respectively. We can see that MG3MConv improves performance in most convolution scenes compared with the simple convolution. <Ref> shows the average hardware efficiency of the simple convolution and MG3MConv. Comparing the two, MG3MConv brings significant performance improvements of 102.24%, 44.92%, and 26.97% at B=64, B=128, and B=256, respectively. In summary, <Ref> and <Ref> demonstrate that the multi-grained mapping scheme of MG3MConv is necessary and can significantly improve the performance of convolution on SW26010. § CONCLUSIONS The current support of convolution on SW26010 is still rudimentary. There are two urgent problems: (1) enhancing adaptability to various convolution scenes; (2) providing a mature implementation of single-precision convolution. This paper presents a novel convolution algorithm, MG3MConv, to solve these problems. Based on the concept of the TB proposed in this paper, MG3MConv can perform diversified mapping schemes of convolution tasks, which significantly improves its adaptability to different convolution scenes. Experiments demonstrate that the proposed MG3MConv outperforms cuDNN and swDNN on various convolution scenes and real-world CNNs. Exploiting the features of the SW26010 architecture, we design architecture-specific optimization techniques, such as LDM utilization and register communication. Generally speaking, some of the optimization techniques, such as thread blocking, vectorization, and instruction reordering, also apply to other many-core processors, such as the Intel Xeon/Xeon Phi and NVIDIA GPUs. In summary, our work can generalize to other application and algorithm optimization problems on SW26010 and also provides valuable references for other many-core processors. Our future work will focus on other convolution algorithms, such as Winograd-based convolution. Moreover, we expect to extend the experience of convolution algorithms on SW26010 to other many-core processor platforms. § ACKNOWLEDGMENTS This work is supported by the National Key Research and Development Program of China under Grant (2018YFB0204102). We sincerely thank the technical staff of Sunway TaihuLight for helpful discussions.
http://arxiv.org/abs/2307.04420v1
20230710085407
FedDCT: A Dynamic Cross-Tier Federated Learning Scheme in Wireless Communication Networks
[ "Peng Liu", "Youquan Xian", "Chuanjian Yao", "Xiaoyun Gan", "Lianghaojie Zhou", "Jianyong Jiang", "Dongcheng Li" ]
cs.DC
[ "cs.DC", "cs.AI" ]
Article Title]FedDCT: A Dynamic Cross-Tier Federated Learning Scheme in Wireless Communication Networks 1,2]Peng [email protected] 1,2]Youquan [email protected] 1,2]Chuanjian [email protected] 1,2]Xiaoyun [email protected] 1,2]Lianghaojie [email protected] 1,2]Jianyong [email protected] [1,2]Dongcheng [email protected] *[1] Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 54104, China [2] Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, 54104, China With the rapid proliferation of Internet of Things (IoT) devices and the growing concern for data privacy among the public, Federated Learning (FL) has gained significant attention as a privacy-preserving machine learning paradigm. FL enables the training of a global model among clients without exposing local data. However, when a federated learning system runs on wireless communication networks, limited wireless resources, heterogeneity of clients, and network transmission failures affect its performance and accuracy. In this study, we propose a novel dynamic cross-tier FL scheme, named FedDCT to increase training accuracy and performance in wireless communication networks. We utilize a tiering algorithm that dynamically divides clients into different tiers according to specific indicators and assigns specific timeout thresholds to each tier to reduce the training time required. To improve the accuracy of the model without increasing the training time, we introduce a cross-tier client selection algorithm that can effectively select the tiers and participants. Simulation experiments show that our scheme can make the model converge faster and achieve a higher accuracy in wireless communication networks. [ * Received / Accepted ======================== § INTRODUCTION With the rapid proliferation of intelligent services and applications powered by artificial intelligence (AI), the Internet of Things (IoT) is permeating various aspects of our daily lives. Traditional AI techniques rely on centralized data collection and processing, which may not be feasible in real-world scenarios due to escalating concerns about data privacy and the high scalability of modern IoT networks. In this context, Federated Learning (FL) has emerged as a distributed and collaborative AI approach that enables training on distributed IoT devices without data sharing, making numerous intelligent IoT applications attainable <cit.>. In wireless communication networks, IoT clients have different computing and communication resources, and unavoidable transmission failures, which can cause straggler problems. Although stragglers result in model drift and affect model convergence <cit.>, the straggling caused by heterogeneous clients and communication failure are different; the former is relatively stable and predictable, while the latter is unpredictable. Furthermore, local data samples of different clients are usually not independent and identically distributed (non-iid), which prolongs the training time and reduces the accuracy of the model <cit.>. To address these problems, asynchronous federated learning <cit.> is used with the expectation that clients can improve their training performance in a single round, without waiting for stragglers. However, asynchronous FLs usually require more iterations and communication overheads to train and are difficult to integrate into existing privacy protection schemes <cit.>. 
To improve the training performance, TiFL <cit.> divides clients into different tiers according to their training response time and randomly selects clients for training in a tier. However, although TiFL reduces the training time increased by heterogeneous clients, it does not consider the impact of communication failures in wireless communication networks. In this study, we propose a new dynamic cross-tier federated learning scheme named FedDCT for existing FL applications. FedDCT consists of two algorithms: a dynamic tiering algorithm and a cross-tier client selection algorithm. Specifically, the dynamic tiering algorithm is used to evaluate the training time of clients and divide them into different logical tiers, and before each training round, the cross-tier client selection algorithm is used to select the tiers and participants. FL distributes the latest global model to selected clients for training. Through the dynamic tiering algorithm, timeout thresholds are assigned for each tier to increase training performance. If the training time of a client exceeds their tier’s threshold, they will be removed from the training process and re-evaluated. However, if the training time of a client does not exceed their threshold, their average training time is updated. Our main contributions are as follows: * To reduce the delay in training time caused by stragglers, we designed a dynamic tiering algorithm that aims to divide clients dynamically into different tiers according to their training time and assign different timeout thresholds for each tier. For clients that exceed the threshold, we used an evaluation program to reduce the impact of stragglers and improve training performance. * To reduce training time and improve model accuracy, we proposed a cross-tier client selection algorithm. The algorithm uses different strategies for selecting tiers and participants to achieve a balance between training time and accuracy. * FedDCT considers both data heterogeneity and different types of stragglers in wireless communication networks. Extensive experiments showed that the FedDCT can achieve better training performance and training accuracy under different degrees of data heterogeneity, client heterogeneity, and network reliability. The remainder of this paper is organized as follows: Section <ref> summarizes related research in federated learning, Section <ref> provides a preliminary introduction to FL, and the technical details of FedDCT are presented in Section <ref>. Section <ref> summarizes the experimental results and discussion, and Section <ref> concludes the paper. § RELATED WORK In recent years, many schemes have been proposed to reduce the influence of data heterogeneity, resource heterogeneity, and stragglers in FL to improve the training performance and accuracy of wireless communication networks. To reduce the impact of data heterogeneity in training and to improve model accuracy, Wang et al. <cit.> suggested using Deep Q-Network (DQN) to select participants because they believed that the distribution of training samples was related to the model weight. Furthermore, Fraboni et al. <cit.> claimed that the current FL sampling algorithm was biased and unstable and proposed to select participants by introducing cluster sampling. Although the above schemes can effectively reduce the impact of data heterogeneity on FL, they do not consider the impact of the training time of the selected clients on the overall training time. 
To reduce the impact of resource heterogeneity in training and improve training performance, Nishio et al. <cit.> proposed the FedCS algorithm, which dynamically selects clients for training according to their resource status, enables the server to aggregate as many model updates from the clients as possible, and significantly accelerates the training speed. In addition, Abdulrahman et al. <cit.> proposed the FedMCCS algorithm, which considers the computational resources and communication capabilities of the clients, predicts whether the clients can complete the task, and maximizes the number of selected clients to improve the overall convergence speed. However, this approach does not consider the effect of data heterogeneity, and excessive participants can increase the network load <cit.>. Leng et al. <cit.> considered the channel and learning quality of clients in a wireless communication network, selected participants, and assigned subchannels to them. Zhang et al. <cit.> used reinforcement learning to select participants, to whom different local iteration epochs and radio resources are allocated. TiFL divides clients into different tiers based on their training time and randomly selects participants from a tier in each round <cit.>. Although the above methods can effectively improve the convergence speed of the model, stragglers caused by network failures and other problems in wireless communication networks still significantly increase the training time of FL. Asynchronous FL eliminates the need to wait for other clients to upload their model parameters in each training round, which can greatly improve training performance <cit.>. Xie et al. <cit.> designed an adaptive weighting algorithm according to the staleness of the model and updated the global model using a weighted average. Wang et al. <cit.> proposed a new aggregation weight that jointly considers the effect of training data size and model staleness on global model convergence. Chai et al. <cit.> proposed FedAT, an asynchronous federated learning system that enables clients in each tier to be trained simultaneously and uses gradient quantization and sparsification techniques to minimize communication. However, asynchronous FL aggravates the imbalance in training participation among clients, which may cause model drift and affect model accuracy <cit.>. As a result, current asynchronous FL methods are difficult to integrate with the available FL privacy protection schemes <cit.>. Luo et al. <cit.> proposed that the training time can be reduced while ensuring convergence by adjusting the basic variables of each training round (the number of selected nodes and the number of local iterations). Chen et al. <cit.> used the upper confidence bound (UCB) algorithm to predict the computational and communication capabilities of clients and assign different numbers of local iterations to them. Chen et al. <cit.> used a dynamic learning step to compensate clients with high data volume and poor communication status. Liu et al. <cit.> noted that the bias of the local model is initially large and decreases as training progresses; therefore, they proposed adapting the number of aggregated models to improve the convergence speed of the global model. However, none of the above schemes consider the impact of stragglers in wireless communication networks on FL training. Although many related studies have attempted to improve the training accuracy and performance of FL systems in wireless communication networks,
the impact of stragglers caused by various factors on FL training still requires further investigation. Therefore, this paper proposes the FedDCT scheme, which considers different types of stragglers as well as data heterogeneity to improve model accuracy and training performance in wireless communication networks. § PRELIMINARY INTRODUCTION ON FL FL algorithms typically involve tens of thousands of remote clients that train on their local datasets and jointly learn a global shared model under the coordination of an aggregation server <cit.>. FL is an iterative process in which the selected clients use the latest global model and their local data for training; the server then aggregates the trained models to form a new global model. Here, C represents the set of all available clients and |C| the number of available clients. The basic flow of the FL algorithm is briefly summarized in Algorithm <ref>, as follows. * The aggregation server first initializes the weights of the global model w^0 randomly. * At the beginning of each round, the aggregation server randomly selects the set of participants C_r and sends the latest global model w^r to them. * The selected clients use the global model w^r and their local data for training, and then return the trained models to the aggregation server. * The aggregation server waits for the selected clients to upload their models w_c^r+1 and aggregates the trained models to form a new global model w^r+1. Steps 2–4 are repeated until either a predetermined number of training rounds has been completed or the model converges to the required accuracy. § FEDDCT: DYNAMIC CROSS TIER FEDERATED LEARNING In this section, we introduce the framework and algorithm flow of FedDCT. First, we improve training performance using the dynamic tiering algorithm. Second, we use the cross-tier client selection algorithm to involve more clients in training, improving accuracy without increasing the training time. Table <ref> summarizes the main symbols used in this study. §.§ System Overview FedDCT consists of two main components: 1) the dynamic tiering algorithm, which evaluates the training time of each client and divides the clients into different tiers accordingly; 2) the client selection and tier timeout threshold algorithm, which selects a tier according to the accuracy change of the global model, selects the participating clients according to the training information of the clients in that tier, and assigns a specific timeout threshold to each tier. We illustrate the training process of FedDCT in Fig. <ref> and explain its specific implementation in Algorithm <ref>. First, the dynamic tiering algorithm evaluates the training time of all participants and divides them into M tiers according to the training time of each client: {tier_1,...,tier_M}, with tier_1 being the fastest tier and tier_M the slowest. Before each training round, FedDCT selects the participants for the round based on the difference in model accuracy and the successful rounds of clients, and distributes the latest global model w^i to them. The selected clients use the global model and their local data to train the model and return it. The server aggregates the successfully uploaded models to form a new global model and updates the most recent client training times. A simplified sketch of this round structure is given below.
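The following Python sketch mirrors that round structure under simplifying assumptions: local training and aggregation are stand-ins (a FedAvg-style mean over NumPy vectors), and the function and variable names are ours rather than the paper's implementation. It shows how the chosen tiers, the per-tier timeout, the straggler check, and the updates of the average training time at[i] and round counter ct[i] fit together.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- placeholders standing in for real FL components (illustrative only) ---
def local_training(client, global_model):
    """Return (model update, wall-clock training time) for one client."""
    update = global_model + rng.normal(0.0, 0.01, size=global_model.shape)
    return update, rng.uniform(1.0, 6.0)

def aggregate(updates, fallback):
    """FedAvg-style mean of the successfully uploaded models."""
    return np.mean(updates, axis=0) if updates else fallback

# --- one FedDCT round (schematic) ---
def feddct_round(global_model, tiers, t_sel, at, ct, tau=2, beta=1.5, omega=10.0):
    updates = []
    for t in range(t_sel + 1):                                # tiers 1..t all join the round
        d_max = min(beta * np.mean([at[c] for c in tiers[t]]), omega)  # per-tier timeout
        chosen = sorted(tiers[t], key=lambda c: ct[c])[:tau]  # fewer successful rounds -> preferred
        for c in chosen:
            update, t_train = local_training(c, global_model)
            if t_train <= d_max:                              # within the tier threshold
                updates.append(update)
                at[c] = (at[c] * ct[c] + t_train) / (ct[c] + 1)
                ct[c] += 1
            # else: straggler, dropped this round and re-evaluated later
    return aggregate(updates, global_model)

# Toy usage: 6 clients in 2 tiers, one round with tiers 1 and 2 selected.
tiers = [[0, 1, 2], [3, 4, 5]]
at = {c: rng.uniform(1.0, 6.0) for c in range(6)}
ct = {c: 0 for c in range(6)}
w = feddct_round(np.zeros(4), tiers, t_sel=1, at=at, ct=ct)
```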
For example, in round 1, the server selects clients in tier_1 and tier_2 to participate in the training process. All clients selected from tier_1 complete training and upload their models within the timeout threshold D_Max^1,1. Some of the clients in tier_2, who are considered stragglers (highlighted in red in Fig. <ref>), cannot complete the task within the timeout threshold D_Max^1,2, so the server does not wait for them. In addition, the dynamic tiering algorithm re-evaluates the training time of the stragglers in each training round. §.§ Dynamic Tiering Algorithm The resources of clients in wireless communication networks are usually heterogeneous, which increases the differences in training time between clients and affects the training efficiency. An effective solution is to select clients with similar training times to participate in the training process. First, the server performs κ rounds of pre-training and divides the clients into M tiers based on their average training time at[i]. Clients whose average training time exceeds the threshold Ω are considered stragglers and are thus not allowed to participate in subsequent rounds <cit.>. As shown in Fig. <ref>, the training time of clients from tier_1 to tier_M increases sequentially, while clients in the same tier have a similar training time. at[i] = { at[i], if at[i] < Ω; dropout, if at[i] ≥ Ω } Although this algorithm can effectively reduce the training time caused by the difference in client resources, such a fixed-tiering algorithm does not adapt to dynamic changes in the wireless communication network. Owing to problems such as network failure, there may be a large number of stragglers. On the one hand, if these clients are discarded directly, the model will have low accuracy. On the other hand, if they are selected to participate in the training, they will increase the training time in a single round. To reduce the training time in a wireless communication network, we have designed a dynamic tiering algorithm. As shown in Algorithm <ref>, we first conduct κ rounds of training in the initial stage and evaluate the training time of clients to divide them into different tiers. In the subsequent rounds, we update at using the real training time of clients t_train. For stragglers joining subsequent rounds, our scheme does not wait for them in the current round and places them in a parallel evaluation program, which allows stragglers to update their average training time by completing training tasks whose results are not aggregated. at[i] = (at[i] × ct[i] + t_train) / (ct[i] + 1) Clients exceeding the tier timeout threshold D_max^t (the red part of Fig. <ref>) are not selected in subsequent rounds until κ rounds of evaluations are completed. After κ rounds of evaluations are completed properly, their new average training time is (∑_i=1^κ t_train) / κ. Finally, they are re-tiered according to their updated average training time, and re-participate in the following rounds.
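A minimal sketch of this bookkeeping is given below. It assumes an equal-size split of the surviving clients by sorted average training time, since the exact partitioning rule is not spelled out here; the arrays at and ct follow the notation above, and omega stands for Ω.

import numpy as np

def update_average_time(at, ct, i, t_train):
    # running average of client i's observed training time:
    # at[i] = (at[i] * ct[i] + t_train) / (ct[i] + 1)
    at[i] = (at[i] * ct[i] + t_train) / (ct[i] + 1)
    ct[i] += 1

def assign_tiers(at, num_tiers, omega):
    # clients whose average time exceeds Omega are dropped as stragglers
    active = [i for i, t in enumerate(at) if t < omega]
    # sort the remaining clients by average training time and split them into M tiers
    order = sorted(active, key=lambda i: at[i])
    return [list(part) for part in np.array_split(order, num_tiers)]  # index 0 = fastest tier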
§.§ Cross Tier Client Selection Algorithm Based on the above analysis, we propose a cross-tier client selection algorithm. If we choose clients in a tier with a short training time to participate in the training, we can reduce the single-round training time of FL. However, selecting only the clients in a tier with a short training time leads to insufficient training of the clients in the other tiers and consequent model drift <cit.>. In this regard, we thoroughly examine how client selection affects the overall accuracy and training time, and address the problem of reducing the training time while maintaining model convergence performance. For example, if the clients in tier 1 can help the global model converge faster, a specific part of them are selected rather than those in slower tiers, to reduce the training time. To evaluate the performance of the system, we use the variation in accuracy υ_i produced by the newly aggregated model as the criterion. If the current accuracy υ_i is higher than the accuracy υ_i-1 from the last round, this means that the clients in the t^th tier currently used can still improve the global model accuracy. Therefore, it is sufficient to select only the clients in the (t-1)^th tier for the next round. However, if υ_i is lower than υ_i-1, the data in the current tier may not be able to help the global model effectively converge, and more data would have to be added to help the global model converge. Therefore, clients in the (t+1)^th tier should be selected in the next round. t = { min(t + 1, M), if υ_i < υ_i-1; max(t - 1, 1), if υ_i ≥ υ_i-1 } In wireless communication networks, stragglers may accumulate different numbers of successful training rounds, and non-stragglers are selected more frequently than stragglers, leading to global model drift. To solve this problem, we design a weighted client selection algorithm for tiers. Clients with fewer successful rounds have a higher probability of being selected, which can accelerate the model convergence. Therefore, we assign a selection weight probs to each participant client in tier t based on its number of successful rounds ct. Finally, the τ clients with the lowest probs in the tier are selected as the participant clients C_r in the r^th round. probs[i] = ct[i] / ∑_j ∈ ts[t] ct[j] FedDCT selects a tier at each training round, and selects τ clients from that tier as participants C_r in the current round. The final training time D^t is affected by the actual training time D_train^i of the clients i ∈ C_r, the timeout threshold of the current tier D_max^t, and the maximum timeout threshold Ω. D^t = min(max(D_train^1,...,D_train^τ), D_max^t, Ω) However, selecting clients only from tier t is inefficient: while waiting for the clients in tier t to complete training and upload their models, the faster tiers remain idle for a significant part of the round. To improve the training performance, we select the clients not only from tier t, but also from tiers {1...t-1} to participate in the current round. The eventual training time D of this round depends on the longest training time among tiers {1...t}. D = max(D^1,...,D^t) In wireless communication networks, the actual training time of clients may exceed their expectations, as shown in Fig. <ref>. If a client in the first tier fails to complete the training and upload within the estimated training time because of network delays or other reasons, it may delay the upload actions of clients in the subsequent tiers and thereby prolong the training time. Therefore, we take advantage of the tiering feature to set separate timeout thresholds D^t_max for each client tier. First, we set the timeout tolerance to β and use the average training time of each tier, ∑_i ∈ ts[t] at[i] / |ts[t]|, multiplied by β as the timeout threshold D^t_max of each tier. We also set a maximum timeout threshold Ω to prevent D^t_max from becoming too large. Meanwhile, we allow clients to upload the model within a tolerable time (green part in Fig. <ref>). D^t_max restricts each tier of clients to uploading models within a certain time interval, thus alleviating interference between clients. Because the channel bandwidth of a wireless communication network is limited, numerous simultaneous upload behaviors can lead to network congestion. Therefore, we introduce D^t_max to provide a time guarantee for the cross-tier client selection algorithm, which ensures that clients in the t^th tier cannot excessively interfere with the normal activities of clients in the (t+1)^th tier. D^t_max = min( (∑_i ∈ ts[t] at[i] / |ts[t]|) × β, Ω )
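The selection and timeout rules above translate directly into a few small functions. The sketch below is illustrative only; tiers, at and ct follow the notation above, and beta and omega stand for β and Ω.

def next_tier(t, acc_curr, acc_prev, num_tiers):
    # move to a slower tier (more data) when accuracy stops improving,
    # otherwise step back toward the faster tiers
    return min(t + 1, num_tiers) if acc_curr < acc_prev else max(t - 1, 1)

def tier_timeout(tier_clients, at, beta, omega):
    # per-tier timeout: beta times the tier's average training time, capped at Omega
    mean_time = sum(at[i] for i in tier_clients) / len(tier_clients)
    return min(beta * mean_time, omega)

def select_from_tier(tier_clients, ct, tau):
    # prefer the clients with the fewest successful rounds to keep participation balanced
    total = max(sum(ct[i] for i in tier_clients), 1)
    probs = {i: ct[i] / total for i in tier_clients}
    return sorted(tier_clients, key=lambda i: probs[i])[:tau]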
§ EXPERIMENTAL EVALUATION §.§ Experimental Design We use PyTorch to implement FedDCT and the other FL baseline methods, referring to the implementation methods in FedLab <cit.>. We use a high-performance server with 2 x Intel(R) Xeon(R) Gold 6230 CPUs, 128GB memory, and 2 x NVIDIA Tesla V100 FHHL graphics cards to simulate an aggregation server and 50 clients. In this experiment, we used three common datasets for verification. * MNIST <cit.> is a classic experimental dataset in the field of image classification, which consists of 60,000 training images and 10,000 test images. Each image is a 28×28 grayscale image, and the dataset contains 10 categories. * The CIFAR-10 <cit.> dataset consists of 60,000 32×32 color images from 10 categories, with 6,000 images per category. It consists of 50,000 training images and 10,000 test images. * Fashion-MNIST <cit.> contains 10 classes of images; the dataset consists of 60,000 training images and 10,000 test images; each example is a 28×28 grayscale image. In the experiment, two classical neural network models, a CNN and ResNet8, were used. We trained the CNN with the MNIST and Fashion-MNIST datasets. For MNIST, the network architecture includes two convolutional layers with 32 and 64 filters, respectively, followed by 2×2 max pooling and two fully connected layers with 512 and 10 units. For Fashion-MNIST, the network architecture includes two convolutional layers with 32 and 64 filters, respectively, followed by 2×2 max pooling, a flattening layer, and two fully connected layers with 128 and 10 units. We trained ResNet8 on CIFAR-10 with the same network structure as in <cit.>. We compared FedDCT with three synchronous and asynchronous FL methods: * FedAvg: The baseline synchronous FL method proposed by McMahan et al. <cit.>. In each round, a certain proportion of the total clients is randomly selected for training, and the server averages the weights received from the selected clients to form a new global model. * FedAsync: Xie et al. <cit.> proposed an asynchronous FL method with weighted aggregation that trains all clients simultaneously. When the server receives the model from any client, the model is weighted and averaged with the current global model to obtain the latest global model. * TiFL <cit.>: Based on the training delay, the clients are divided into different tiers. In each round, one tier is selected by an adaptive selection algorithm based on the test accuracy across all tiers, and some clients are chosen at random for training. The aggregation method used for TiFL is FedAvg. The learning rate was set to 0.001. For each method, we use the following configuration for training: local epoch E = 1, batch size = 10, M = 5, τ = 5, β = 1.2, κ = 1, Ω = 30s; we used the same parameters for the other FL schemes. Other schemes choose five clients to participate in training by default in each round, but for FedDCT, the number of clients selected in each round changes with the selected tier.
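For reference, the MNIST network described above can be written in PyTorch roughly as follows. This is a sketch consistent with the stated layer widths only; the kernel size, padding, and the exact placement of the pooling layers are not specified in the text and are assumptions here.

import torch.nn as nn

class MnistCNN(nn.Module):
    # two convolutional layers with 32 and 64 filters, 2x2 max pooling,
    # and two fully connected layers with 512 and 10 units
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

The Fashion-MNIST variant follows the same pattern, with 128 hidden units in the first fully connected layer.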
Clients in FL are often edge devices such as smartphones and IoT devices and possess varying computing and communication capabilities. First, we divide all clients into M parts and then assign them random training delays satisfying Gaussian distributions with a variance of 2 and expectations of 5, 10, 15, 20, and 25 s, respectively, to simulate the training time difference caused by different client resources. In a wireless communication network, clients vary not only in terms of resources; any client may also drop out because of communication failures, client failures, etc. We randomly add a 30-60 s training delay to simulate the occurrence of various failures and use μ to control the probability of their appearance in the training process. To investigate the training effect under different data distributions, we assign a master class to each client at random, with #% of the data in the client belonging to this category and the remaining data belonging to the other categories. §.§ Experimental Results Table <ref> shows the best average accuracy across all datasets and the time required to achieve the preset accuracy. The experimental results show that FedDCT improved the accuracy by 1.91% and reduced the time cost by 56.3% over the best baseline for CIFAR-10 when #=0.5. FedDCT achieves higher training accuracy in all experiments under the same experimental configuration and significantly reduces the time cost of training. This is because of the following factors: 1) Dynamic tiering can more effectively adapt to dynamic environments, improve the client tier division, and cut down the time required for training. 2) The cross-tier client selection algorithm selects more clients to participate in the training process to increase the precision of the model without increasing the training time. When μ > 0, TiFL could not achieve satisfactory training accuracy and training time because it mistakenly classified clients and abandoned more of them during the initial stage. Fig. <ref> shows that TiFL performs best in a steady environment without any unpredicted stragglers. Fig. <ref> illustrates how FedDCT improves the training effect across several datasets in heterogeneous environments, including the ultimate training accuracy and training time. Furthermore, FedDCT can reach the target accuracy faster than FedAvg because its selection algorithm caps the selection frequency of the slowest tier. Because FedAvg is entirely random, it has more chances to select the slowest tier, which results in a longer training time. In addition, we discovered that FedDCT typically has a higher ultimate training accuracy than TiFL, as TiFL abandons stragglers who unintentionally drop out during training, which can reduce the training accuracy. As shown in Fig. <ref>, to study the influence of data heterogeneity on FL training, the CIFAR-10 dataset is partitioned with different degrees of non-iid data for training. For μ = 0.1, we set # equal to iid, 0.3, and 0.7, respectively. The results match our expectations, and the proposed scheme achieves good results under different data distributions. Taking iid as an example, our scheme converges to an accuracy of 0.7 in only 685.69 s. Our scheme converges faster, and its final training accuracy is higher than that of the other baseline schemes. FedDCT may cause the training accuracy to fluctuate because it balances training accuracy and time; however, these fluctuations are short-lived. By comparing the three groups of graphs in Fig. <ref>, it can be observed that in Fig.
<ref> - <ref>, the training accuracy decreases with an increase in the heterogeneity of the data distributions. Meanwhile, the training time of FedDCT and TiFL is affected not only by the data heterogeneity but also by stragglers, as shown in Fig. <ref> - <ref>. FedDCT and TiFL tend to select faster tiers for training; if stragglers occur more frequently in the faster tiers, the impact on the training time is greater. As shown in Fig. <ref>, to study the influence of various failures in wireless communication networks on FL training, we experiment with CIFAR-10. In the case of # = 0.5, we set μ to 0, 0.2, and 0.4 to test the performance of each scheme. As μ increased, the number of stragglers in the training process increased, and the training time of FL increased accordingly. At the same time, we also find that μ has relatively little influence on FedDCT, because the dynamic tiering algorithm and cross-tier client selection of FedDCT can greatly decrease the impact of stragglers on FL. As shown in Fig. <ref>, to study the performance of FL training in a more complex network environment, we increase the difference in resources between clients by enlarging the expectations of the Gaussian distributions of the clients' training delays to 1, 3, 10, 30, and 100 s. We evaluated the various FL methods using the Fashion-MNIST dataset. In this more complicated environment, the dynamic tiering scheme has a greater convergence benefit than the other baseline schemes. The successful training rounds of clients differ more in a complex network environment, but our scheme limits the training time, which significantly speeds up convergence. As depicted in Fig. <ref>, we train in a stable network (without the varied failures) and disable the dynamic tiering technique to confirm the effectiveness of our cross-tier client selection algorithm. Our cross-tier client selection algorithm achieved good results for different datasets because our scheme can use more client data for training at the same time cost. Meanwhile, in conjunction with Fig. <ref> - <ref>, we can see that FedAsync typically lags behind the other schemes in training accuracy and time, as model staleness makes it difficult for the model to converge. Additionally, the convergence of FedAsync and FedAvg is subpar compared to the other schemes because they fail to effectively utilize existing information, such as the training time. Finally, to explore why our cross-tier client selection algorithm converges faster, we recorded the tier selected by FedDCT during the training process, averaged it every 10 rounds, and fitted it with a linear regression model. As shown in Fig. <ref>, the selected tier tends to increase with the training rounds, with more fluctuations in the middle. This is consistent with the expectations of the proposed design. FedDCT first uses the clients in the tier with a short training time for training until it is difficult to improve the accuracy of the global model, and then uses the clients in the tiers with longer training times. In the middle of the training, tier selection can be temporarily caught in the tradeoff between time and accuracy, causing it to fluctuate. § CONCLUSION In this study, we proposed a novel dynamic cross-tier federated learning scheme, FedDCT, to reduce the adverse impact of wireless communication networks on FL training.
FedDCT adopts dynamic tiering to reduce the waiting time caused by heterogeneous resources and varied failures during training, thereby improving training performance. Additionally, we designed a cross-tier client selection algorithm to effectively select participants based on their training information, improving training accuracy and performance. Finally, we verified the influence of various factors in wireless communication networks on FL training, such as data heterogeneity, network failures, and resource heterogeneity. Experiments showed that our scheme is superior to traditional FL schemes in various heterogeneous scenarios, in terms of both training accuracy and training performance. §.§ Acknowledgements The research was supported in part by the National Natural Science Foundation of China (Nos. 62166004, U21A20474), the Guangxi Science and Technology Major Project (No. AA22068070), the Basic Ability Enhancement Program for Young and Middle-aged Teachers of Guangxi (No. 2022KY0057), the Key Lab of Education Blockchain and Intelligent Technology, the Center for Applied Mathematics of Guangxi, the Guangxi "Bagui Scholar" Teams for Innovation and Research Project, the Guangxi Talent Highland Project of Big Data Intelligence and Application, and the Guangxi Collaborative Center of Multisource Information Integration and Intelligent Processing.
http://arxiv.org/abs/2307.04120v1
20230709082441
Toward a stellar population catalog in the Kilo Degree Survey: the impact of stellar recipes on stellar masses and star formation rates
[ "Linghua Xie", "Nicola R. Napolitano", "Xiaotong Guo", "Crescenzo Tortora", "Haicheng Feng", "Antonios Katsianis", "Rui Li", "Sirui Wu", "Mario Radovich", "Leslie K. Hunt", "Yang Wang", "Lin Tang", "Baitian Tang", "Zhiqi Huang" ]
astro-ph.GA
[ "astro-ph.GA" ]
Linghua Xie^1,2, Nicola R. Napolitano^1,2 ([email protected]), Xiaotong Guo^3 ([email protected]), Crescenzo Tortora^4, Haicheng Feng^5, Antonios Katsianis^1, Rui Li^6,7, Sirui Wu^1,2, Mario Radovich^8, Leslie K. Hunt^9, Yang Wang^2,10, Lin Tang^2,11, Baitian Tang^1, Zhiqi Huang^1,2 [1]School of Physics and Astronomy, Sun Yat-sen University, Zhuhai Campus, 2 Daxue Road, Xiangzhou District, Zhuhai, P. R. China; [2]CSST Science Center for Guangdong-Hong Kong-Macau Great Bay Area, Zhuhai, China, 519082 [3]Institute of Astronomy and Astrophysics, Anqing Normal University, Anqing, Anhui 246133, China [4]INAF – Osservatorio Astronomico di Capodimonte, Salita Moiariello 16, 80131 - Napoli, Italy; [5]Yunnan Observatories, Chinese Academy of Sciences, Kunming, 650011, Yunnan, People's Republic of China [6]School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China; [7]National Astronomical Observatories, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing 100012, China [8]INAF - Osservatorio Astronomico di Padova, via dell'Osservatorio 5, 35122 Padova, Italy [9]INAF - Osservatorio Astronomico di Arcetri, Largo Enrico Fermi 5, 50125, Firenze, Italy [10]Peng Cheng Laboratory, No.2, Xingke 1st Street, Shenzhen, 518000, P. R. China [11]School of Physics and Astronomy, China West Normal University, ShiDa Road 1, 637002, Nanchong, China The Kilo Degree Survey (KiDS) is currently the only sky survey providing optical (ugri) plus near-infrared (NIR, ZYJHK_s) seeing matched photometry over an area larger than 1000 deg^2. This is obtained by incorporating the NIR data from the VISTA Kilo Degree Infrared Galaxy (VIKING) survey, covering the same KiDS footprint. As such, the KiDS multi-wavelength photometry represents a unique dataset to test the ability of stellar population models to return robust photometric stellar mass (M_*) and star-formation rate (SFR) estimates. Here we use a spectroscopic sample of galaxies for which we possess u g r i Z Y J H K_s “gaussianized” magnitudes from KiDS data release 4. We fit the spectral energy distribution from the 9-band photometry using: 1) three different popular libraries of stellar population templates, 2) single burst, simple and delayed exponential star-formation history models, and 3) a wide range of priors on age and metallicity. As template fitting codes we use two popular software packages: LePhare and CIGALE. We investigate the variance of the stellar masses and the star-formation rates from the different combinations of templates, star formation recipes and codes to assess the stability of these estimates and define some “robust” median quantities to be included in the upcoming KiDS data releases. As a science validation test, we derive the mass function, the star formation rate function, and the SFR-M_* relation for a low-redshift (z<0.5) sample of galaxies, which result in excellent agreement with previous literature data. The final catalog, containing ∼290 000 galaxies with redshift 0.01<z<0.9, is made publicly available.
98.62.Lv,98.62.Ai,98.62.Ck Toward a stellar population catalog in the Kilo Degree Survey: the impact of stellar recipes on stellar masses and star formation rates [ August 12, 2023 ======================================================================================================================================= § INTRODUCTION The spectral energy distribution (SED) of galaxies provides crucial information on the properties of their stellar populations at the different cosmic epochs. In particular, the stellar mass content and the star formation history of galaxies are of major importance to understand the mechanisms of their formation, including the impact of the environment on their properties <cit.>. For instance, the study of the stellar mass function as a function of the redshift is a crucial probe of the stellar mass assembly of galaxies <cit.>, and combined with the halo mass function of simulations, can be used as a cosmological probe, e.g. in abundance matching studies (e.g. <cit.>, <cit.>, <cit.>). Similarly, the star formation rate function can measure the growth of the stellar content of galaxies across the cosmic time (e.g. <cit.>). A relevant example of scaling relation is the star formation versus stellar mass, also known as the galaxy main sequence (<cit.>). This is crucial to understand the formation mechanisms of galaxies, in particular the relation between the star formation activity across time (<cit.>), and the gas consumption during galaxy formation (<cit.>). The measurement of the galaxy stellar masses and star formation rates mainly relies on details of stellar population analyses (<cit.>, <cit.>), and their ability to constrain the stellar mass-to-light ratios (e.g. <cit.>) and specific star formation history (e.g. <cit.>). This is a notoriously complex problem (<cit.>), due to the existence of degeneracies among some of the parameters, in particular dust, age and metallicity (e.g. <cit.>, <cit.>, <cit.>, <cit.>). Furthermore, in order to convert the stellar population parameters into “galaxy” properties, one needs to account for the galaxy intrinsic luminosity, which carries other uncertainties, e.g. galaxy distances, or redshifts. This step is generally incorporated in the stellar population codes that can model the SED using the redshift as a free parameter (e.g. <cit.>, <cit.>, <cit.>, <cit.>) or as an input from spectra or photo-z codes (e.g. <cit.>, <cit.>, <cit.>). Despite these difficulties, spectroscopical data (<cit.>, <cit.>, <cit.>, <cit.>) or multi-band photometry (e.g., <cit.>, <cit.>) have been routinely used to derive stellar masses, age, metallicity using simple stellar population (SSP, e.g. <cit.>) or more complex stellar population models with a parametrized star formation history (SFH, e.g. delayed exponential: <cit.>, log-normal: <cit.>, double power law: <cit.>, Γ: <cit.>) or non-parametric SFHs <cit.>. Optical broadband photometry alone cannot break the dust-age–metallicity degeneracies (e.g. <cit.>), while extending the wavelength range in the near-infrared (NIR) can provide additional constraints that can alleviate them (<cit.>, <cit.>). The combination of optical and NIR photometry is also effective for photometric redshifts from SED fitting techniques, which are an important ingredient in stellar population analyses. These consist of finding a model galaxy spectrum, given by a linear combination of representative stellar or galaxy templates, which best fits the observed galaxy SED (<cit.>). 
Here, the wide baseline can alleviate the degeneracy between various galaxy spectra as a function of galaxy redshifts (<cit.>). In this paper, we want to test the outcomes of different stellar population codes, namely LePhare (<cit.>) and CIGALE <cit.>, and different stellar population templates and star formation histories, using a multi-band, seeing matched catalog of galaxies collected in the fourth data release (DR4) of the Kilo Degree Survey (KiDS, <cit.>, K+19 hereafter). The catalog includes sources for which we possess 1) optical photometry in ugri bands and NIR photometry in ZYHJK_s bands from the VISTA Kilo Degree Infrared Galaxy (VIKING, <cit.>), 2) spectroscopic redshifts (spec-zs, hereafter) from different surveys, and 3) deep learning photometric redshifts. It collects about 290 000 sources, a subsample of which has already been used in KiDS to calibrate photometric redshifts (e.g., <cit.>). The advantage of spectroscopic redshifts is that they alleviate the degeneracies between colors and redshifts, which further impact the accuracy of the stellar parameters. The addition of photometric redshifts will also allow us to assess the impact of their larger uncertainties on the same stellar parameters. In fact, the final goal of this work is to evaluate the variance of the stellar population quantities from different SED fitting recipes, popular stellar population templates, as well as the uncertainties on redshifts. We will determine what are the most stable parameters and define robust quantities suitable for science applications. This is a first step to define a strategy to produce a robust stellar population catalog for the upcoming KiDS data release 5 (KiDS-DR5, Wright et al. 2023). The main parameters we are interested in are the stellar mass and the star formation rate, but we will also provide the catalog of ages and metallicities of the galaxy stellar populations from a large set of priors. Since for this spectroscopic sample we also possess very accurate morphotometric redshifts from deep learning (i.e. GaZNet, <cit.>), we can finally test the impact of redshifts derived from pure multi-band photometric catalogs combining optical and NIR, like the ones expected to be collected from future large sky surveys like Euclid mission (<cit.>), Vera Rubin Legacy Survey in Space and Time (VR/LSST; <cit.>), China Space Station Telescope (CSST; <cit.>). There have been previous works including stellar population analyses of KiDS galaxy catalogs, either determining stellar mass only, for weak lensing studies (<cit.>) or estimating galaxy properties, including photometric redshifts and stellar masses, for bright galaxies (i.e. r<21, <cit.>), or estimating structural parameters and stellar mass to select ultra-compact and massive galaxies (<cit.>) and for central dark matter studies (<cit.>). However, none of these has investigated the impact on the stellar masses of the combination of fitting procedure and stellar templates. A similar analysis has been provided for the CANDELS survey (<cit.>), where they used optical plus NIR photometry and tested the impact on stellar masses of different stellar population codes, stellar templates and star formation histories. As a science validation test, we will conclude our analysis by using stellar mass and star formation rate estimates to derive the stellar mass function, the star formation rate function, and the mass vs. 
star formation rate relation of the galaxies from the KiDS spectroscopic sample, using both spectroscopic and deep learning redshifts, and compare them with literature data at redshift z<1. The paper is organised as follows. In Sect. <ref> we introduce the data and the set-up of the stellar population analysis; in Sect. <ref> we present the stellar population inferences, assess their accuracy and precision using a series of statistical estimators, and provide a robust definition of the stellar mass and star formation rate estimates; in Sect. <ref> we discuss the dependence of the accuracy and scatter on galaxy properties and finally show the galaxy mass function, the star formation rate function, and the stellar mass-star formation rate relation as a science validation test; in Sect. <ref> we draw some conclusions and perspectives for future analyses. Throughout the paper, we will adopt the following cosmological parameters: Ω_m = 0.3, Ω_Λ = 0.7, H_0 = 70 km s^-1 Mpc^-1. § DATA AND METHODS The spectroscopic sample which we use in this paper consists of 9-band photometry from the 1000 deg^2 area of KiDS data release 4 (KiDS-DR4 hereafter, see K+19), plus spectroscopic redshifts collected from the Galaxy And Mass Assembly (GAMA, <cit.>) survey, and the Sloan Digital Sky Survey/Baryon Oscillation Spectroscopic Survey <cit.>, overlapping with the KiDS footprint. We also add further machine learning redshifts from the GaZNet convolutional network presented in <cit.>, as these have been demonstrated to provide very accurate redshifts up to z∼ 3, for galaxy samples with magnitude r ≲ 22.5. In the following we describe in more detail the content of the dataset and the different stellar population model set-ups used to analyze them. §.§ Photometry and spectroscopic redshifts The photometric data of the spectroscopic sample are collected from the KiDS and the VIKING surveys. These are two sister surveys covering a total area of 1350 deg^2 of the sky, in the ugri and ZYJHK_s bands, respectively. The KiDS survey has been carried out at the VST/Omegacam telescope in Cerro Paranal (<cit.>; <cit.>). It has been optimized for weak lensing in the r-band, which provides the best seeing imaging (average FWHM∼0.7”), and a mean limiting AB magnitude (5σ in a 2” aperture) of 25.02±0.13. The other bands have been observed with poorer seeing and reached mean limiting AB magnitudes of 24.23±0.12, 25.12±0.14, 23.68±0.27 for u, g and i, respectively (see K+19). VIKING has been carried out at the VISTA/VIRCAM (<cit.>) and complemented the KiDS observations with five NIR bands (Z, Y, J, H and Ks). The median value of the seeing is ∼ 0.9” (<cit.>), and the AB magnitude depths are 23.1, 22.3, 22.1, 21.5 and 21.2 in the five bands (<cit.>), respectively. The 9-band fluxes have been measured via the Gaussian Aperture and PSF (GAaP) photometry method (<cit.>), which gives colours that are corrected for PSF differences. Hence, GAaP photometry naturally provides seeing matched fluxes for each source in the catalog, by definition. However, sources more extended than the aperture function result in underestimated total fluxes. In order to correct this systematic effect, a total aperture correction needs to be applied to derive the “total” galaxy properties (see Sect. <ref>). As discussed in K+19, the GAaP photometry is corrected for Galactic extinction using the <cit.> maps with the <cit.> coefficients. As a spectroscopic database, we have collected redshifts from: 1) GAMA data release 4 (<cit.>), and 2) SDSS data release 17 (<cit.>, SDSS-DR17 hereafter).
Previous compilations of spectroscopic data overlapping with the KiDS area did not include SDSS-DR17, but included other high redshift datasets (see e.g. <cit.> and reference therein). However, the statistics of galaxies matching the KiDS-DR4 catalog at redshift larger than z∼1 are rather sparse. On the other hand, for the analysis we are interested to perform in this paper, SDSS-DR17 and GAMA provide a quite abundant sample of galaxies at z≲1. In particular, GAMA is the most complete sample, reaching ∼95.5% completeness for r-band magnitude r<19.8 (<cit.>). To match the redshift distributions of the two catalogs, we exclude sources at z>0.9, where the overall catalog drops to a constant number of a few tens of galaxies per redshift bin, mainly from SDSS-DR17. We also notice that a large portion of sources at z<0.005 are classified as “stars” from their parent surveys. Hence, to avoid the contamination from other misclassified stars, we decide to use a conservative cut and select only sources with z>0.01. Equally, we exclude all sources classified as Quasars (QSO), as their SED might be dominated by the nuclei emission, rather than the stellar population light. These criteria together produce a final catalog of 242678 GAMA and 77859 SDSS-DR17 galaxies, which includes 31728 repeated sources. For these duplicates, we adopt the SDSS-DR17 redshifts, which have errors, finally ending up with a total of 288 809 objects. In the following, we consider these sources to be “galaxies”, although we might still expect some minor contamination from unclassified QSO (or active galactic nuclei, AGN). The distributions of the redshift and the r-band Kron-like magnitude, MAG_AUTO (r-mag for short), obtained by SExtractor <cit.> for these galaxies are finally reported in Fig. <ref>, where we have broken the sample in the two original spectroscopic surveys, for clarity. From the r-mag distribution we can see the different completeness magnitude of the two samples, with SDSS-DR17 showing a peak at r∼17.8 and GAMA at r∼19.8. The sample (in)completeness is not expected to impact the main goal of our analysis, which is to study the response of the 9-band optical+NIR photometry to the different stellar population recipes, however we will need to consider this when the stellar parameters are used for the science validation test (see Sect. <ref>). §.§ Statistical estimators Here we introduce some statistical estimators we will use throughout the paper: 1) the relative bias, 2) the normalized median absolute deviation, and 3) the outlier fraction. 1) The relative bias is defined as Δ p = p_i-r_i, where p_i and r_i are the estimated (log) parameter and the corresponding reference value for any galaxy i of the sample. In the case of redshifts, this becomes μ = (p_i-z_i)/(1+z_i), where p_i are the predicted photometric redshifts and z_i are the spectroscopic redshifts (see <cit.>). 2) The normalized median absolute deviation (NMAD) is then defined as: NMAD = 1.4826 × median(|BIAS - median(BIAS)|), where BIAS is either the Δ p or the μ defined above. This gives a measure of the overall scatter of the predicted values with respect to the 1-to-1 relation, i.e. the precision of the method. 3) Fraction of outliers. It is often useful to define the fraction of catastrophic estimates, that significantly deviate from the mean values, as a measure of the robustness of an estimator. In the case of redshifts this is defined as the fraction of discrepant estimates, with the condition |μ|>0.15 (see, e.g., <cit.>). For the stellar population parameters we decided to use a 2σ level in the log-normal distribution of the estimated values, which allows us to spot strong deviations from Gaussianity.
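For clarity, these three estimators can be computed with a few lines of numpy; the sketch below simply follows the definitions above and is not taken from the paper's pipeline.

import numpy as np

def relative_bias(pred, ref, redshift=False):
    # Δp = p_i - r_i for (log) parameters; μ = (p_i - z_i) / (1 + z_i) for redshifts
    pred, ref = np.asarray(pred), np.asarray(ref)
    return (pred - ref) / (1.0 + ref) if redshift else pred - ref

def nmad(bias):
    # normalized median absolute deviation of the bias values
    bias = np.asarray(bias)
    return 1.4826 * np.median(np.abs(bias - np.median(bias)))

def outlier_fraction(bias, redshift=False):
    # |μ| > 0.15 for redshifts; beyond 2σ of the (log) parameter bias otherwise
    bias = np.asarray(bias)
    if redshift:
        return np.mean(np.abs(bias) > 0.15)
    return np.mean(np.abs(bias - np.mean(bias)) > 2.0 * np.std(bias))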
§.§ Deep Learning morphoto-metric redshifts from GaZNet As mentioned in Sect. <ref>, in this paper we want to test the robustness of the derived quantities from a full photometric sample. To do that, besides the spec-z as in Sect. <ref>, we use the morphoto-metric redshifts obtained by combining KiDS r-band images and the 9-band catalog using the Galaxy morphoto-Z Network (GaZNet, <cit.>, Li+22 hereafter). GaZNet has been previously tested on a KiDS galaxy sample (see Li+22 for details) and demonstrated to achieve very high precision in normalized median absolute deviation (NMAD=0.014 for z<1 and NMAD=0.041 for z>1 galaxies) and low outlier fraction (0.4% for lower and 1.27% for higher redshift galaxies, respectively), down to r∼22. These performances are better than the ones obtained by standard Bayesian methods in KiDS for “point” estimates (e.g. BPZ, see <cit.>) and other machine learning methods based on photometry-only data applied previously to KiDS datasets (e.g. <cit.>, <cit.>). The level of accuracy reached by the deep learning estimates is shown in Fig. <ref>, where we compare the GaZNet estimated redshifts vs. the spec-z catalog described above. In this figure we show the GaZNet estimates also for the SDSS-DR17 sample, which was not part of the deep learning training/testing in Li+22. As such, the SDSS-DR17 sample, added in this paper, represents a totally independent galaxy test sample with rather different distributions in redshift and luminosity than the original training sample (see Fig. <ref>). This gives us a more realistic sense of the scatter we can expect from the full photometric samples from KiDS, covering similar redshift/magnitude ranges. For the predictions in Fig. <ref>, we obtain a relative bias μ=0.005, a NMAD=0.017 and an outlier fraction of 0.4%, which are perfectly in line with the results found in <cit.>, hence confirming the very good performance of the deep learning morphoto-z provided by the GaZNet. We just notice a tail of outliers at z≲0.05, which are overestimated by the GaZNet and that might yet produce some systematics in the stellar population parameters. §.§ LePhare stellar population: set-up and templates LePhare (<cit.>) is a template-fitting code which performs a simple χ^2 minimization between the stellar population synthesis (SPS) theoretical models and data, in a standard cosmology (see <ref>). In our analysis we adopt a <cit.> Initial Mass Function[In LePhare, there is no real option to set the IMF, but this is implemented in the stellar libraries. For the <cit.> libraries the IMF closest to Chabrier is the <cit.> IMF. To account for this IMF difference we will simply adopt the standard -0.05 dex correction to transform Kroupa-based into Chabrier-based masses. ] (IMF) and the <cit.> dust-extinction law. We also include the contribution of nebular emission, e.g. from low-mass starforming galaxies (see Sect. <ref>): LePhare uses a simple recipe based on the Kennicutt relations <cit.> between the SFR and the UV luminosity, Hα and [OII] lines. Regarding the stellar templates, we test three different libraries: 1) the standard <cit.>, 2) the <cit.> and 3) the <cit.> stellar population synthesis (SPS) models. We have also adopted three different models for the star formation history (SFH), ψ(t): 1) a single burst (SB, hereafter), i.e.
ψ(t)=δ(t_0), where t_0 is the age of the galaxy, 2) the exponentially declining law (ExD, hereafter), ψ(t)∝ exp(-t/τ), and finally 3) a combination of both (SB+ExD), which is directly allowed by, e.g., the M05 stellar libraries. We remark here that the choice of the exponentially declining SFH is due to the limited choice offered by LePhare, even though the ExD is flexible enough to embrace a variety of realistic SFHs. CIGALE (see below) will give us the chance to make a different choice, although a more general approach with a larger variety of SFHs will be considered in future analyses. The full LePhare set-up is summarized in Table <ref>. As anticipated in Sect. <ref>, we use the redshift, both spec-z and morphoto-z, as input in LePhare. The stellar population parameters we use to perform the best fit to the GAaP 9-band magnitudes, described in Sect. <ref>, are: age, metallicity, and star formation parameters (either δ(t_0) or τ), which are assumed to vary as in Table <ref>. Consistently with previous literature (e.g. <cit.>, <cit.>), we use the best-fit parameters as a reference for this analysis. §.§ CIGALE stellar population: set-up and templates We also adopt the Code Investigating GALaxy Emission (CIGALE, <cit.>, v2020.0), which can construct the FUV to radio SEDs of galaxies and provide star formation rate, attenuation, dust luminosity, stellar mass, and many other physical quantities, using composite stellar populations from simple stellar populations combined with highly flexible star formation histories. For our analysis, we make use of the BC03 and M05 stellar templates. Differently from LePhare, CIGALE does not have a pure ExD law among the SFH choices, hence we decide to adopt a delayed exponential law (DelEx, hereafter), ψ(t)∝ (t/τ^2) exp(-t/τ), which is smoother than the exponentially declining SFH from LePhare. Consistently with LePhare, we have adopted a <cit.> Initial Mass Function (IMF) and the <cit.> dust-extinction law, and we consider the inclusion or not of the nebular continuum and emission lines for the BC03 templates only. In CIGALE the nebular templates adopted are based on <cit.>. The full set-up parameters, including the range of the stellar parameters adopted, are summarized in Table <ref>. As for LePhare, we use the best-fit parameters from CIGALE in the following analysis. § RESULTS In this section, we discuss the outcome of the different models summarized in Table <ref>. These have, in some cases, very strong differences in the recipe of the star formation history (SFH), as we have adopted a single burst as well as exponentially declining and delayed exponential SFHs, with a wide range of τ (see Sects. <ref> and <ref>, and Table <ref>). This choice is made to explore the impact of different SFHs on the stellar masses and SFR estimates. The SFH models above have been effectively used to reproduce the properties of local galaxies <cit.> and the cosmic SFR density and stellar mass density at redshifts z < 2 <cit.>. As anticipated, we also include the effect of emission lines that, although they are generally important in massive galaxies at high redshift (e.g. <cit.>, <cit.>, but see also <cit.>), can also be relevant for local low-mass starforming galaxies (e.g. <cit.>). Overall, the model combinations in Table <ref> include a fair variety of libraries and SFHs, which we expect to provide realistic evidence of systematic effects.
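For reference, the two parametric SFHs adopted here (LePhare's ExD and CIGALE's DelEx) can be written as simple Python functions; this is only an illustrative sketch, with arbitrary normalization and with t and tau expressed in the same time units.

import numpy as np

def sfh_exponential(t, tau):
    # exponentially declining SFH (ExD): psi(t) proportional to exp(-t/tau)
    return np.exp(-t / tau)

def sfh_delayed_exponential(t, tau):
    # delayed exponential SFH (DelEx): psi(t) proportional to (t/tau^2) * exp(-t/tau)
    return (t / tau**2) * np.exp(-t / tau)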
Moreover, as we are preparing the methods to be applied to the full KiDS photometric dataset, we will perform the same analysis using morphoto-zs as input, which will be available down to deeper limiting magnitudes than the ones offered by the spectroscopic “galaxy” sample (e.g. down to r∼22.5 as seen in Li+22). This will allow us to evaluate the existence or not of systematics on the stellar population parameters, and the impact on the precision of the estimates, due to the usage of the more scattered photometric redshifts. Once we have collected the estimates from all the configurations in Table <ref>, we will 1) check the overall consistency among the different stellar parameters; 2) discuss the scatter of the parameters and possibly define some robust estimator for them. As mentioned in Sect. <ref>, in this first paper we concentrate on the stellar masses and the star formation rates, as the most physically meaningful parameters one can extract from large multi-band photometric samples of galaxies, to study their evolution across cosmic time. We use the estimates from the BC03 templates and the ExD star formation recipe in LePhare (LP/BC03/ExD in Table <ref>) as the reference model for mass and star formation estimates, if not otherwise specified. This is for uniformity with previous analyses in KiDS (e.g. <cit.>). To statistically assess the differences in the stellar mass and SFR estimates among the different configurations, we will use the following estimators: 1) the relative bias, 2) the normalized median absolute deviation, and 3) the outlier fraction, defined in Sect. <ref>. §.§ Stellar masses In this section we show the results for the stellar masses for the case in which we fix the redshift of the galaxies of the sample to the spectroscopic and morphoto-metric redshifts, introduced in Sect. <ref> and shown in Fig. <ref>. By stellar masses, we aim at determining the total mass in stars, while we have seen in Sect. <ref> that the seeing matched GAaP photometry adopted in KiDS does not correspond to a “total aperture”. Hence, if using these fractional fluxes, the stellar masses calculated by the stellar population codes are the mass of stars required to produce the inputted galaxy SED, resulting in an aperture bias. Therefore, in order to recover a fair estimate of the total galaxy stellar mass, the observed SED must be representative of the total light emitted from the galaxy. In order to correct this systematic effect, we opt to use the quasi-total SExtractor magnitude, MAG_AUTO, using the equation: log M_*, corr = log M_*,out + 0.4×(GAAP_r - MAG_AUTO) where M_*,out is the stellar mass estimated by the stellar population tools, GAAP_r is the r-band GAaP magnitude from the KiDS catalog, and M_*, corr is the corrected “total” mass, under the assumption of a constant mass-to-light ratio. In the following we will first show the results of the stellar population analysis using the spectroscopic redshifts, then we compare them with the results based on the morphoto-z, to estimate the impact of the larger uncertainties of the latter on the determination of galaxy distances (see Sect. <ref>). Finally, we discuss the impact of the inclusion of the nebular emissions in the models.
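In practice this aperture correction is a one-line rescaling per galaxy; the numpy sketch below illustrates it (the argument names are ours for illustration, not the catalog column names).

import numpy as np

def total_log_mass(log_mass_out, gaap_r, mag_auto_r):
    # log M_corr = log M_out + 0.4 * (GAaP_r - MAG_AUTO), i.e. the fitted mass is
    # rescaled from the GAaP aperture flux to the quasi-total r-band flux,
    # assuming a constant mass-to-light ratio
    return np.asarray(log_mass_out) + 0.4 * (np.asarray(gaap_r) - np.asarray(mag_auto_r))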
§.§.§ Using Spectroscopic redshifts We start showing the results obtained using the spectroscopic redshift as a fixed parameter in the stellar population tools. In Fig. <ref>, we compare the stellar mass estimates from LePhare and CIGALE, using different libraries and SFHs and spectroscopic redshifts. All other parameters, in Table <ref>, are kept varying in the model grid to be estimated via the SED fitting procedure. The range of masses is quite large and spans almost 6 orders of magnitude from log M_*/M_⊙∼6 to log M_*/M_⊙∼12, although stellar masses log M_*/M_⊙≲7-7.5 are compatible with globular cluster sized systems rather than galaxies. We cannot exclude the contamination from such compact stellar systems, but we decide to retain all sources in the catalog without making any mass-based selection. Nonetheless, we will keep this cautionary note on the very low mass end in mind throughout the paper. Overall, the stellar masses all align along the 1-to-1 relation with residuals (bottom panels), defined as Δlog M=log M_y-log M_x, computed in different mass bins, that are generally distributed around zero, but with the LP models systematically smaller and the CI models rather aligned to the reference model, LP/BC03/ExD. All residuals, except LP/M05/SB, are consistent with zero within the 1σ scatter, defined as the standard deviation of the Δlog M, σ(Δlog M), at least for masses larger than log M/M_⊙∼9. In the same bottom panels, we report the mean scatter for the mass bins at log M/M_⊙<9 and >9, showing generally slightly larger values at lower masses (mean 0.22 dex) than at larger masses (mean 0.20 dex), with the CI models also showing a systematically smaller σ(Δlog M) than the LP ones. The bias, NMAD and outlier fraction of each configuration are summarized in Table <ref>. Similarly to Fig. <ref>, the bias is indeed consistent with zero for all configurations within the NMAD, except for LP/M05/SB for which the bias is statistically significant. CIGALE shows both a negligible bias and a small NMAD, whether or not the same stellar libraries of the reference model from LePhare (BC03) are adopted, meaning that the code and the SFH can have an impact on the scatter but not on the accuracy of the stellar mass inferences. On the other hand, the large bias found for LP/M05/SB shows that the combination of template and SFH has a large impact on the bias, for a fixed fitting method. If we also fix the template (see e.g. M05), we can see that the bias can have rather large variations (from -0.423 for LP/M05/SB to -0.178 for LP/M05/ExD), possibly due to the impact of the different SFH choices, which exacerbate the differences in the treatment of the thermally pulsating asymptotic giant branch (TP-AGB) phase by M05 (see e.g. <cit.>). Moreover, we notice a double sequence, at stellar masses log M_*/M_⊙≳10.8, in the models including the exponential SFH, separating star-forming from quiescent galaxies. The same sequence is not evident in the SB model, which tends to assign younger ages and lower mass-to-light ratios to star-forming galaxies, ultimately resulting in an overall strong underestimate of the stellar masses (see the negative biases in Table <ref>). The CIGALE model using M05 and a delayed exponential (CI/M05/DelEx) shows a tighter distribution, with no sign of the double sequence. This confirms that the M05 models are more sensitive than the others to the SFH, although there might be a residual component from the fitting (code) procedure, with the CI models having ∼30% smaller scatter than the LP ones, on average. The NMAD generally mirrors these behaviors, with the M05 configurations being larger than the corresponding set-ups from other templates (see e.g. LP/M05/SB vs LP/CB07/SB or CI/M05/DelEx vs CI/BC03/DelEx). All in all, from Fig.
<ref> we see that, using the spec-z as input, the scatter of the different combinations is well confined within ∼0.2 dex and the outlier fraction is always very small (∼ 4-5%), consistently with a log-normal distribution of the uncertainties with no pathological cases across the models. Considering all the statistical estimators, we can conclude that stellar masses from spec-z are a rather robust quantity with no signs of significant systematics, except for the LP/M05/SB model. This is consistent with findings from previous analyses also using optical + NIR photometry (e.g. Lee et al. 2010, <cit.>), although there are analyses reaching different conclusions (<cit.>). §.§.§ Using morphoto-metric redshifts We now show the results obtained using the GaZNet redshifts as fixed input in the stellar population tools. This is a critical test to check the impact of the use of noisier redshifts on the statistical estimators discussed in Sect. <ref>, and the overall variation of accuracy and precision of the estimates we might expect when applying this analysis to pure photometric datasets such as the full KiDS photometric galaxy sample (see K+19 and future releases). In Fig. <ref> we show the same correlations as in Fig. <ref>, but using the GaZNet redshifts, while in Table <ref> we report the corresponding statistical estimators. In this case, we also use the LP/BC03/ExD model from the spec-z as a reference to check the impact of the GaZNet redshifts in terms of accuracy and scatter. Basically, the results show that, for the same correlations seen in Fig. <ref>, the relative bias of the different configurations is not worsened, meaning that the accuracy of the mass estimates is not affected by the use of the morphoto-z. This is likely a consequence of the good accuracy of the latter, as seen in Fig. <ref>. On the other hand, we register an evident increase of the NMAD as a consequence of the morphoto-z intrinsic statistical errors and outlier fractions, which is also mirrored by the scatter of the residuals, at the bottom of the 1-to-1 relations, which is now of the order of 0.23 dex, for log M_*/M_⊙>9, and 0.49 dex for log M_*/M_⊙<9, on average. This larger scatter at low stellar masses is mainly caused by the trend we see below log M_*/M_⊙=8.5, where stellar masses are systematically overestimated compared to those obtained with the spec-z. This is not an effect that comes from the particular set-up of the fitting procedure, as shown by the comparison of the LP/BC03/ExD/morphoto-z against the same set-up with spec-z (bottom/left plot in Fig. <ref>). Even in this latter case, we see that below log M_*/M_⊙=8.5 the positive bias is similar to the ones of all other configurations. We trace the origin of this systematic effect to a bias of the GaZNet redshifts for a group of objects at very low redshifts (z<0.05, see Fig. <ref>), which also turn out to have low masses. This can be due to some residual contamination from stars, not picked up in the spectroscopic classification, or just a failure of the GaZNet predictions at very low-z, which clearly impacts the mass predictions. We will come back to this in Sect. <ref>. However, still looking at the LP/BC03/ExD/morphoto-z vs. spec-z, above log M_*/M_⊙=8.5, the bias is almost absent and the only relevant effect is the GaZNet redshift scatter that, from the NMAD, is quantified as 0.09.
This is confirmed by noticing that the general increase of the NMAD from the spectroscopic sample to the morphoto-metric sample, in Table <ref>, is compatible with the sum in quadrature of the NMAD of the former with the 0.09 coming from the latter, as expected for pseudo-Gaussian distributions. This points to a log-normal distribution of the uncertainties of the stellar masses, which is confirmed by the outlier fractions, all of the order of 5-6% above 2σ of the log M_* scatter. A more detailed discussion of the variation of the statistical estimators as a function of the sample properties is presented in Sect. <ref>. §.§.§ The impact of the nebular emissions on stellar masses As anticipated at the beginning of Sect. <ref>, we intend to check the impact of the inclusion of nebular emission on our models. Generally speaking, starforming galaxies can have their spectra heavily contaminated by nebular emissions. The most prominent ones are Lyα λ1216Å, [OII] λ3727Å, Hβ λ4861Å, [OIII] λλ4959,5007Å, and Hα λ6563Å. These emissions are all sparsely distributed across the optical and NIR wavelength range at redshift z<1, but they are generally fainter than the continuum collected by the broad bands in this redshift range, except for strong starburst, low-mass galaxies. Here, we have the chance to estimate the impact of the presence of these emissions on the stellar masses, while we will discuss the impact on the star formation rate estimates in Sect. <ref>. We consider the options offered by LePhare and CIGALE (see details in Sects. <ref> and <ref>) to implement the NE in the models as in Table <ref>. The results of the statistical estimators are reported in Table <ref>, between brackets, for all models considered. Here, we do not find any significant variation of the indicators for any model, which leads us to conclude that the stellar masses are only weakly sensitive to the inclusion of the NE, regardless of the stellar template, the SFH and the code adopted. We will keep these models in the catalog and consider them in the discussion of the variance of the models (Sect. <ref>). §.§ Star Formation Rates In this section, we present the results on the star formation rates. These measurements represent the current amount of stellar mass formed per unit time, corresponding to the best-fit parameters of the assumed SFH model fitting the SED. As, by definition, the single burst models do not provide any such estimate, they will be discarded in the following analysis. For the same reason, the mixed model allowed by the M05 libraries (SB+ExD) is almost equivalent to the ExD, as it returns the same SFR estimates for the galaxies best fitted with an exponential SFH (ExD). Hence, only the latter will be listed in the result tables and figures for the LePhare models, together with the DelEx of CIGALE. We recall here the set of τ and ages adopted for the models in Table <ref>. As seen, we have used a rather large range of both parameters to check their impact on our inferences, even though some extreme values can be either slightly unphysical or too optimistic. For instance, the fitting procedure might have little sensitivity to effectively distinguish between τ=15 Gyr and 30 Gyr, both producing a rather flat SFH, hence leaving the model a large leverage to converge on either value with similar confidence.
On the other hand, while very long τ values are degenerate among themselves, the stellar models can be rather insensitive to an age of 0.5 Gyr, since the broad-band photometry is unable to catch the typical features of young stars, also given the very shallow limiting magnitude of the u-band, which would provide most of the rest-frame UV emission of galaxies up to z=0.9. However, for this test we decide to maintain a broad range of priors on the parameter space, to learn their impact and confidently optimize their choice for future analyses. As far as the output of both stellar population codes is concerned, similarly to the stellar masses in Sect. <ref>, star formation rates should also be corrected for the total fluxes. This is needed to ensure that the specific star formation rate, sSFR=SFR/M_*, of a galaxy is conserved. Hence, in the following, we will correct the SFRs by the same amount as the stellar masses, i.e. log SFR_corr=log SFR_out + (log M_*,corr-log M_*,out), where M_*,out and SFR_out are the outputs of the SED fitting codes and M_*,corr is given by Eq. <ref>. Finally, as we want to select a star-forming sample, we will adopt a canonical cut in specific star formation rate (sSFR) to separate passive from active galaxies and use log sSFR/ yr^-1 =-11 as a threshold (see e.g., <cit.>). sSFRs lower than this value should not, in principle, be taken at face value, as they correspond to a physically negligible SFR. For this reason, we do not use them in our analysis, although we report them in our catalog with the warning to use them with caution. §.§.§ Using Spectroscopic redshifts As for the stellar masses in Sect. <ref>, we first discuss the SFR results obtained using the spectroscopic redshift as a fixed parameter in LePhare and CIGALE. In Fig. <ref>, we show the SFRs computed using the different libraries and SFHs as in Table <ref>. Overall, the SFRs are all aligned along the 1-to-1 relation, although both the LePhare and CIGALE estimates using M05 show some negative offset (more pronounced for CIGALE), as seen from the residuals shown at the bottom of each panel. Furthermore, at log SFR/M_⊙ Gyr^-1 ≲ 8, the correlations show a tilt toward a positive bias, more pronounced for CIGALE, which only for CI/BC03/DelEx partially compensates the negative bias at higher SFRs. On the other hand, at log SFR/M_⊙ Gyr^-1 ≳ 8, the CI/BC03/DelEx estimates are nicely consistent with the LePhare estimates of LP/BC03/ExD. Overall, the two tools show substantial agreement when they use the same libraries, while they do not seem to show a strong dependence on the SFH. This is seen from Table <ref>, which lists the statistical estimators for the different experiments. Here we find, indeed, that LP/M05/ExD and CI/M05/DelEx have similar bias, NMAD, and outlier fraction. Looking at all the statistical estimators, we can confirm that, broadly speaking, the relative bias of the SFR estimates is barely consistent with zero within the NMAD for M05, while it is fully consistent with zero for BC03. From Fig. <ref> (bottom panels), we also see that the overall scatter of the residuals is of the order of 0.3 dex, slightly larger than that of the stellar masses. Moreover, the outlier fraction, also in this case, is consistent with a log-normal distribution across the whole SFR range. This broad result suggests that the SFR in star-forming galaxies is a rather stable parameter, in the redshift range we have considered. The degree of accuracy and scatter among different model and library configurations is almost comparable to that of the stellar masses derived from spec-z.
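The total-flux correction and the star-forming selection described earlier in this section can be written compactly as in the following sketch. It assumes SFRs in M_⊙/Gyr (hence the -9 conversion to yr^-1), which is an assumption of this example rather than a statement about the catalog units, and the arrays are synthetic.

```python
import numpy as np

def correct_sfr(log_sfr_out, log_mstar_out, log_mstar_corr):
    """Rescale the SED-fitting SFR to total fluxes, conserving the sSFR:
    log SFR_corr = log SFR_out + (log M*_corr - log M*_out)."""
    return np.asarray(log_sfr_out) + (np.asarray(log_mstar_corr) - np.asarray(log_mstar_out))

def star_forming_mask(log_sfr_corr, log_mstar_corr, log_ssfr_cut=-11.0):
    """Select star-forming galaxies with log sSFR/yr^-1 > -11.
    SFRs are assumed here to be in Msun/Gyr (hence the -9.0 term)."""
    log_ssfr_yr = log_sfr_corr - 9.0 - log_mstar_corr
    return log_ssfr_yr > log_ssfr_cut

# synthetic example
log_sfr_out = np.array([8.7, 9.4, 7.2])    # log SFR / (Msun/Gyr) from the SED code
log_m_out   = np.array([9.8, 10.6, 10.9])  # log M* from the SED code
log_m_corr  = np.array([9.9, 10.7, 11.0])  # total-flux corrected log M*
log_sfr_corr = correct_sfr(log_sfr_out, log_m_out, log_m_corr)
print(star_forming_mask(log_sfr_corr, log_m_corr))   # [ True  True False]
```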
We now check whether this stability of the SFRs holds similarly for the morphoto-metric redshifts, while we will assess the impact of the NE in Sect. <ref>. §.§.§ Using morphoto-metric redshifts In Sect. <ref> we have discussed the impact of the morphoto-z from GaZNet on the stellar mass estimates and shown that the net effect of the morphoto-metric redshift is to increase the scatter and the outlier fraction of the final estimates. We have also seen that the overall impact of the GaZNet redshift can be quantified by comparing two sets of estimates with the same tool, stellar population library, and SFH, changing only the input redshifts (e.g. using LP/BC03/ExD from morphoto-z vs. spec-z). The same trend is seen for the SFR estimates with the GaZNet redshift with respect to the spec-z, as shown in Fig. <ref>. Compared to the spec-z estimates in Fig. <ref>, we see that the scatter and the number of “large” outliers (see below) of the morphoto-z based estimates are increased with respect to the LP/BC03/ExD from spec-z. This is seen from the bottom panels with residuals in Δlog SFR (see caption), where we report an average scatter of 0.35 dex for log SFR/M_⊙ Gyr^-1>9 and 0.52 dex for log SFR/M_⊙ Gyr^-1<9. This is also quantified in Table <ref>, where we again measure an increased NMAD for all configurations. As noticed for the masses, this is compatible with a pseudo-Gaussian increase of the NMAD values of the morphoto-z estimates, with the NMAD of LP/BC03/ExD/morphoto-z (0.163) providing a measure of the overall impact of the morphoto-z errors. The “Gaussianity” of the log SFR distribution obtained from the morphoto-z is confirmed by the outlier fraction above the 2σ(log SFR), of the order of 5%. In the same Fig. <ref>, we also see that the bias is generally compatible with zero, except for the CI/M05/DelEx (morphoto-z). In general, a trend of the bias with the SFR is evident, due to a positive bias for the lower star formation rates (log SFR/M_⊙ Gyr^-1 ≲ 8.5). Here the effect of the morphoto-z is to exacerbate the weak trend shown by the spec-z estimates, which is partially absorbed by the scatter of the residuals. Due to the well-known correlation between the SFR and the stellar mass (see Sect. <ref>), we conclude that this has the same origin as the bias found for stellar masses at log M_*/M_⊙<8.5, as discussed in Sect. <ref>. We also notice a cloud of outliers at log SFR/M_⊙ Gyr^-1 ≳ 10 from the GaZNet-based estimates. These come from a series of morphoto-z outliers, overestimating the intrinsic redshift of the galaxy. Indeed, the higher fictitious redshifts force the SED fitting procedure to interpret the rest-frame photometry of the galaxy as bluer and, hence, more star-forming than what one obtains from the spec-z. To conclude the analysis of the SFRs, we can say that, as for the stellar masses, these are also rather stable quantities with respect to the fitting tool, stellar libraries, and SFHs, as they do not show significant systematics, except at small SFRs, although we register a tendency of the M05 models to underestimate the SFRs with respect to BC03. §.§.§ The impact of the nebular emissions on star formation rates We can finally check the impact of the nebular emissions on the predictions of the star formation rates from the different stellar population models considered. As done for the stellar masses in Sect. <ref>, we report the results of the main statistical indicators in Table <ref>, side by side with the same indicators from the no-emission models, using the GaZNet redshifts as input.
As for the masses, we do not see any significant change in the overall relative bias, NMAD, and outlier fraction, meaning that the inclusion of the nebular emissions does not produce any relevant effect for any of the models, given the mass (log M_*/M_⊙>8.5) and redshift (z<1) ranges considered here. §.§ Median mass and SFR estimates A relevant result of this paper is that both the stellar mass and the star formation rate are quantities that can be robustly constrained with seeing-matched photometry covering a wide range of wavelengths, from optical to NIR (see e.g. <cit.>, <cit.>). For completeness, in <ref> we test the case where only optical bands are available and compare it with the results obtained in Sect. <ref>, to illustrate the advantage of adding the NIR to the optical bands in terms of accuracy and precision of the stellar population estimates. By robust constraints, here we mean that the M_* and the SFR estimates do not show a statistically significant “relative” bias when compared to the estimates from other tools, libraries, and star formation histories. As seen in Sects. <ref> and <ref>, this is generally true for all models considered except LP/M05/SB, as this shows a relative bias of the stellar masses which is systematically larger than the scatter of the overall mass estimates (see Table <ref> and Fig. <ref>). This makes it an outlier with respect to all other models (see Sect. <ref>), and we decide to exclude it from the following analysis. As reference estimates we have arbitrarily chosen the LP/BC03/ExD model, but this cannot be taken as ground truth. If we assume that the true values of M_* and SFR have to be found within the interval covered by the adopted models, then we can define the “median” value as a reasonable estimator of the ground truth of each of them. To deal with the low number of measurements available to compute the median, we follow the approach of <cit.> and adopt the Hodges-Lehmann estimator, defined as the median of the means, computed in linear space, of all the possible pairs of estimates in the sample: M_*^ MED = median [ (M_*,i+M_*,j)/2 ], where the indexes i and j vary over the different models in Table <ref>. For a dataset with n measurements, the set of all possible two-element subsets, for which the median is computed, has n(n - 1)/2 elements. Similarly, we define a median star formation rate SFR^ MED = median [ (SFR_i+SFR_j)/2 ]. Assuming these quantities to be unbiased estimators of the ground truth, we will use them for a science validation test as in Sect. <ref>. As a sanity check for our median estimates, as well as for the individual model results, in Appendix <ref> we show a direct comparison of the M_* and SFRs versus some external catalogs overlapping with the KiDS area. In particular, we use the stellar masses from <cit.>, which makes use of ugriZ photometry, and the SFR estimates of the SDSS-DR7 galaxy sample from the MPA-JHU group[https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/sfrs.html ], based on spectroscopic data as discussed in <cit.>. The main conclusion from these comparisons is that the “median” stellar masses and the star formation rates derived from the 9-band SED fitting are generally consistent with independent estimates based on different data and techniques. This is particularly true for the M_* estimates, while for SFRs we can expect some offset due to intrinsic systematics of the different proxies adopted (see also Sect. <ref>).
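A compact implementation of the Hodges-Lehmann combination defined above could look as follows. This is a sketch with hypothetical inputs (a handful of M_* estimates for a single galaxy), not the code actually used to build the catalog.

```python
import numpy as np
from itertools import combinations

def hodges_lehmann(estimates):
    """Median of the pairwise means, computed in linear space, over the
    n(n-1)/2 two-element subsets of the input estimates."""
    values = np.asarray(estimates, dtype=float)   # e.g. M* in Msun, not log
    pair_means = [0.5 * (a + b) for a, b in combinations(values, 2)]
    return np.median(pair_means)

# hypothetical M* estimates (Msun) for one galaxy from the different set-ups
mstar_runs = 10 ** np.array([10.02, 9.95, 10.10, 9.88, 10.05, 9.99])
mstar_med = hodges_lehmann(mstar_runs)
print(np.log10(mstar_med))
```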
In all cases, however, the relative bias between the different datasets (our median estimates and the external catalogs) is confined within the typical scatter of the data. § DISCUSSION In the previous sections we have assessed the accuracy and scatter of the different configurations using the relative bias, NMAD, and outlier fraction as statistical estimators, and concluded that both stellar masses and SFRs are rather robust quantities. In this section we want to examine the accuracy and scatter in more detail, as a function of the intrinsic properties of galaxies, like redshift, signal-to-noise ratio, and stellar mass. This will allow us to check the presence of “trends” in the systematics that might affect the stellar population parameters in different volumes of the parameter space defined by these quantities. This is fundamental if one wants to study the evolution of the mass function or of scaling relations like the “main sequence” of star-forming galaxies from the M_*-SFR relation. We also briefly discuss other sources of systematics, and finally compare the galaxy mass function and the M_*-SFR relation derived from our “median” parameters with previous literature, as a science validation of our inferences. §.§ Relative bias, NMAD and outliers as a function of redshift, SNR and stellar mass In Fig. <ref> we plot the bias, NMAD, and outlier fraction as a function of the redshift, r-band signal-to-noise ratio (SNR), and stellar mass, for the stellar masses (left) and star formation rates (right) derived by fixing the redshifts to the morphoto-z. Here, we decide to show only the GaZNet-based estimates because, as seen in Tables <ref> and <ref>, these are the estimates that, by incorporating the uncertainties of the morphoto-metric redshifts, provide the upper limits for both scatter and outlier fractions. We also show the dependence on the r-band SNR, as a lower limit on the photometric uncertainties (all other bands being generally less deep than the r-band), which should also enter the precision of the stellar population parameters. We finally remark that we limit our comparison to log M_*/M_⊙>8.5, as we have seen in Sect. <ref> that, below this limiting mass, the estimates are dominated by the morphoto-z biases. The first comment is that both stellar masses and star formation rates show similar features of the statistical estimators as a function of the different quantities, suggesting that the sources of the biases and scatter are the same for both quantities. For stellar masses, going from top to bottom, the outlier fractions usually stay within more than acceptable values over all ranges, although toward low redshift (z ≲ 0.05) and low masses (log M_*/M_⊙ ≲ 9) the outlier fraction and NMAD show a systematic increase. This has been anticipated in Sects. <ref> and <ref> and traced to an excess of outliers in the GaZNet redshifts. A similar degradation of the estimators is observed at z ≳ 0.7 for the M_* estimates, mainly for the poorer statistical samples, which also have degraded redshift estimates. Overall, we see that all the statistical estimators remain contained within a reasonable bias (|Δ p|<0.2), NMAD (<0.3), and outlier fraction (<10%), especially after having excluded LP/M05/SB from the mass set-ups. For star formation rates, we notice the bimodal behavior of |Δ p| between the M05 and BC03 models discussed in Sect. <ref>, with the CI/M05/DelEx model showing the largest deviation from all other models. This possibly suggests that the M05 stellar libraries need more complex SFHs than the ones used here.
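The trends just described can be reproduced schematically by evaluating the same estimators in bins of redshift, SNR, or stellar mass, for instance with scipy.stats.binned_statistic; the arrays below are synthetic placeholders, and the NMAD and outlier definitions repeat the conventions assumed earlier in this rewrite.

```python
import numpy as np
from scipy.stats import binned_statistic

def nmad(delta):
    return 1.4826 * np.median(np.abs(delta - np.median(delta)))

# synthetic residuals of log M* and a binning variable (here the redshift)
rng = np.random.default_rng(1)
delta_logm = rng.normal(0.0, 0.1, 5000)
redshift = rng.uniform(0.01, 0.9, 5000)

bins = np.linspace(0.0, 0.9, 10)
bias, edges, _ = binned_statistic(redshift, delta_logm, statistic="mean", bins=bins)
scatter, _, _ = binned_statistic(redshift, delta_logm, statistic=nmad, bins=bins)
outl, _, _ = binned_statistic(redshift, delta_logm,
                              statistic=lambda d: np.mean(np.abs(d) > 2 * np.std(d)),
                              bins=bins)
print(bias, scatter, outl)
```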
Despite this bimodal behavior, since the overall indicators of all models stay contained within the limits of the NMAD, we have kept all of them in the SFR^MED estimates, to average out possible systematics. For all the other estimators (NMAD and outlier fraction), we see little difference among the adopted fitting configurations, and confirm no major impact of the NE models either. To conclude, we expect to be able to use the “median” estimates over the full range of masses log M_*/M_⊙>8.5 and at all redshifts ≲ 1 in future applications, although, for SFRs, it remains to be seen whether the SFR^ MED is totally bias free. In the next sections, and in Appendix <ref>, we will show some evidence that this might be the case. We have also checked the statistical estimators as a function of r-mag (not shown) and we can confirm that the outlier fraction and the bias become almost out of control at r-mag ≳ 23, which sets a safe limit for future applications based on the use of the current GaZNet redshifts. §.§ Some considerations about other sources of systematics Before moving to some science applications, we need to stress that providing full insight into all possible systematics that might affect the stellar population analysis is beyond the scope of this paper. We have already introduced the problem of the wavelength coverage in Sect. <ref> and addressed it in <ref>. Another source of bias one should consider is the choice of the input redshifts in the stellar population tools. As discussed in Sect. <ref>, we are motivated to fix them because, by leaving the stellar population tools free to constrain the redshift and the stellar population properties at the same time, we expect the degeneracies between redshifts and galaxy colours to strongly affect the stellar populations. This is also briefly discussed in <ref>, where we show that the results in terms of photo-z and stellar masses are much more scattered and prone to biases than when fixing the redshifts. On the other hand, we have seen in the previous sections that, in the case of unbiased morphoto-metric redshifts, moving from spectroscopy-based to photometry-based redshifts does not affect the accuracy, while the scatter and the outlier fraction increase within an acceptable level. Finally, a comment on the stellar templates. In this paper we have used a variety of libraries that could be directly incorporated in the two reference tools adopted (see Table <ref>). However, this list is neither complete nor optimal with respect to the current state-of-the-art stellar population models. We expect to expand our analysis to other stellar libraries (see, e.g., MILES, <cit.>) in future work. In this respect, we can consider this analysis as a first step of a more general program to apply a larger variety of models to ground-based multi-band datasets. §.§ Galaxy stellar mass function, star formation rate function and SFR-M_* relation We want to conclude this paper with a science validation test for the quantities we have focused on in this analysis: the “median” values, M_*^MED and SFR^MED. In Sect. <ref> we have seen that these quantities can be considered robust estimates of the stellar mass, M_*, and the star formation rate, SFR, respectively. A way to test this is to derive the galaxy stellar mass function (GSMF), i.e. the number of galaxies in a given mass bin per unit volume, Φ(M), and the corresponding star formation rate function (SFRF), i.e. the number of galaxies in a given SFR bin per unit volume, Φ(SFR).
The latter, in particular, will give us the chance to compare the SFRs derived from different indicators (UV, Hα, IR luminosities) with our estimates obtained from the KiDS 9-band photometry. We finally derive the SFR-M_* relation and compare it with independent observations to check the broad consistency of our inferences with previous literature. This will allow us to qualify the dataset based on the process presented in Sect. <ref> for future catalog compilations and science applications. Both the GSMF and the M_*-SFR relation have a crucial role in the understanding of the assembly and formation of galaxies (see discussion in Sect. <ref>) and there has been enormous progress in tracing these quantities back to the early phases of galaxy formation (see e.g., GSMF: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>; SFR-M_*: <cit.>). The SFRF is less constrained (especially for highly star-forming galaxies), as it is highly dependent on the assumed methodology to obtain the galaxy SFRs (see e.g. <cit.>). For this test, we are interested in checking the consistency of our derivations with previous literature in a statistical sense, while we leave the physical interpretation of these relations for a dedicated analysis, using the full KiDS photometric galaxy catalog. To avoid corrections due to the different completeness mass of the GAMA and SDSS-DR17 data in our spectroscopic sample (see Sect. <ref>), we will consider below only the GAMA sub-sample. §.§.§ Galaxy Stellar Mass Function In Fig. <ref> we start by showing the stellar mass vs. redshift diagram of the GAMA galaxies in our sample. We also overplot the contour of the completeness mass, obtained from the turn-over points of the number counts in narrow redshift bins (see e.g. <cit.> for more details on this method). As we can see, the completeness mass becomes almost constant at 10^11 M_⊙ at z ≳ 0.4, leaving just a small statistical sample to compare with the literature. We then decide to limit our analysis to z<0.4, where we have different reference works to compare our data with. For the comparison of the GSMF, we use observations derived for the GAMA galaxies at z<0.1 (<cit.>, <cit.>) and 0.2<z<0.4 (<cit.>). In Fig. <ref> we show the GSMF from the M_*^ MED estimates, derived in the redshift bins z=0.02-0.1 and z=0.2-0.4, against literature GSMFs at similar redshifts for homogeneity. In the same figure, we also show the completeness mass, defined as in Fig. <ref>. In Fig. <ref>, we do not compute the volume occupied by the complete sample of galaxies in the GAMA area, V_ max, as this would require knowing the GAMA survey selection function, which is beyond the scope of this comparison. We rather normalize the counts to match the literature GSMFs. As we can see, both the estimates derived from the spec-z and from the morphoto-z nicely follow the GSMFs of the previous literature in the two redshift bins. In particular, at z<0.1 (left panel) our GSMF is almost indistinguishable from previous GAMA inferences from <cit.> and the recent compilation from <cit.> for masses above the limiting mass of our spectroscopic sample, although the match becomes more uncertain at very high masses, where both the exact volume adopted and the different selections can cause noisy statistics. A similar behaviour is also seen in the other redshift bin adopted (0.2<z<0.4, right panel). Here, the consistency of our GSMF with the dataset from <cit.> is again very good over the full range of masses above the completeness limit.
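Schematically, the GSMF comparison described above amounts to binned number counts per dex with an arbitrary normalisation (since V_max is not computed); a minimal sketch, with hypothetical catalog arrays, could be:

```python
import numpy as np

def gsmf(log_mstar, redshift, zlow, zhigh, bin_width=0.2, norm=1.0):
    """Number of galaxies per dex of log M* in a redshift slice, with an
    arbitrary normalisation (the survey selection function is not modelled)."""
    sel = (redshift > zlow) & (redshift < zhigh)
    bins = np.arange(8.0, 12.2, bin_width)
    counts, edges = np.histogram(log_mstar[sel], bins=bins)
    phi = norm * counts / bin_width
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, phi

# hypothetical catalogue arrays
rng = np.random.default_rng(2)
log_mstar = rng.normal(10.3, 0.6, 20000)
redshift = rng.uniform(0.02, 0.4, 20000)
centers, phi = gsmf(log_mstar, redshift, 0.02, 0.1)
print(centers[phi.argmax()])
```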
Overall, this good match with independent GSMFs leads us to conclude that the stellar masses we have produced have sufficient science fidelity to be used in further analyses. §.§.§ Galaxy Star Formation Rate Function Unlike the GSMF, the star formation rate function (SFRF) is not a standard proxy for galaxy evolution, although it can provide relevant insight into galaxy formation (e.g. <cit.>). One reason is that SFRs are more sensitive than stellar masses to the assumed methodology. Hence, more focus is usually given to the observed UV, Hα, or infrared (IR) luminosity functions as probes of the SFRs in galaxies (<cit.>). Despite these difficulties, there have been attempts to quantify the SFRF at different redshifts (e.g. <cit.>). At low redshifts (z<0.5) the UV and Hα data are significantly affected by dust attenuation effects (e.g. <cit.>, <cit.>). This limitation impacts the derived UV/Hα star formation rate functions, which are usually incomplete at the high star-forming end (log SFR/M_⊙ Gyr^-1 ≳ 10). Thus, especially for these high SFR ranges, IR SFRs are considered more robust and give a more accurate estimate of the SFRFs, at least at low redshifts (e.g. <cit.> and references therein). Taking all this into account, in Fig. <ref> we show the SFRFs based on the “median” values derived in Sect. <ref>. We compare these SFRFs, in three redshift bins, with other observations from <cit.>, which reports a collection of SFRFs based on UV, Hα, and IR, and from <cit.>, which presents SFRs from SED fitting of a local sample of SDSS galaxies. In the figure, we can see the co-existence of SFRs based on different proxies and appreciate the large scatter introduced by the different methods. Broadly speaking, the UV- and Hα-based SFRFs are consistent with each other and generally discrepant from the IR-based ones. Our SED estimates look fully consistent with the IR SFRFs, down to the “limiting SFR”, marked as vertical dashed lines in the different redshift bins[This has been obtained following the same procedure as for the stellar masses, i.e. as the peak of the SFRF. Here, though, we do not interpolate in the SFRF vs. z, but we show the peak in every particular bin.]. Finally, we remark on the almost perfect agreement with the SDSS SED estimates from <cit.>, especially for our spec-z estimates. Hence, we conclude that the SFR^ MED estimates allow us to build SFRFs that are in good agreement with previous literature based on IR luminosity functions and SED fitting, while the differences with respect to the UV- and Hα-based estimates are likely due to the different calibrations of the methods (see e.g. <cit.>). This does not impact the fidelity of our estimates, as they show no systematics with respect to similar (photometric) probes. As we will see in Appendix <ref>, this conclusion is corroborated by the direct comparison of the SFR^ MED estimates with spectroscopic SFRs, showing a statistically insignificant bias for the morphoto-z and no bias for the spec-z based estimates. To conclude, the consistency of both the GSMFs and SFRFs with the literature further supports the assumption that the “median” estimates represent a realistic proxy of the true M_* and SFRs, whether using spectroscopic or morphoto-metric redshifts. In particular, the accuracy of the GSMFs and SFRFs based on morphoto-metric redshifts demonstrates that the method can be successfully extended to larger photometric KiDS galaxy collections. §.§.§ M_*-SFR relation For the M_*-SFR relation, in Fig.
<ref> we also plot the results of the lower-redshift bins, where the mass completeness allows us to have a sufficient sample for a consistency check. We use, as comparison, a series of mean relations of star-forming galaxies from other literature studies in different redshift bins: namely, 1) Tortora et al. (2023, in preparation), including <cit.>, based on a hybrid method using far-ultraviolet (FUV) + total infrared luminosity; 2) <cit.>, performing SED fitting using multi-band FUV-FIR data; 3) <cit.>, based on a collection of homogenized literature[They calibrate to a Kroupa IMF, and the SFR estimates to the Kennicutt & Evans <cit.> calibration. Note that the choice of IMF does not impact the M_*-SFR relation, as it equally affects the stellar mass and the SFR estimates.]. We also add the predictions from the Illustris-TNG (<cit.>) and EAGLE (<cit.>) simulations to illustrate the potential of deriving SFRs from larger KiDS galaxy samples to check against the outcome of state-of-the-art hydrodynamical simulations and gain insight into the galaxy formation scenario[We did not compare the inferred GSMF in Sect. <ref> with the same simulations because the latter are tuned, by construction, to fit the observed stellar mass functions.]. In Fig. <ref> we show the M_*-SFR relation for the median quantities obtained using the GaZNet redshifts as input only. This is because we have seen, in Sect. <ref>, that these represent the worst-case scenario, where the measurements are more scattered and show systematic effects only at very low masses (log M_*/M_⊙<8.5) – below the completeness mass we can use as a lower limit for science analysis. From Fig. <ref> we find that the M_*-SFR relation of the KiDS galaxies (black points with errorbars) nicely follows the majority of the literature data, both from observations and simulations, down to the completeness mass, despite the different methods adopted in the literature and the different definitions of star-forming systems. At masses below the limiting mass, our M_*-SFR relation shows a significant departure from the other relations. We will check whether this is indicative of systematics when we use the full KiDS photometric sample, for which we expect to push the mass completeness to lower levels in all redshift bins. We are convinced that this consistency check of both the M_*-SFR relation and the GSMF, although just qualitative at this stage, confirms the validity of the procedure and of the data produced in this analysis. § CONCLUSIONS AND PERSPECTIVES In this paper we have used a spectroscopic galaxy catalog including 9-band (u g r i Z Y J H K_s) photometry from the 4th data release of the Kilo-Degree Survey (KiDS) to derive robust stellar masses and star formation histories. We have performed a full template fitting analysis using two popular stellar population codes, LePhare and CIGALE, and a combination of stellar population libraries (<cit.>, <cit.>, <cit.>) and star formation histories (i.e. a single burst, an exponential decline, and a delayed exponential). Besides the spectroscopic redshifts, taken from GAMA data releases 2 and 3 and SDSS data release 17, we have considered, as input to the SED fitting process, the morphoto-metric redshifts obtained from the deep learning tool GaZNet (<cit.>). In this latter case, we can perform a controlled test of the variance one would introduce in large datasets, where only photometric redshifts are available for the galaxy catalogs.
In fact, the main goal of this analysis has been to assess the relative accuracy and the variance of the stellar population parameters under a variety of combinations of fitting tools/stellar templates/star formation histories. We summarize below the main results of this analysis: 1) the stellar mass and the star formation rate show limited scatter, and a relative bias that is within the scatter, when comparing the estimates for each galaxy across the different methods. As such, these quantities are rather stable against the stellar template fitting set-ups; 2) the relative bias, NMAD and outlier fraction vary with the stellar mass and SNR, not with redshift; 3) due to the overall resilience of the parameters to the different variables in play, we can reasonably adopt a median definition as an unbiased estimator of the “ground truth” values for the parameters. Following <cit.>, we have used a Hodges-Lehmann median for this robust parameter estimate and used it for a science validation; 4) we have evaluated the scatter of the individual fitting set-ups with respect to the Hodges-Lehmann median (Fig. <ref>) and found that, depending on the combination of templates and star formation histories, stellar masses and star formation rates can deviate by ∼0.1 dex, for high-mass systems, up to ∼0.2 dex, for low-mass systems; 5) as a science validation test, we have derived the galaxy stellar mass function and the star formation rate function, as well as the M_*-SFR relation, and compared them with previous literature in different redshift bins, finding a very good match with a wide range of studies; 6) we provide the catalog of the galaxy parameters, including stellar masses, star formation rates, age, metallicity, extinction, and the τ of the exponentially declining models, for ∼290 000 galaxies with spectroscopic redshifts, 0.01<z<0.9, from GAMA and SDSS-DR17. The catalog is available at this URL[link] and also contains the 9-band GAaP photometry, the r-band MAG_AUTO, and the spectroscopic redshift from the parent spectroscopic surveys. In the future we plan to expand this test, including more stellar population tools (e.g. FAST: <cit.>, SED3FIT: <cit.>, Prospector: <cit.>, P12: <cit.>), star formation histories (e.g. log-normal: <cit.>, Γ: <cit.>), and stellar libraries (e.g. <cit.>). This will allow us to investigate an even larger variety of models and use the “median” of their outcomes (see Sect. <ref>) as an unbiased stellar population parameter estimator for the full KiDS “galaxy’’ photometric sample and finally provide a general-purpose catalog to be used for a variety of galaxy studies. Similar datasets have been previously used in KiDS to study the size-mass relation of galaxies (<cit.>), the number density evolution of ultra-compact massive galaxies (<cit.>), the mass function of galaxies at different redshifts (<cit.>), the clustering of red-sequence galaxies (<cit.>), the dark matter halo masses of elliptical galaxies as a function of observational quantities (<cit.>), and the dark matter assembly in massive galaxies (<cit.>). § ACKNOWLEDGEMENTS NRN acknowledges financial support from the Research Fund for International Scholars of the National Science Foundation of China, grant n. 12150710511. RL acknowledges the support of the National Natural Science Foundation of China (No. 12022306) and the science research grants from the China Manned Space Project (CMS-CSST-2021-A01). AK acknowledges financial support from the One hundred top talent program of the Sun Yat-sen University.
HF acknowledges the financial support of the National Natural Science Foundation of China (grant No. 12203096). LX thanks Dr. O. Ilbert for the useful suggestions about LePhare and Fucheng Zhong for useful discussions. § DATA AVAILABILITY The data that support the findings of this study are available at the URLs provided in the text. § §.§ The impact of missing NIR photometry In this Appendix we want to check the impact of the wavelength range on the analysis we have performed, and quantify, in particular, the advantage of including the NIR bands to produce reliable stellar population parameters. It is well known that a wide wavelength baseline is a necessary prerequisite for accurate photometric redshifts (see e.g. <cit.>). As we will see in <ref>, accurate redshifts themselves have a large impact on the stellar population parameters. Here we want to show that, even assuming the redshift of a galaxy is known correctly, the wavelength baseline is crucial to provide stellar masses and SFRs with minimal bias and scatter. For the sake of brevity we consider only the extreme case of fully discarding the NIR bands, to show the maximum error one would make when applying the same set-ups as in Table <ref>. For the same reason we show the results for four LePhare models: LP/BC03/ExD, LP/M05/SB, LP/M05/ExD, LP/CB07/SB. In Table <ref> we report the main statistical estimators for the different configurations, for both mass and SFR estimates, assuming either the spec-z or the morphoto-z as input. These can be compared to Tables <ref> and <ref>. The most evident effect is the large increase of the scatter of the estimates, as measured by the NMAD. For stellar masses we find that the NMAD increases by 30-40% (e.g. LP/M05/ExD/spec-z) up to about 100% (LP/BC03/ExD/morphoto-z). On the other hand, all SB models show little increase in the NMAD (∼10%) and smaller relative biases, indicating that these are almost insensitive to the wider wavelength baseline. For the SFRs we find a similar degradation of the precision of the estimates, with the NMAD in Table <ref> increased by 30% to 90% with respect to Table <ref>, and minimal variation in the relative bias. §.§ Comparison of M_* and SFR estimates against external catalogs As anticipated in Sect. <ref>, here we want to perform a direct comparison of our M_* and SFR estimates against external catalogs. For the stellar masses, we have mentioned the existence of stellar mass catalogs based on similar KiDS data (e.g. <cit.>, <cit.>); however, here we decide to compare the stellar masses with a catalog based on different photometric data, from <cit.>. The catalog of their stellar masses is available on the GAMA website[Catalog link: http://www.gama-survey.org/dr2/schema/table.php?id=179]. This is based on the ugri optical imaging from SDSS (DR7) and (according to the catalog description) Z-band from UKIDSS (see T+11 and references therein). Similarly to us, they use BC03 templates, a Chabrier IMF, and a Calzetti extinction law, with an exponentially declining star formation history, but they use a customized code for their stellar population models. Hence we can expect some differences in the estimates due to the code adopted and the data (different observations, photometric accuracy, errors, etc.), while they use the GAMA spectroscopic redshift information as input to their models. We have found a match of 64 771 galaxies with our catalog, which are plotted in Fig. <ref> against the M_*^ MED estimates from Sect.
<ref>, considering both the spectroscopic and the GaZNet redshifts as input. Since our LP/BC03/ExD is the closest model to their set-up, we also add it for comparison in the same figure. Overall, we see that all the estimates (except the M_*^ MED/spec-z) are consistent within the errors, shown at the bottom of each panel, with a scatter that is always contained within ∼0.2 dex for the spectroscopic redshifts and ∼0.25 dex for the morphoto-metric redshifts. We also clearly observe that the LP/BC03/ExD has almost no bias, meaning that the different codes and also the different data have a minimal impact on the final mass estimates. The offset with respect to the M_*^ MED (of the order of 0.15 dex) is due to the relative bias of the different models entering the “median” quantities: in Sect. <ref> this is quantified to be ∼0.10 dex for LP/BC03/ExD (see the blue line in the second row from the top of Fig. <ref>, left panel), hence consistent with the 0.15 dex offset above, considering the scatter of ∼0.2 dex in the top-left panel of Fig. <ref>. The bias of the LP/BC03/ExD model becomes even smaller if we use the same baseline as T+11, i.e. the five bands ugriZ, as shown by the orange residuals at the bottom of each panel. This also indicates that the effect of the NIR bands mainly impacts the massive galaxies, where the difference in the masses can be as large as 0.2 dex. We still see, in all cases, the systematic deviation of the sample based on GaZNet redshifts at log M_*/M_⊙<9 discussed extensively above, which depends on the redshift systematics and not on the stellar population analysis. For the SFRs we make use of the SDSS-DR7 star formation rate catalog (see footnote <ref>) based on the analysis discussed in <cit.>, but see also <cit.>. Here, the star formation rates are computed by directly fitting the emission lines (e.g., Hα, Hβ, [OIII]@λ5007, [NII]@λ6584, [OII]@λ3727, and [SII]@λ6716). This offers us the opportunity to check for biases in our “median” results against spectroscopic inferences, which are based on a more robust method, especially considering the bimodal bias between M05 and BC03 discussed in Sect. <ref>. The comparison of our 9-band, no-NE estimates with the SDSS-DR7 SFRs is shown in Fig. <ref>. We decide to use the no-NE models to confirm the small impact of the emission lines on the SED-based SFR estimates, as discussed in Sect. <ref>. In Fig. <ref>, we see that the SFR^ MED estimates are in very good agreement with the SDSS spectroscopic inferences, with a bias that is well within the scatter of the data points. For the morphoto-z sample we see, as usual, the positive bias at low star formation rates induced by the systematics in the morphoto-metric redshifts at low-SFR values, as discussed in Sect. <ref>, although here the offset starts to become significant at log SFR/M_⊙ Gyr^-1 ≲ 9, suggesting that the different methods (e.g. emission lines vs. SED fitting) can introduce some biases (see also Sect. <ref>). The scatter always remains confined within ∼0.4 dex (see the values in the figure insets), in line with the results discussed in the same Sect. <ref>, at least for higher SFRs (log SFR/M_⊙ Gyr^-1>9). We believe the evidence collected in this appendix, for both stellar masses and SFRs, supports all the main conclusions of the paper about the robustness of the stellar population quantities from the different methods/set-ups and the use of the “median” values as unbiased estimators of the true quantities for our galaxy sample, given the range of redshifts adopted.
§.§ LePhare results with redshift as free parameter Both SED fitting tools, LePhare and CIGALE, can use the redshift as a free parameter during the fitting procedure. This gives us the chance to directly visualize the degeneracies introduced in the final results by the lack of accurate galaxy redshifts. For this particular test we use LePhare to show the impact on the estimates of the stellar masses. We use the reference set-up, i.e. the LP/BC03/ExD, which becomes LP/BC03/ExD/specz for the case with spec-z fixed and LP/BC03/ExD/zfree in the variation with the redshift as a free parameter. In Fig. <ref> we show: 1) in the left panel, the spec-z vs. the photometric redshift inferred by LePhare, photo-z_ LP, and 2) in the right panel, the corresponding stellar masses. From this figure, we can clearly see the impact of missing redshift information on the stellar population analysis, in comparison with the equivalent quantities obtained with the GaZNet morphoto-z (Fig. <ref> and Fig. <ref>, bottom left). This is also quantified in the residual plots at the bottom of Fig. <ref>, where we plot the relative bias and scatter for both the photo-z_ LP and the GaZNet redshift inferences. In particular, the stellar masses in the former case show a bias and scatter that is fully driven by the larger variance of the photometric redshifts: see, e.g., the cloud of galaxies with masses almost parallel to the 1-to-1 relation, with a large positive offset, at the top of the figure, which is absent in the GaZNet-based estimates. This is confirmed by the global statistical estimators. For the redshifts we have μ=0.005, NMAD=0.039 and Out. frac.=2.3%, i.e. a much larger scatter and outlier fraction than the same quantities derived for the GaZNet redshifts in Fig. <ref> (μ=0.005, NMAD=0.017 and Out. frac.=0.4%, respectively). This is mirrored by a similar worsening of the same estimators for the masses, which for the z_ LP case are Δ p=-0.077, NMAD=0.197 and Out. frac.=4.7%, i.e. up to more than twice the typical values found for the GaZNet morphoto-z in Table <ref> (Δ p=-0.033, NMAD=0.093 and Out. frac.=3.8%). This quantifies the advantage of having accurate photo-z in the stellar population analysis.
http://arxiv.org/abs/2307.05191v3
20230711115605
Confirmation of an He I evaporating atmosphere around the 650-Myr-old sub-Neptune HD235088 b (TOI-1430 b) with CARMENES
[ "J. Orell-Miquel", "M. Lampón", "M. López-Puertas", "M. Mallorquín", "F. Murgas", "A. Peláez-Torres", "E. Pallé", "E. Esparza-Borges", "J. Sanz-Forcada", "H. M. Tabernero", "L. Nortmann", "E. Nagel", "H. Parviainen", "M. R. Zapatero Osorio", "J. A. Caballero", "S. Czesla", "C. Cifuentes", "G. Morello", "A. Quirrenbach", "P. J. Amado", "A. Fernández-Martín", "A. Fukui", "Th. Henning", "K. Kawauchi", "J. P. de Leon", "K. Molaverdikhani", "D. Montes", "N. Narita", "A. Reiners", "I. Ribas", "A. Sánchez-López", "A. Schweitzer", "M. Stangret", "F. Yan" ]
astro-ph.EP
[ "astro-ph.EP" ]
Confirmation of HeI in HD 235088 b's atmosphere Instituto de Astrofísica de Canarias (IAC), C/ Vía Láctea s/n, E-38205 La Laguna, Tenerife, Spain [email protected] Departamento de Astrofísica, Universidad de La Laguna (ULL), Avd. Astrofísico Francisco Sánchez s/n, E-38206 La Laguna, Tenerife, Spain Instituto de Astrofísica de Andalucía (IAA-CSIC), Glorieta de la Astronomía s/n, E-18008 Granada, Spain Centro de Astrobiología (CSIC-INTA), ESAC, Camino Bajo del Castillo s/n, Villanueva de la Cañada, E-28692 Madrid, Spain Centro de Astrobiología (CSIC-INTA), Carretera de Ajalvir km 4, E-28850 Torrejón de Ardoz, Madrid, Spain Institut für Astrophysik und Geophysik, Georg-August-Universität, Friedrich-Hund-Platz 1, 37077 Göttingen, Germany Thüringer Landessternwarte Tautenburg, Sternwarte 5, 07778 Tautenburg, Germany Department of Space, Earth and Environment, Chalmers University of Technology, SE-412 96 Gothenburg, Sweden Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Königstuhl 12, D-69117 Heidelberg, Germany Centro Astronómico Hispano en Andalucía, Observatorio de Calar Alto, Sierra de los Filabres, E-04550 Gérgal, Almería, Spain Astrobiology Center, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Max-Planck-Institute für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany Department of Multi-Disciplinary Sciences, Graduate School of Arts and Sciences, The University of Tokyo, 3-8-1 Komaba, Meguro, Tokyo 153-8902, Japan Department of Astronomy, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan Universitäts-Sternwarte, Ludwig-Maximilians-Universität München, Scheinerstrasse 1, D-81679 München, Germany Exzellenzcluster Origins, Boltzmannstrasse 2, 85748 Garching, Germany Departamento de Física de la Tierra y Astrofísica & IPARCOS-UCM (Instituto de Física de Partículas y del Cosmos de la UCM), Facultad de Ciencias Físicas, Universidad Complutense de Madrid, E-28040 Madrid, Spain Komaba Institute for Science, The University of Tokyo, 3-8-1 Komaba, Meguro, Tokyo 153-8902, Japan Institut de Ciències de l'Espai (ICE, CSIC), Campus UAB, Can Magrans s/n, 08193 Bellaterra, Barcelona, Spain Institut d’Estudis Espacials de Catalunya (IEEC), 08034 Barcelona, Spain Hamburger Sternwarte, Gojenbergsweg 112, 21029 Hamburg, Germany Osservatorio Astronomico di Padova, Vicolo dell’Osservatorio 5, 35122, Padova, Italy Department of Astronomy, University of Science and Technology of China, Hefei 230026, China 23 (TOI-1430) is a young star known to host a sub-Neptune-sized planet candidate. We validated the planetary nature of 23 b with multiband photometry, refined its planetary parameters, and obtained a new age estimate of the host star, placing it at 600-800 Myr. Previous spectroscopic observations of a single transit detected an excess absorption of HeI coincident in time with the planet candidate transit. Here, we confirm the presence of HeI in the atmosphere of 23 b with one transit observed with CARMENES. We also detected hints of variability in the strength of the helium signal, with an absorption of -0.91±0.11 %, which is slightly deeper (2σ) than the previous measurement. Furthermore, we simulated the HeI signal with a spherically symmetric 1D hydrodynamic model, finding that the upper atmosphere of 23 b escapes hydrodynamically with a significant mass loss rate of (1.5–5) ×10^10 in a relatively cold outflow, with T = 3125 ±375 K, in the photon-limited escape regime. 
23 b (R_ p = 2.045±0.075 R_⊕) is the smallest planet found to date with a solid atmospheric detection – not just of HeI but any other atom or molecule. This positions it a benchmark planet for further analyses of evolving young sub-Neptune atmospheres. Confirmation of an HeI evaporating atmosphere around the 650-Myr-old sub-Neptune HD 235088 b (TOI-1430 b) with CARMENES J. Orell-Miquel<ref>,<ref> M. Lampón<ref> M. López-Puertas<ref> M. Mallorquín<ref>,<ref> F. Murgas<ref>,<ref> A. Peláez-Torres <ref>,<ref> E. Pallé<ref>,<ref> E. Esparza-Borges <ref>,<ref> J. Sanz-Forcada<ref> H. M. Tabernero<ref> L. Nortmann<ref> E. Nagel<ref> H. Parviainen<ref>,<ref> M. R. Zapatero Osorio<ref> J. A. Caballero<ref> S. Czesla<ref> C. Cifuentes<ref> G. Morello<ref>,<ref> A. Quirrenbach<ref> P. J. Amado<ref> A. Fernández-Martín<ref> A. Fukui<ref>,<ref> Th. Henning<ref> K. Kawauchi<ref> J. P. de Leon<ref> K. Molaverdikhani<ref>,<ref>,<ref> D. Montes<ref> N. Narita<ref>,<ref>,<ref> A. Reiners<ref> I. Ribas<ref>,<ref> A. Sánchez-López<ref> A. Schweitzer<ref> M. Stangret<ref> F. Yan<ref> Received 17 March 2023 / Accepted 20 June 2023 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ § INTRODUCTION Along with their stars, planets evolve and change over time. During their early stages of formation, they suffer severe changes in their physical and orbital properties due to internal and external forces (). Our knowledge of the predominance and timescales of these changes is limited and a greater number of atmospherically characterized planets is needed to constrain formation and evolution models (). In this context, the study of planets at early stages of evolution is crucial for a better comprehension of different processes, such as planet formation and migration, inflation and evaporation of the primary atmospheres of rocky-core planets (), and the formation of the “radius gap” in the radius distribution of small planets (1–4 R_⊕; ). Stars that host young planets are particularly useful because some of their planets are predicted to have not yet lost their extended H/He-rich primordial atmospheres. The space missions Kepler (), K2 (), and Transiting Exoplanet Survey Satellite (TESS; ) have discovered several transiting young planets orbiting <1 Gyr-old stars, such as K2-33 b (), as well as the V1298 Tau () and AU Mic systems (). These young planets are interesting targets to study, but they are also extremely challenging to analyze due to the intrinsic stellar variations of their young host stars (). Moreover, the study of possible evaporating atmospheres of the young mini Neptune population is of special interest in helping identify the origin of the “radius gap.” That is the case for the HD 63433 system (), HD 73583 b (TOI-560 b, ), TOI-1683.01, 23.01 (TOI-1430.01), and TOI-2076 b (). 
Two observations with the Keck/NIRSPEC spectrograph robustly confirmed the presence of HeI in the atmosphere of HD 73583 b. The HeI detections for TOI-1683.01, HD 235088.01 (TOI-1430.01), and TOI-2076 b were made with single-transit observations. The same partial transit of TOI-2076 b was also observed from the same mountain with the InfraRed Doppler (IRD) spectrograph by <cit.>. Both analyses obtained a similar result: an HeI excess absorption of ∼1% that persisted for a short time after the egress. However, they differ in terms of the interpretation of the feature: <cit.> claimed the signal to be a planetary absorption, while <cit.> attributed it to stellar variability. In this work, we derive stellar parameters for HD 235088 using a high signal-to-noise (S/N) spectrum from the CARMENES spectrograph. We obtain a new age estimate for this young K-type star and validate HD 235088 b (TOI-1430 b) as a planet using multi-color photometry. With CARMENES high-resolution spectra, we confirm the previous detection of HeI and, by analysing this signal, we study the hydrodynamical escape of this planet and derive the temperature and mass-loss rate of its upper atmosphere. § OBSERVATIONS AND DATA ANALYSIS §.§ CARMENES observations and analysis A single transit of the planet candidate HD 235088.01 was observed with the CARMENES[Calar Alto high-Resolution search for M dwarfs with Exoearths with Near-infrared and optical Échelle Spectrographs.] () spectrograph located at the Calar Alto Observatory, Almería, Spain, on the night of 6 August 2022. CARMENES has two spectral channels: the optical channel (VIS), which covers the wavelength range of 0.52–0.96 μm with a resolving power of ℛ = 94 600, and the near-infrared channel (NIR), which covers 0.96–1.71 μm with a resolving power of ℛ = 80 400. We observed the target with both channels simultaneously and collected a total of 44 spectra of 5 min exposure time, with 28 of them between the first (T_1) and fourth (T_4) transit contacts. We obtained a median S/N of 72 around the Hα line and of 86 around the HeI triplet. Fiber A was used to observe the target star, while fiber B was placed on sky, separated by 88 arcsec in the east-west direction. The observations were reduced using the CARMENES pipeline <cit.> and both fibers were extracted with the flat-optimized extraction algorithm (). We also processed the spectra with serval[<https://github.com/mzechmeister/serval>] <cit.>, which is the standard CARMENES pipeline to derive the radial velocities (RVs) and several activity indicators: the chromatic radial velocity index (CRX) and the differential line width (dLW), as well as the Hα, NaI D1 and D2, and CaII IRT line indices. We corrected the VIS and NIR spectra for telluric absorption with (). We analyzed the spectroscopic observations via the well-established transmission spectroscopy technique (e.g. ). We computed the Hα transmission spectra (TS) following the standard procedure. However, because there are OH emission lines from the Earth's atmosphere close to the HeI triplet lines, we applied an extra step before computing the HeI TS. First, we planned the observations to avoid a complete overlap of the OH telluric lines and the HeI planetary trace. We then corrected the fiber A spectra for the OH telluric emission using the fiber B information, which is used to generate an OH emission model for correcting the science spectra. This methodology is based on previous HeI studies with CARMENES (). In particular, we followed the procedure previously applied in <cit.>.
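For illustration only, the core of the transmission spectroscopy step (division by a master-out spectrum and averaging of the in-transit residuals) can be sketched as below. The actual analysis includes additional steps (telluric and OH corrections, velocity shifts to the stellar and planetary rest frames) that are omitted here, and all arrays are synthetic.

```python
import numpy as np

def transmission_spectrum(spectra, in_transit):
    """Schematic TS: divide each spectrum by the master-out spectrum, average the
    in-transit residuals, and express the result as an excess absorption.
    `spectra` has shape (n_spectra, n_pixels); all spectra are assumed to be
    continuum-normalised and shifted to a common rest frame beforehand."""
    master_out = np.median(spectra[~in_transit], axis=0)
    residuals = spectra / master_out - 1.0
    return np.mean(residuals[in_transit], axis=0)

# toy example: 44 spectra, 28 of them between the first and fourth contacts
rng = np.random.default_rng(1)
spectra = 1.0 + 0.001 * rng.standard_normal((44, 2000))
in_transit = np.zeros(44, dtype=bool)
in_transit[8:36] = True
ts = transmission_spectrum(spectra, in_transit)
print(ts.shape)
```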
Figure <ref> compares our prediction for the telluric contamination of the HeI triplet lines with the real observations. §.§ X-ray observations and planetary irradiation We used XMM-Newton archival observations of 23 (PI M. Zhang) to calculate the X-ray luminosity of the star. The star was observed on 7 July 2021. We reduced the data following standard procedures and used the three EPIC detectors to extract a spectrum for each of them, simultaneously fitting them with a two-temperatures coronal model, using the ISIS package <cit.> and the Astrophysics Plasma Emission Database <cit.> v3.0.9. A value of interstellar medium (ISM) absorption H column density of 1 × 10^19 cm^-3 was adopted, consistent with the fit to the overall spectrum, and the distance to the source. The resulting model has log T_1,2 (K) = 6.53^+0.14_-0.13, 6.87^+0.07_-0.09; log EM_1,2 (cm^-3) = 50.44^+0.32_-0.31, 50.64^+0.18_-0.40; and abundances [Fe/H] = -0.24±0.17 and [Ne/H] = 0.17^+0.28_-0.69. An X-ray luminosity L_ X, in the energy range of 0.12–2.48 keV (λ=5–100 Å), of (1.89±0.07)×10^28 erg s^-1 was calculated. We extrapolated the coronal model towards transition region temperatures, following <cit.>, to determine the expected EUV stellar emission. We calculate L = 15^+15_-5× 10^27 erg s^-1 and 7.8^+2.9_-1.8× 10^27 erg s^-1 for the EUV spectral ranges 100–920 Å and 100–504 Å, respectively. Our X-ray flux is ∼30% lower than the value calculated by <cit.>, likely related to the spectral fitting process (we include the Ne abundance in the fit, and they assumed no ISM absorption, M. Zhang, priv. comm. 2023). <cit.> report also the incident flux (27 000 erg s^-1 cm^-2) in the range λλ 1230-2588 Å (defined as MUV in their work), after the modeling of the stellar emission that was then scaled up to match the observed flux in the XMM-Newton Optical Monitor (OM) filters UVW2 (λλ 2120 ± 500 Å) and UVM2 (λλ 2310 ± 480 Å). We find an excellent agreement between our modelled SED and the observed count rate in these two filters. However these filters have a non-negligible sensitivity at longer wavelengths. We must remark that the SED of a late-type star, as in our case, yields a ∼90% result for the count rate in the filter actually coming from longer wavelengths than the nominal band-pass (see Appendix <ref>). Thus, scaling the MUV level based on the photometry of the UV filters on board XMM-Newton must be taken with care for this type of star. In any case, our model (see Sect. <ref>) indicates an incident flux in the MUV range of 13 300 erg s^-1 cm^-2, without scaling, based on the OM flux. §.§ TESS observations Listed as TIC 293954617 in the TESS Input Catalog (TIC; ), 23 was observed by TESS in 2-min short-cadence integrations in Sectors 14–16, 41, and 54–56. In particular, the transit studied in this work was simultaneously observed by TESS and CARMENES in Sector 55. We fit all the TESS simple aperture photometry (SAP; ), which is publicly available at the Mikulski Archive for Space Telescopes (MAST[<https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html>]), using [<https://juliet.readthedocs.io/en/latest/index.html>] <cit.> to refine the central time of transit (t_0) and other planetary parameters, such as the orbital period and the planetary radius. is a library based on other public packages for transit light curve (, ) and GP (, ) modelling, which uses nested sampling algorithms (, ; , ) to explore the parameter space. 
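As an indicative example of the kind of forward model evaluated in such transit fits, the sketch below builds a quadratic limb-darkened light curve with the publicly available batman package, mapping sampled (q_1,q_2) values to (u_1,u_2) following Kipping (2013). The numerical parameter values are placeholders, not the fitted results of this work.

```python
import numpy as np
import batman

def q_to_u(q1, q2):
    """Kipping (2013) mapping from the sampled (q1, q2) to quadratic (u1, u2)."""
    u1 = 2.0 * np.sqrt(q1) * q2
    u2 = np.sqrt(q1) * (1.0 - 2.0 * q2)
    return u1, u2

params = batman.TransitParams()
params.t0 = 0.0            # mid-transit time [d] (placeholder)
params.per = 7.4           # orbital period [d] (placeholder)
params.rp = 0.024          # Rp/R* (placeholder)
params.a = 15.0            # a/R* (placeholder)
params.inc = 89.0          # inclination [deg] (placeholder)
params.ecc = 0.0
params.w = 90.0
params.limb_dark = "quadratic"
params.u = list(q_to_u(0.4, 0.3))

t = np.linspace(-0.1, 0.1, 500)               # time from mid-transit [d]
flux = batman.TransitModel(params, t).light_curve(params)
print(flux.min())
```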
In the fitting procedure, we adopted a quadratic limb-darkening law with the (q_1,q_2) parameterization introduced by <cit.>, and we considered the uninformative sampling (r_1,r_2) parameterization introduced in <cit.> to explore the impact parameter of the orbit (b) and the planet-to-star radius ratio (p = R_p/R_⋆). According to the flux contamination exploration in <cit.>, we can safely fix the dilution factor to 1. Furthermore, the TESS apertures used to compute each sector light curve only include our target star (see Fig. <ref>). The results from the TESS data analysis are presented in Sect. <ref>. §.§ MuSCAT2 observations and data reduction We observed a full transit of HD 235088.01 on 23 June 2021 with the multi-color imager MuSCAT2 <cit.>, mounted on the Carlos Sánchez Telescope (TCS) at Teide Observatory (OT). MuSCAT2 obtained simultaneous photometry in four independent CCDs in the g, r, i, and z_s photometric bands, with an exposure time setting of 5 s for all CCDs. We performed the data reduction and the aperture photometry using the MuSCAT2 pipeline described by <cit.>. The multi-color lightcurves (App. <ref>) were computed through a global optimization that accounts for the transit and baseline variations simultaneously, using a linear combination of covariates. § STELLAR CHARACTERIZATION In this section, we revise the planet host star parameters taking advantage of the high-S/N co-added spectrum from serval and other available photometric data. Table <ref> compiles the stellar parameters derived in this work and taken from the literature. §.§ Stellar parameters To determine the bolometric luminosity of HD 235088, we first built the star's photometric spectral energy distribution (SED) using broad- and narrow-band photometry from the literature. The stellar SED is shown in Figure <ref>, including the Johnson UBVR photometry <cit.>, the ugriz data from the Sloan Digital Sky Survey <cit.>, Gaia Early Data Release 3 and Tycho photometry <cit.>, the Two Micron All Sky Survey (2MASS) near-infrared JHK_s photometry <cit.>, the Wide-field Infrared Survey Explorer (WISE) W1, W2, W3, and W4 data <cit.>, the AKARI 8-μm flux <cit.>, and the optical multi-photometry of the Observatorio Astrofísico de Javalambre (OAJ), Physics of the Accelerating Universe Astrophysical Survey (JPAS), and Photometric Local Universe Survey (JPLUS) catalogs. All of this photometric information is accessible through the Spanish Virtual Observatory <cit.>. In total, there are 90 photometric data points defining the SED of HD 235088 between 0.30 and ∼25 μm. The OAJ/JPAS data nicely cover the optical region in the interval 0.40–0.95 μm with a sampling of one measurement per 0.01 μm. In Figure <ref>, the effective wavelengths and widths of all passbands were taken from the Virtual Observatory SED Analyzer database <cit.>. The Gaia trigonometric parallax was employed to convert all observed photometry and fluxes into absolute fluxes. The SED of HD 235088 is clearly of photospheric origin, since there are no obvious mid-infrared flux excesses up to 25 μm. We integrated the SED displayed in Figure <ref> over wavelength to obtain the absolute bolometric flux (F_ bol) using the trapezoidal rule. The Gaia G-band flux was not included in the computations because the large passband width of the filter encompasses various redder and bluer filters.
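The bolometric-flux integration and the subsequent Stefan-Boltzmann radius estimate can be sketched as follows. The SED arrays in the first step are placeholders (not the real photometry), while the luminosity and T_eff in the second step are the values quoted in the text, so the recovered radius should be close to the reported 0.789 R_⊙.

```python
import numpy as np
from astropy import units as u
from astropy import constants as const

# (1) absolute bolometric flux from a tabulated SED (placeholder arrays, ~90 points)
wave_um = np.logspace(np.log10(0.30), np.log10(25.0), 90)   # wavelength [micron]
flam = 1.0e-12 * wave_um ** -2.0                            # F_lambda [W m^-2 micron^-1], placeholder
fbol = np.trapz(flam, wave_um)                              # trapezoidal rule -> W m^-2
mbol = -2.5 * np.log10(fbol) - 18.988                       # IAU 2015 bolometric zero point

# (2) Stefan-Boltzmann radius using the luminosity and Teff quoted in the text
lum = 0.3609 * u.Lsun
teff = 5037.0 * u.K
radius = np.sqrt(lum / (4.0 * np.pi * const.sigma_sb * teff ** 4)).to(u.Rsun)
print(mbol, radius)   # radius ~ 0.79 Rsun
```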
We then applied M_ bol = -2.5  log F_ bol - 18.988 <cit.>, where F_ bol is in units of W m^-2, to derive an absolute bolometric magnitude M_ bol = 5.846 ± 0.016 for HD 235088, from which we obtained a bolometric luminosity of L = 0.3609 ± 0.0052 L_⊙. The quoted error bar accounts for the photometric uncertainties in all observed bands and the trigonometric distance error, although photometry contributes most to the luminosity error. The contribution of fluxes at bluer and redder wavelengths not covered by the photometric observations to the stellar bolometric flux is less than 1%. Then, we computed T_ eff, logg, [Fe/H], and total line broadening <cit.> by means of the SteParSyn code[<https://github.com/hmtabernero/SteParSyn/>] <cit.>. The SteParSyn code is a implementation of the spectral synthesis method that uses the emcee MCMC sampler to infer the stellar atmospheric parameters. To perform the spectral synthesis, we employed a grid of synthetic spectra computed with the Turbospectrum <cit.> radiative transfer code and the MARCS stellar atmospheric models <cit.>. The spectral synthesis employed atomic and molecular data gathered from the Gaia-ESO (GES) line list <cit.>. To determine the stellar parameters, we fit a selection of Fei,ii lines that are well suited to analyzing FGKM stars <cit.>. Using SteParSyn, we computed the following stellar atmospheric parameters: T_ eff = 5037 ± 14 K, logg = 4.63 ± 0.02 dex, [Fe/H] = -0.01 ± 0.02 dex, and V_ broad = 2.89 ± 0.03 km s^-1. Regarding its spectral type, the spectrum of 23 is highly similar to that of HD 166620, which in turn was analyzed by <cit.> with exactly the same CARMENES configuration as in our observations. In all, HD 166620 is a well investigated K2 V standard star with T_ eff = 5039± 85 K and logg = 4.66 ± 0.21 dex <cit.>, compatible at the 1σ-level to those of 23 (e.g. ). The radius of HD 235088 can be obtained from the Stefan-Boltzmann law that relates bolometric luminosity, effective temperature (T_ eff), and stellar size. From the spectral fitting of the CARMENES data, HD 235088's T_ eff is determined to be 5037 ± 14 K. Therefore, we estimate a radius of R_⋆ = 0.789^+0.022_-0.021 R_⊙, where the quoted error accounts for the luminosity uncertainty and an increased ±50 K error in temperature (the latter accounts for the SteParSyn error and possible systematics not included in the spectral fitting analysis, e.g. different model atmospheres). This radius determination is independent of any evolutionary model and depends only on the distance (well known with Gaia), the bolometric luminosity (well determined), and the model atmospheres used to fit the observed spectra. The mass of HD 235088 can be derived following empirical mass–luminosity relationships. <cit.> used masses and radii of eclipsing binaries with FGK main-sequence components to obtain a reliable mass–radius relation, from which we inferred a mass of M_⋆ = 0.843 ^+0.033_-0.056 M_⊙ for our star (by adopting a solar chemical composition). §.§ Stellar rotation and age determination <cit.> computed a rotational period (P_rot) of 5.79 ± 0.15 days from a Lomb-Scargle periodogram of TESS SAP data and estimated an age of 165±30 Myr using gyrochonologic relations. <cit.> derived a similar P_rot using TESS data (6.14 d), but of 12.8–14 d with STELLA and REM photometric data. However, they estimated a significantly older age of 600 Myr for 23. Here, we used seven sectors of TESS SAP data to derive a P_rot of 12.0 ± 0.4 days (see Sect. 
<ref> for details), which is consistent with the maximum peak at ∼11.8 days from the computed generalized Lomb-Scargle periodogram (, see Fig. <ref>) and with the X-ray luminosity we measured, according to the relations in <cit.>. Quantitatively, we estimated 23's age using gyrochronology with the relations presented in <cit.> and <cit.>, also used in <cit.>. We obtained ages of 630^+100_-85 Myr and 690^+200_-80 Myr, respectively. Figure <ref> shows the distribution of rotation periods as a function of color G-J for different young clusters. Qualitatively, the age of 23 is consistent with the sequences of Praesepe, Hyades, and NGC 6811, that is, between 590 and 1000 Myr. Moreover, from the relation between the age for young stars and the coronal X-ray emission <cit.> and using the L_ X derived in Sect. <ref>, we obtained 700^+1050_-425 Myr for 23. Another age indicator that is commonly used is the atmospheric absorption of LiI 6709.61 Å. We looked for LiI in 23 in the co-added spectrum generated by from the CARMENES spectra, but no clear Lii feature appeared. Therefore, we set an upper limit at 3σ of 3 mÅ, indicating that the star is not younger than the Hyades or Praesepe (∼650 Myr; Fig. <ref>), namely, the Lii EW is not compatible with the age proposed by <cit.>. Lastly, we calculated the UVW galactocentric space velocities <cit.> of 23 using Gaia astrometry and systemic velocity (γ) to determine if the object shares kinematics properties with known clusters, moving groups, or associations. The UVW velocities of 23 are consistent with the young disk and, more particularly, with the Hyades supercluster (Fig. <ref>) which probably indicates its belonging to this supercluster. However, to prove its Hyades supercluster membership a more detailed study must be carried out. From the UVW velocities, we estimated an age of 600-800 Myr <cit.> which is consistent with the age reported by <cit.>, the rotation period, the X-ray emission, and the value we adopted for 23. § VALIDATION OF THE PLANET CANDIDATE We used the multi-color transit analysis approach described in <cit.> to validate 23.01 as a planet. The approach uses <cit.> to model the TESS photometry with the ground-based multi-color transit photometry to estimate the degree of flux contamination from possible unresolved sources inside the TESS photometric aperture. The contamination estimate yields a robust radius ratio estimate for the transiting planet candidate that accounts for any possible third light contamination allowed by the photometry. Combining the robust radius ratio with the stellar radius gives an absolute radius estimate of the planet candidate and if this absolute radius can be securely constrained to be smaller than the minimum brown dwarf radius limit (∼0.8 R_J, ), the planet candidate can be considered as a bona-fide transiting exoplanet. This approach has been used several times in the literature to validate or reject planet candidates orbiting faint stars <cit.>. Our analysis, which combines the TESS and ground-based MuSCAT2 light curves, gives a robust radius estimate of R_true = 2.63^+0.91_-0.61 R_⊕. This estimate can be considered the most reliable radius estimate for the transiting object when assuming that the TESS aperture could contain flux contamination from an unknown (unresolved) source and is consistent with the radius estimate derived from TESS photometry (R_ p = 2.045±0.075 R_⊕). 
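For reference, converting a fitted planet-to-star radius ratio into an absolute planetary radius only involves the stellar radius derived above; a minimal sketch with an illustrative (not fitted) radius ratio:

```python
R_SUN_IN_REARTH = 109.08      # nominal solar radius in Earth radii

r_star = 0.789                # R_sun, stellar radius derived above
k = 0.0238                    # illustrative planet-to-star radius ratio R_p / R_star

r_planet = k * r_star * R_SUN_IN_REARTH
print(f"R_p = {r_planet:.2f} R_earth")   # ~2.0 R_earth for these inputs
```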
The radius estimate is significantly below the brown dwarf radius limit of 0.8 R_J, and, thus, we validate 23.01 as a sub-Neptune-sized object 23 b. Furthermore, this analysis rules out a significant flux contamination from an unresolved source of a different spectral type than the host and, thus, the basic radius obtained with that assumption can be safely adopted as the planet's radius. § RESULTS §.§ Refinement of the planetary parameters We refined the properties of the 23 system fitting with the available TESS SAP photometric data. We added to the transiting model a GP with quasi-periodic kernel to account for the photometric variability of young star. We adopted the stellar parameters derived in Sect. <ref> and presented in Table <ref>. The fitted parameters with their prior and posterior values, and the derived parameters for 23 b are shown in Table <ref>. The TESS data along with the best transiting and GP models are shown in Fig. <ref>, and 23 b phase folded photometry is shown in Fig. <ref>. We could improve the ephemeris and properties of 23 b with the new TESS data, deriving a radius of R_p = 2.045±0.075 R_⊕. Moreover, the hyperparameter GP_P_ rot accounts for the periodicities in the photometric variation, which are expected to come from the stellar rotation of the young host star. We obtained GP_P_ rot = 12.0 ±0.4 days, therefore, we adopt the GP_P_ rot value as 23's P_ rot. In Sect. <ref>, we computed with the generalized Lomb-Scargle periodogram method a consistent periodicity (∼11.8 days, see Fig. <ref>). Some values needed to compute the transmission spectra come from the planetary mass measurement, which is still not available for our target at the time of writing. We used the information, and predicted mass (∼7 ± 2 M_⊕) from <cit.> to estimate a semi-amplitude K_⋆ of ∼2.5 m s^-1. §.§ Stellar activity analysis Before computing the TS of the Hα and HeI lines, we looked for stellar activity during the observations, which could compromise or challenge the detection of planetary signals, taking advantage of the simultaneous TESS observations to the CARMENES data. However, because 23 b's transit is shallow, and 23 is a young star, TESS data from a single transit do not have enough quality to clearly see if 23 b passed in front of active regions or spots. There is only a ∼20 minute event close to mid-transit with higher flux than expected in transit, but that level of variation is consistent with the photometric variations detected in other regions of the TESS light curve. We also checked the time evolution of the activity indicators during the observations and constructed the light curve of several stellar lines (all the wavelengths are given in vacuum), namely: Hei D3 (5877.2 Å), H Paschen β (Pa-β, 12821.6 Å), H Paschen γ (Pa-γ, 10941.1 Å), and H Paschen δ (Pa-δ, 10052.1 Å). The time evolution of these lines are shown in Fig. <ref>. They are mainly flat except for the Hα index, which exhibits a strange behaviour just at the beginning of the transit. The Hα index increases smoothly, coincident with the ingress time, and then decreases at about after mid-transit. However, the HeI D3 light curve is stable during the observations without evidence of fluctuations, suggesting that HeI lines were not extremely affected by the Hα variability. This is consistent with the HeI line being significantly less variable than the Hα line in the overall M dwarf sample observed by CARMENES (). 
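The line indices used above can be measured with a simple core-to-continuum flux ratio. The sketch below is generic: the core and continuum band limits are illustrative placeholders, not the exact passbands adopted for each line.

```python
import numpy as np

def line_index(wl, flux, line_center, core_width=0.6,
               cont_bands=((-5.0, -3.0), (3.0, 5.0))):
    """Mean flux in the line core divided by the mean flux in two nearby
    continuum bands. Wavelengths and widths are in Angstrom (vacuum); the
    default band limits are purely illustrative."""
    core = np.abs(wl - line_center) <= core_width / 2.0
    cont = np.zeros_like(wl, dtype=bool)
    for lo, hi in cont_bands:
        cont |= (wl >= line_center + lo) & (wl <= line_center + hi)
    return np.mean(flux[core]) / np.mean(flux[cont])

# Example: He I D3 index of a single spectrum (wl, flux from one reduced order).
# index_he_d3 = line_index(wl, flux, 5877.2)
```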
Because we used fiber B to monitor the sky, there were no simultaneous Fabry-Pérot calibrations during the observations. Thus, the expected amplitude of ∼1 m s^-1 for 23 b's Rossiter-McLaughlin effect is below the uncertainties of the measured RVs. §.§ Hα and HeI transmission spectra Figure <ref> (top) shows the residual map and TS around the Hα line. The residual map displays absorption coincident in time with the Hα line index feature. To avoid the affected spectra by Hα variability, we computed the Hα TS without these spectra. We only included the spectra from the second half of the transit in the calculation of the TS. The Hα TS (Fig. <ref> top right) is mainly flat without a clear planetary absorption signal. We could only set an upper limit to Hα excess absorption of ∼0.9 %, which is computed as three times the root mean square of the TS near the line of interest. The HeI triplet residual map in Fig. <ref> (bottom) shows an absorption region only during the transit, and consistent with the expected location of the planetary trace. From Sect. <ref>, we conclude that there is no evidence of strong contamination of the HeI lines. Thus, the TS shown in the bottom right panel in Fig. <ref> was computed from all the spectra taken fully in-transit, between the first and fourth contacts. The TS shows a clear absorption feature from the two strongest lines of the HeI NIR triplet. We fit the HeI signal with a Gaussian profile, sampling from the parameter posterior distributions using the MultiNest algorithm () via its python implementation <cit.>. We used uniform priors for the excess absorption between ±3 %, and the central position (λ_0) from 10830 Å to 10835 Å. Table <ref> summarizes the priors and posteriors from the nested sampling fit, and other derived properties as well. Figure <ref> displays the posterior corner plot of the distributions. We obtained an HeI triplet absorption signal of -0.91±0.11 %, with an equivalent width of 9.5±1.1 mÅ and significantly blue-shifted (-6.6±1.3 km s^-1). Our absorption is deeper, but consistent at the 2σ-level with the previous detection with Keck/NIRSPEC reported by <cit.> (depth -0.64±0.06 %, equivalent width 6.6±0.5 mÅ, and blue shift -4.0±1.4 km s^-1). If we interpret the mild tension between the two values of the absorption depth as variability, it could be shown to originate from the stellar activity. The stellar activity revealed in our analysis of the Hα line could increase the XUV stellar flux, increasing the population of the level. <cit.> did not present a study of the stellar activity in other than the lines during their observations, preventing us from examining whether the different signals are caused by the stellar variability between the two transits. Keck/NIRSPEC Y band covers from 0.946–1.130 μm, in which there are two lines of the H Paschen series: Pa-γ (10941.1 Å), and Pa-δ (10052.1 Å). The light curves of the stellar Pa-γ, Pa-δ, and Pa-β as well, are shown in Fig. <ref>. As we noted in Sect. <ref>, those H Paschen lines do not show evidence of stellar activity or variability, as it is detected in the Hα index and the Hα transit light curve. Thus, it seems the lines in the NIR may not be the best option to check for stellar activity or variability. The transit light curves (TLC) of individual lines are useful for exploring the time evolution of the emission and absorption features reported in the residual maps and TS. The Hα and HeI triplet TLCs are displayed in Figure <ref>. 
We computed the TLC for the planetary Hα and HeI triplet lines following the methodology applied in <cit.>, based on previous transmission spectrum analyses (i.e. ). We considered two different band-passes to integrate the counts in the planet rest frame. For the Hα TLC, we took the generic values of 0.5 Å, and 1 Å centered at the nominal Hα wavelength. For the HeI triplet TLC, the band-passes are equal to the fitted σ width (0.4 Å), and the FWHM width (0.95 Å), both centered at the fitted λ_0 to account for the detected blue-shift. The Hα TLC (Fig. <ref> left) is clearly affected by the stellar variability, and shows a very similar time evolution to the Hα line index. The data points from the second half of the transit are consistent with a null absorption within 1σ. The HeI TLC (Fig. <ref> right) displays the time evolution of the planetary transit. From the HeI TLC with a band pass of 0.4 Å, we retrieved an excess transit depth of -1.00±0.11 % compared to the continuum, which is consistent with the TS absorption. When using a broader band pass of 0.95 Å, we get an excess transit depth of -0.74±0.07 %. We do not detect the pre-transit absorption reported by <cit.>. The two first points after transit are slightly negative, but consistent with null absorption, but below 0 %. Thus, we recomputed the Hei residual map, TS, and TLC again with those points as part of the transit. However, we obtained very similar results, suggesting that there is no clear evidence for an extended HeI tail (). The HeI TLC is asymmetric. Qualitatively, the HeI absorption increases from ingress until mid-transit, and remains constant until the rapid decrease at egress. In a band-pass of 0.4 Å, the first half and second half of the transit have a transit depth of -0.88±0.14 %, and -1.15±0.15 %, respectively. We divided the CARMENES spectra in seven different phases to study the temporal evolution of the HeI triplet signal. We defined the phases as i) pre-transit, ii) around the ingress, iii) between ingress and the center of the transit (start), iv) center of the transit, v) between center of the transit and egress (end), vi) around the egress, and vii) post-transit. Each in-transit phase has between about five and six spectra, so the results have comparable uncertainties. Figure <ref> (top panel) displays an infographic about the spectra used, and the coverage of each defined phase. We computed the TS, and the TLC for each phase. Figure <ref> displays the TS for each phase, and the absorption values from TS and TLC are presented in Table <ref>. We checked that the TS and TLC HeI signals during the pre- and post-transit phases are consistent with null absorption from the planet. Because the ingress and egress duration of 23 b is of the order of one exposure (5 min), it is difficult to inspect the differences between the terminators. At best, we can use our defined ingress and egress phases as a proxy, but these include also spectra that are taken close in time, but not strictly during the ingress and egress, respectively. We can clearly see differences between the ingress and egress phases. The ingress absorption is consistent with 0 %, whereas we computed a significant absorption of -0.76 % from the egress TS and TLC. Those differences could be produced by variations between the terminators, but could also have their origin in the stellar variability detected in the Hα line only during the first half of the transit. 
Although, similarly to the work of <cit.>, we did not find any evidence of a cometary-like tail for HD 235088 b, another plausible explanation could be the accumulation of material at the egress terminator due to an incipient or failed formation of an HeI tail. From Fig. <ref>, the HeI signal appears to be consistently blue-shifted during the transit, and it reaches its deepest point during the center and end phases. The egress phase shows an absorption comparable to that of the start phase. § MODELING THE HEI ABSORPTION We analyzed the HeI absorption spectrum following the method used in previous studies for 20, 18, and 34 <cit.>, and (more recently) for HAT-P-32 b, WASP-69 b, GJ 1214 b, and WASP-76 b <cit.>. Briefly, we used a one-dimensional hydrodynamic and spherically symmetric model, together with a non-local thermodynamic equilibrium model, to calculate the density distribution in the upper atmosphere of the planet <cit.>. The absorption was subsequently computed by using a radiative transfer code for the primary transit geometry <cit.>, which includes Doppler line shapes broadened by the atmospheric temperature, by turbulent velocities, and by the velocity of the outflowing gas along the line of sight. We computed the phase-averaged synthetic absorption (i.e. the average from the T_1 to T_4 contacts) and included the effects of the impact parameter. Some improvements and updates of the model are described in <cit.>. Our estimate of the planetary mass corresponds to a planet-to-star mass ratio of 2.5 × 10^-5. Using the formula given by <cit.>, we estimate a Roche lobe radius of 2.2 R_J for the planet, which yields a transit depth of about 8.0 % for the Roche volume. The observed 0.90 % signal can, therefore, be plausibly caused by material within the planetary Roche lobe. In our detailed analysis (see below), we found that for an intermediate temperature of the best fits (see Fig. <ref>), the contribution of the layers inside the Roche lobe (under the assumption of no stellar winds, SWs) to the total absorption is about 80%. This contribution is larger for lower temperatures and smaller for higher temperatures. Also, it is larger if stellar winds are considered. The model inputs specific to this planet are described below. The stellar and planet parameters of the system are listed in Table <ref>. A key input is the stellar flux from the XUV to the near-UV, covering the range from 1 Å up to 2600 Å. The 1–1600 Å spectral energy distribution was modeled as described in Sect. <ref>, and the coverage up to 2600 Å was completed by using the photospheric models by <cit.>. The H/He ratio is another key parameter affecting the mass-loss rate and temperature ranges of the upper planetary atmosphere. In previous studies, this ratio has been constrained by using atomic hydrogen absorption measurements ( or ) <cit.> or by using theoretical arguments about the upper limit of the heating efficiency <cit.>. For most of the studied planets, a large H/He ratio, larger than 97/3, has been found. In particular, for the sub-Neptune 12, we derived a value of 98/2 <cit.>. This planet is, among those analyzed, the one closest in size to HD 235088 b; hence, given the lack of further information on the H/He ratio, we use the same value in this analysis. We found that the distribution of this planet is very extended, much more than in previously studied planets, including the sub-Neptune 12. An example of the modelled transmission for the combined transit (the average over the different phases) is shown in Fig.
<ref> for a thermospheric temperature of 3000 K and a sub-stellar mass-loss rate of 2.4 × 10^10 . Because of the weak surface gravity of this planet, the velocities of the outflowing gas resulting from the hydrodynamic model are very large even at low radii, for instance, in the range of 5–15 km s^-1 for r = 2–15 R_ P, thus inducing a very prominent broadening (compare blue and orange curves in Fig. <ref>). As described in Sect. <ref>, the absorption peak is shifted to blue wavelengths by -6.6 km s^-1, suggesting that a large fraction of the observed atmosphere is flowing towards the observer. Similar blue shifts have been found for most of the planets with detections. Since our 1D spherical and homogeneous model cannot predict it, we imposed a net shift of -6.6 km s^-1 in our calculations. With the method described above we constrain the mass-loss rate and temperature of the planet's upper atmosphere to the ranges of (1.5–5) × 10^10 and T = 2750 K–3500 K (see Fig. <ref>). As this planet has a very extended atmosphere, the absorption at high altitudes is significant and, hence, it is advisable to estimate the potential effects of the stellar wind <cit.>. We did this by assuming that the atmosphere is still spherical but extended only up to the ionopause in the substellar direction <cit.>. The results (displayed as the black triangles in Fig. <ref>) show that a strong SW does not significantly change our nominal -T ranges. Furthermore, as we could not constrain the H/He ratio, we also explored the effects of varying this ratio (see Fig. <ref>). The results show that increasing the H/He ratio to 99/1 does not significantly change the mass-loss rate. However, if the ratio were as low as 90/10 (unlikely given the previous results), then the mass-loss rate would be significantly lower. <cit.> estimated a temperature of 6700 ± 300 K and a mass-loss rate of ∼1.3 × 10^11 for this planet. We derive significantly smaller values for both. Although we assumed different F_ XUV stellar fluxes (by a factor of three; see above), we found that this does not explain the differences. Instead, if we assume an upper boundary at 10–15 in our model and an H/He = 90/10 value ( used 11 and H/He = 90/10; M. Zhang, priv. comm. 2023) our results agree well. Nevertheless, as we would need an extraordinarily strong SW for confining the planetary wind at those altitudes, while current findings suggest a high H/He ratio, we think that this scenario is less likely and that the mass-loss rates and temperatures we derive are more plausible. From a theoretical point of view, it is important to determine the hydrodynamic escape regime of the planets. Following the method used in previous studies <cit.>, we found that for the assumed H/He ratio of 98/2, this planet is in the photon-limited regime, with a heating efficiency of 0.23 ± 0.03. However, for an H/He ratio of 90/10, although still in the photon-limited regime it approaches the energy-limited case, particularly at lower temperatures (3000 K). § CONCLUSIONS We derived new stellar parameters for the K-type star 23 and we obtained a new age estimation of 600–800 Myr. Furthermore, by using multiband photometry from MuSCAT2, we confirmed the planetary nature of 23 b and refined its planetary parameters. With a radius of R_p = 2.045±0.075 R_⊕, 23 b is a young sub-Neptune planet close to the “radius gap” valley (). More interesting is the confirmation with CARMENES spectra of an evaporating HeI atmosphere. 
Our excess absorption of -0.91±0.11% is ∼2σ deeper than the previous detection with Keck/NIRSPEC by <cit.>. The difference in the absorption depths suggests possible HeI variability, which should be clarified by further HeI observations. We also analyzed the HeI signal detected in the transmission spectrum via hydrodynamical modeling. In comparison with previously studied planets <cit.>, the mass-loss rate and temperature of HD 235088 b are generally in the expected ranges. It has a low mass-loss rate corresponding to its moderate XUV irradiation level, and a low temperature as expected given its low gravitational potential. Its mass-loss rate and temperature are slightly lower than those of the sub-Neptunes 34 and 12. There are only three exoplanets smaller than 23 b with atmospheric detections[According to the ExoAtmospheres database (<http://research.iac.es/proyecto/exoatmospheres/index.php>)], and all are only tentative and are based on Hubble Space Telescope or James Webb Space Telescope observations: GJ 1132 b (1.16±0.11 R_⊕, ), GJ 486 b (1.34 ± 0.06 R_⊕, ), and LHS 1140 b (1.727±0.032 R_⊕, ). In this work, we confirmed the presence of HeI in the atmosphere of 23 b, making this planet the smallest one with such a robust atmospheric detection – not only of HeI but of any atom or molecule. CARMENES is an instrument at the Centro Astronómico Hispano en Andalucía (CAHA) at Calar Alto (Almería, Spain), operated jointly by the Junta de Andalucía and the Instituto de Astrofísica de Andalucía (CSIC). CARMENES was funded by the Max-Planck-Gesellschaft (MPG), the Consejo Superior de Investigaciones Científicas (CSIC), the Ministerio de Economía y Competitividad (MINECO) and the European Regional Development Fund (ERDF) through projects FICTS-2011-02, ICTS-2017-07-CAHA-4, and CAHA16-CE-3978, and the members of the CARMENES Consortium (Max-Planck-Institut für Astronomie, Instituto de Astrofísica de Andalucía, Landessternwarte Königstuhl, Institut de Ciències de l'Espai, Institut für Astrophysik Göttingen, Universidad Complutense de Madrid, Thüringer Landessternwarte Tautenburg, Instituto de Astrofísica de Canarias, Hamburger Sternwarte, Centro de Astrobiología and Centro Astronómico Hispano-Alemán), with additional contributions by the MINECO, the Deutsche Forschungsgemeinschaft (DFG) through the Major Research Instrumentation Programme and Research Unit FOR2544 “Blue Planets around Red Stars”, the Klaus Tschira Stiftung, the states of Baden-Württemberg and Niedersachsen, and by the Junta de Andalucía. We acknowledge financial support from the State Agency for Research of the Spanish MCIU (AEI) through projects PID2019-110689RB-I00, PID2019-109522GB-C51, PID2019-109522GB-C5[4]/AEI/10.13039/501100011033, and the Centre of Excellence “Severo Ochoa” award to the Instituto de Astrofísica de Andalucía (CEX2021-001131-S). The first author acknowledges the special support from Padrina Conxa, Padrina Mercè, Jeroni, and Mercè. This work has made use of resources from AstroVallAlbaida-Mallorca collaboration. J.O.M. gratefully acknowledge the inspiring discussions with Maite Mateu, Alejandro Almodóvar, and Guillem Llodrà, and the warm support from Yess. This work is partly supported by JSPS KAKENHI Grant Numbers P17H04574, JP18H05439, JP21K13955, and JST CREST Grant Number JPMJCR1761. This paper is based on observations made with the MuSCAT2 instrument, developed by ABC, at Telescopio Carlos Sánchez operated on the island of Tenerife by the IAC in the Spanish Observatorio del Teide. 
This research was supported by the Excellence Cluster ORIGINS which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311. SC acknowledges support from DFG through project CZ 222/5-1. § TESTING THE MUV FLUX LEVEL One possibility for testing the actual level of the MUV flux would be the use of the UV filters onboard XMM-Newton, as suggested by <cit.> and references therein. The XMM-Newton pipeline reports a flux calibrated assuming a flat SED in the whole spectral range of the filter (AB flux). For sources with a steep SED in the UV region, as in the case of late-type stars, this is not a valid approach. Instead, the effective area of the XMM-Newton Optical Monitor (OM) filter must be convolved with the stellar SED. A similar procedure was followed by <cit.>. The XMM-Newton pipeline provides, for the source at the position of HD 235088, a count rate of 1.6 cts/s in the UVW2 (λλ 2120 ± 500 Å) filter, and 2.7 cts/s in the UVM2 filter (λλ 2310 ± 480 Å). Although the effective area of these filters is mainly within the nominal limits, there is a non-negligible tail towards longer wavelengths. To test at which wavelengths the photons recorded in the OM/UVW2 filter had originated, we extended our modelled SED up to 7000 Å, assuming black-body emission for a star with the temperature and size values listed in Table <ref>. We applied a 30% reduction of the efficiency of the instrument due to degradation of the CCD[<https://xmmweb.esac.esa.int/docs/documents/CAL-SRN-0378-1-1.pdf>] over time. If we limit our test to the nominal UVW2 band-pass, we obtain 0.14 cts/s, just ∼9% of the observed count rate, but the use of the SED in the whole 1600–7000 Å spectral range results in a count rate of 1.59 cts/s. The same procedure with the OM/UVM2 filter yields 2.7 cts/s. Both values are in excellent agreement with the observed count rates. This implies that no correction to the general level of our SED is needed. Figure <ref> shows the weighted count rate after convolving the effective area of the OM/UVW2 filter with an assumed SED. We used a flat SED (useful for an AB magnitude or flux) in the nominal band-pass, and a realistic stellar SED in the whole spectral range. Although both combinations result in the same accumulated count rate, they yield quite different fluxes given the different distribution of photons along the spectrum. We find that 90% of the photons actually come from a wavelength range outside the nominal band-pass. This test indicates that the use of these filters to evaluate the stellar flux in the UV band must be taken with care. § PHOTOMETRIC FIT EXTRA FIGURES § MULTI-COLOR VALIDATION § ADDITIONAL FIGURES AND TABLES
http://arxiv.org/abs/2307.04417v1
20230710084558
Handling Group Fairness in Federated Learning Using Augmented Lagrangian Approach
[ "Gerry Windiarto Mohamad Dunda", "Shenghui Song" ]
cs.LG
[ "cs.LG", "cs.CY" ]
Gerry Windiarto Mohamad Dunda and Shenghui Song, The Hong Kong University of Science and Technology. Federated learning (FL) has garnered considerable attention due to its privacy-preserving feature. Nonetheless, the lack of freedom in managing user data can lead to group fairness issues, where models might be biased with respect to sensitive attributes such as race or gender, even if they are trained using a legally compliant process. To redress this concern, this paper proposes a novel FL algorithm designed explicitly to address group fairness issues. We show empirically on the CelebA and ImSitu datasets that the proposed method can improve fairness both quantitatively and qualitatively with minimal loss in accuracy in the presence of statistical heterogeneity and with different numbers of clients. Besides improving fairness, the proposed FL algorithm is compatible with local differential privacy (LDP), has negligible communication costs, and results in minimal overhead when migrating existing FL systems from common FL protocols such as FederatedAveraging (FedAvg) <cit.>. We also provide the theoretical convergence rate guarantee for the proposed algorithm and the required noise level of the Gaussian mechanism to achieve the desired LDP. This innovative approach holds significant potential to enhance the fairness and effectiveness of FL systems, particularly in sensitive applications such as healthcare or criminal justice. § INTRODUCTION Federated learning (FL) <cit.> is a distributed machine learning approach that enables model training on potentially sensitive data from different entities without the need for data sharing. This technique is promising in diverse domains such as computer vision (CV), as it can facilitate the training of models on a large-scale, diverse set of data while preserving data privacy. However, FL can also present challenges related to group fairness, which refers to the equitable treatment of different groups in a population. Group fairness may be required by law, such as in Europe <cit.>, to ensure that any decision making by predictive models trained using FL does not exhibit bias towards any particular group, for example one defined by race or gender. For instance, an AI model used in a company's hiring process may have been trained on historical data that reflects biased hiring patterns, leading to discriminatory outcomes for underrepresented groups in the workforce. Further examples <cit.> motivate raising awareness of the need to train fair deep learning models. Group unfairness in FL-trained deep learning models may originate from statistical heterogeneity, where the data used by individual clients is inherently biased. The biased data leads to a biased model, making it crucial to address statistical heterogeneity in FL-based models. However, handling statistical heterogeneity, that is, non-independent and identically distributed (non-iid) data, is an arduous task and currently an open problem <cit.>. In this paper, we aim to reduce the group unfairness of FL-trained models solely through the training mechanism. While off-the-shelf methods to prevent learning bias, such as modifying the loss function <cit.>, are available in centralized learning, adopting them in FL can be challenging because, apart from potentially more computation, they may also require additional communication and careful consideration of privacy. Considering the difficulties associated with mitigating learning bias in FL, we propose a regularization technique to alleviate this issue.
Our approach involves formulating the local optimization as a constrained minimax optimization problem using a fairness metric, and it can be used alongside local differential privacy (LDP) <cit.>. In addition, we design an FL protocol that uses an augmented Lagrangian solver to tackle this optimization problem. We provide a detailed description of the proposed method in Section <ref> and offer theoretical results for the convergence rate and the required noise level of the Gaussian mechanism to satisfy LDP in Section <ref>. We evaluate the accuracy of the proposed algorithm on two CV datasets along with the fairness performance in Section <ref>. Our contributions are stated as follows. * We propose a new FL protocol to ensure group fairness. It follows the same framework as FedAvg except for some modifications in the local training phase and the aggregation phase. * We provide a convergence guarantee and an upper bound for the standard deviation of the Gaussian noise required to guarantee LDP when using the proposed algorithm. * We focus on the fairness evaluation of FL-trained CV models to fill a gap in fair FL research, as most works evaluated their methods on categorical datasets with little focus on image datasets. The proposed method has several key merits. Firstly, the empirical results show that the proposed approach is capable of increasing the fairness of the ML model without significant loss of accuracy when compared with the baselines, as discussed in Section <ref>. Secondly, some practical challenges may appear when deploying a new FL algorithm in real-world systems. In the following, we outline several notable features of the proposed FL algorithm that facilitate its implementation. Straightforward implementation from FedAvg. The proposed algorithm adds little overhead when migrating from FedAvg. Since we use stochastic gradient descent ascent (SGDA), in addition to performing gradient descent on the model, clients need to update a dual variable with gradient ascent during the local training. This computation is independent of the gradient calculation of the model parameters, which means it can be executed sequentially. Apart from the model updates, the server also needs the dual variable updates from each client. Similar to aggregating model updates, the server aggregates dual variables by averaging if FedAvg is used. This shows that the proposed method only adds two independent steps to the current FedAvg implementation. Compatibility with the existing privacy mechanism. Attackers may steal information (model updates) during the communication phase in FL. They can reverse-engineer them to infer sensitive data owned by the participating clients. To prevent this issue, LDP can be used to protect user data. In our implementation, we use the Gaussian mechanism on model updates to ensure the privacy guaranteed by LDP <cit.>. Negligible communication overhead. Compared with FedAvg, the proposed method only adds an extra scalar variable to the training framework, which needs to be exchanged between the client and server. This means that the proposed method introduces only negligible communication overhead. § RELATED WORK There have been some encouraging results in tackling fairness issues in deep-learning models. We categorize prior related works based on how the training is conducted: either centralized or federated learning. Ensuring fairness in centralized learning.
In centralized learning, it is not uncommon to modify the training framework to achieve a suitable degree of group fairness. The authors of <cit.> decorrelated the input images and the protected group attributes by using adversarial training. Knowledge transfer techniques and multi-classifiers can also be adopted as a debiasing method <cit.>. Augmenting each image sample with its perturbed version generated from generative models can potentially reduce biases as well <cit.>. The aforementioned works require additional components to the model, thus increasing the computation cost. This might not be suitable for FL. A possible alternative is to alter the loss function to take into account group fairness. The authors of <cit.> introduced a loss function obtained from the upper bound of the Lagrangian based on a constrained optimization formulation, which is closely related to this work. While they introduced a regularizer for the dual variable, the proposed method uses the augmented Lagrangian method with a squared constraint penalty term. Ensuring fairness in FL. Some prior works considered group fairness in FL. Due to system constraints, most innovations came from modifying the objective function of the training, the optimization methods, or more information exchange. The example for the latter is FairFed <cit.>, where the client weights are adaptively adjusted during the aggregation phase based on the deviation of each client's fairness metric from the global one. Tackling fairness by altering the objective function includes utilizing differential multipliers to solve a constrained optimization problem (FPFL) <cit.> and adjusting the weight of the local loss function for each sensitive group during the aggregation phase (FedFB) <cit.>. Compared with FPFL, the proposed method uses equality constraint instead of the inequality constraint. Also, FPFL has some limitations such as it sends the client statistics separately (gradients, values of the current loss function, and the number of data) instead of the updated model directly to the server, which in turn increases privacy risks. Moreover, no theoretical convergence rate was provided in <cit.>. Along the line of modifying the optimization method, FCFL <cit.> proposed a two-stage optimization to solve a multi-objective optimization with fairness constraints, which demands more communication rounds. Most of the aforementioned existing works except <cit.> only evaluated their methods on categorical datasets. These comparisons are summarized in Table <ref>. § PRELIMINARIES In this section, we introduce some mathematical notations that are often used in this paper. After that, we briefly describe the problem formulation of the conventional FL along with its algorithm. §.§ Notations Throughout this paper, we primarily focus on classification tasks in CV with groups consisting of binary sensitive (protected) attributes s ∈{0,1}. Such binary sensitive attributes can be written as s_0 and s_1 to represent s=0 and s=1 respectively. Also, the dataset 𝒟 with size |𝒟| constitutes of pairs of input x and label y with y ∈{0,1}, unless otherwise stated. We slightly abuse the notation of 𝒟 to represent both the set and the distribution. Some mathematical notations are stated as follows. [N] denotes {1,2, ..., N} and . denotes the ℓ_2-norm. We use 𝒲⊆ℝ^d and Λ to represent the parameter spaces of the model w and an additional training parameter λ respectively. 
§.§ Group Fairness Metrics To evaluate the group fairness of predictions generated by deep learning models, we can employ various measures based on how likely the model can predict a particular outcome for each group. Demographic parity (DP) <cit.> is commonly used for assessing the fairness of the model for binary sensitive attributes based on the 80% rule from <cit.>. Given a validation dataset 𝒟_val, we can partition it according to the sensitive attributes 0 and 1 as 𝒟_val,0 and 𝒟_val,1 respectively. Then, the empirical form of DP for binary classification tasks is defined as |PPR_𝒟_val,0 - PPR_𝒟_val,1|, where PPR_D is the ratio of positive predictions to all samples in D. On the other hand, equal opportunity (EO) [6] measures the absolute difference in true positive rates (sensitivities) between two protected groups. While DP takes into account the inherent biases in the whole dataset, EO only considers biases originating from the positive samples. Ideally, when DP or EO equals zero, the model is completely unbiased or fair. In multi-label classification tasks, EO is defined as the worst EO on a particular label, and similarly for DP. §.§ Problem Setup In the typical FL setting with N clients and one server, the goal is to train a global deep learning model f_w parameterized with w ∈𝒲 on each client dataset 𝒟_i (i ∈ [N]) with privacy guarantee. Clients receive the global model from the server (broadcasting phase) and train the model on their own dataset (local training phase). After that, the server collects the updated models from each participating client and aggregates them to get an updated model (aggregation phase). Similarly, the additional parameter λ∈Λ (e.g. the dual variable) that aids the training can be exchanged between the server and each client, and processed on the client during local training and on the server during aggregation. This process is repeated until convergence or a specified communication round. The explicit formulation for the true local risk function represented by a loss function l(f_w(x),y) = l(x,y;w) and a regularization function g is given by, F_i(w,λ):= 𝔼_(x_j, y_j) ∼𝒟_i l(x_j,y_j;w) + g(x_j,y_j;λ, w), and the corresponding empirical risk function is given by, F_i,S(w,λ):= 1/|𝒟_i|∑_(x_j,y_j) ∼𝒟_il(x_j,y_j;w) + g(x_j,y_j;λ, w). We define the global true risk function as F(w, λ) = ∑_i=1^N p_i F_i(w, λ), where p_i is the client coefficient with ∑_i=1^N p_i = 1 and p_i ∈ [0,1], and the corresponding global empirical risk function as F_S(w, λ) = ∑_i=1^N p_i F_i,S(w, λ) . In FedAvg, g(x_j,y_j;λ, w) = 0. In contrast, formulations using regularization-based algorithm such as FedProx <cit.> or FedMoon <cit.> have a non-zero g. § FAIRFEDAVGALM We first introduce the problem formulation for FL with group fairness constraints. Subsequently, we describe the proposed algorithm to achieve the objective. Lastly, we offer the upper bound for the standard deviation of the noise in the Gaussian mechanism on model updates to ensure LDP, and prove the convergence rate of the proposed algorithm with LDP. §.§ Problem Formulation The goal of this work is to ensure group fairness of FL-trained models. We tackle the problem by enforcing fairness during the training. For this purpose, we develop a constrained optimization with relaxation. Specifically, the local training aims to minimize the local risk function while satisfying the equality constraint based on the empirical DP metric. 
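For concreteness, the empirical DP and EO defined above can be computed directly from binary predictions; a small sketch (the arrays are hypothetical numpy vectors of equal length):

```python
import numpy as np

def demographic_parity(y_pred, s):
    """|PPR_{s=0} - PPR_{s=1}| for binary predictions y_pred and sensitive attribute s."""
    ppr0 = y_pred[s == 0].mean()   # positive prediction rate in group s = 0
    ppr1 = y_pred[s == 1].mean()   # positive prediction rate in group s = 1
    return abs(ppr0 - ppr1)

def equal_opportunity(y_pred, y_true, s):
    """Absolute difference in true positive rates between the two groups."""
    tpr0 = y_pred[(s == 0) & (y_true == 1)].mean()
    tpr1 = y_pred[(s == 1) & (y_true == 1)].mean()
    return abs(tpr0 - tpr1)
```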
Since the empirical DP is not a differentiable function, we resort to a formulation based on the loss function. Specifically, given 𝒟^s_0 as the population dataset with s_0, we consider an equality constraint μ̂^s_0_w = μ̂^s_1_w, where μ̂^s_0_w = 1/|𝒟^s_0|∑_(x_j,y_j) ∈𝒟^s_0 l(x_j,y_j;w) and μ̂^s_1_w is defined similarly. We write the constrained optimization during local training as min_w 1/|𝒟_i|∑_(x_j,y_j) ∼𝒟_i l(x_j,y_j;w) s.t. μ̂^s_0_w = μ̂^s_1_w. We use a technique similar to the augmented Lagrangian approach to relax the constraint. The relaxation provides more freedom for the optimization algorithm to find solutions that may not satisfy all the constraints strictly, but rather approximate them within an acceptable range. The Lagrangian of the problem is rewritten with an additional squared penalty term on μ̂^s_0_w - μ̂^s_1_w controlled by a penalty coefficient β, that is, L(w,λ) = 1/|𝒟_i|∑_(x_j,y_j) ∼𝒟_i l(x_j,y_j;w) + λ(μ̂^s_0_w - μ̂^s_1_w) + β/2 (μ̂^s_0_w - μ̂^s_1_w)^2. After that, we solve the following min-max problem instead, min_w max_λ L(w,λ). In the conventional augmented Lagrangian method <cit.>, at each iteration, another sub-iteration is performed to find an approximate solution such that the gradient of the objective is close to zero. Although the original augmented Lagrangian method requires the gradient at the approximate minimizer to be bounded by a tolerance sequence approaching zero, we relax this condition by only requiring bounded gradients, as will be stated in Assumption <ref> later. Inspired by the augmented Lagrangian method, we can write g(x_j,y_j;λ, w) = λ(μ̂^s_0_w - μ̂^s_1_w) + β/2 (μ̂^s_0_w - μ̂^s_1_w)^2 as the regularization term for the proposed method, with which sub-iterations are performed by clients during the local training. Hence, we can formulate the objective of the local training as min_w max_λ F_i,S(w,λ). §.§ Algorithm We propose a fair FL algorithm that extends FedAvg based on the augmented Lagrangian method, dubbed FairFedAvgALM. We assume that each client performs the same number of local iterations (otherwise we need to use the correction term from <cit.>) to provide more flexibility in the experiment section. The proposed algorithm is shown in Algorithm <ref>. We outline the changes in comparison with FedAvg. The core of the algorithm is SGDA, as opposed to FedAvg, in which SGD is used instead. During local training, the i-th client computes the stochastic gradient ∇_w L^(t,k)_i at communication round t and local iteration k from their batch samples ℬ sampled from their local distribution 𝒟_i as ∇_w L_i^(t,k) = ∇_w (1/|ℬ|∑_(x_j,y_j) ∈ℬ l(x_j,y_j;w_i^(t,k-1)) + λ(μ̂^s_0_w_i^(t,k-1) - μ̂^s_1_w_i^(t,k-1)) + β/2 (μ̂^s_0_w_i^(t,k-1) - μ̂^s_1_w_i^(t,k-1))^2). At the end of the local iterations, each client updates λ with a gradient ascent step, λ_i,t←λ_t-1 + η_λ, t∇_λ L_i^(t,E). Before sending the updates to the server, each client adds Gaussian noise to them to ensure LDP. After the server receives the updates from the clients, it aggregates both w and λ following FedAvg. It will be shown in Section <ref> that using this heuristic for λ allows convergence at an acceptable rate.
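To make the local phase concrete, the following PyTorch-style sketch implements one client's loop for a binary task with a scalar dual variable. It is an illustrative simplification of Algorithm <ref>, not the reference implementation: the optimizer choice, the per-batch estimation of μ̂^s_0_w and μ̂^s_1_w, and the handling of batches that miss one group are assumptions made here for brevity.

```python
import torch
import torch.nn.functional as F

def local_training(model, loader, lam, beta, eta_w, eta_lam, epochs):
    """One client's local phase of FairFedAvgALM (illustrative sketch).

    loader yields (x, y, s) with binary labels y and sensitive attribute s;
    lam is the scalar dual variable broadcast by the server.
    """
    opt = torch.optim.SGD(model.parameters(), lr=eta_w)
    last_gap = 0.0
    for _ in range(epochs):
        for x, y, s in loader:
            logits = model(x).squeeze(-1)
            losses = F.binary_cross_entropy_with_logits(
                logits, y.float(), reduction='none')
            # Per-batch estimates of the group-conditional mean losses.
            mu0 = losses[s == 0].mean() if (s == 0).any() else losses.mean()
            mu1 = losses[s == 1].mean() if (s == 1).any() else losses.mean()
            gap = mu0 - mu1
            # Augmented Lagrangian objective: mean loss + lam*gap + (beta/2)*gap^2.
            objective = losses.mean() + lam * gap + 0.5 * beta * gap ** 2
            opt.zero_grad()
            objective.backward()          # descent step on w
            opt.step()
            last_gap = float(gap.detach())
    # Ascent step on the dual variable (the objective's gradient w.r.t. lam is the gap).
    lam_new = lam + eta_lam * last_gap
    return model.state_dict(), lam_new
```

On the server side, the returned model states and λ values would then be averaged with the FedAvg weights p_i, optionally after the client adds Gaussian noise to both updates for LDP.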
§.§ Theoretical Analysis In this section, we introduce the formal analysis of LDP and the convergence rate of FairFedAvgALM. The proof of the convergence rate extends the previous theoretical convergence results of FedAvg from <cit.> and includes the aggregation of λ as well as LDP. The formal definition of differential privacy is given below. A randomized algorithm 𝒜 satisfies (ϵ, δ)-differential privacy if for any two neighboring joint datasets 𝒟 and 𝒟' differing by one sample, and for any subset S of the range of 𝒜, the following holds: ℙ[𝒜(𝒟) ∈ S] ≤ e^ϵℙ[𝒜(𝒟') ∈ S] + δ. In LDP, each client has their own privacy budget (ϵ_i, δ_i). A common method to achieve LDP is to use the Gaussian mechanism, by which Gaussian noise with zero mean and standard deviation σ_i is added to the model updates. The privacy budget (ϵ_i, δ_i)-LDP and σ_i are related through the sensitivity of the update, which is defined as Δ l = max_𝒟, 𝒟'‖ f(𝒟) - f(𝒟')‖, where f represents the multivalued function that depends on the dataset (e.g. the local model updates). Before presenting the result, we show the sensitivity of both the primal and dual updates, Δ l_p and Δ l_d respectively, in the following lemma. Assume that the loss function l is bounded by l_max (|l| ≤ l_max), the dual variable λ is bounded by λ_max (|λ| ≤λ_max), and the gradient of the loss function is bounded by D (‖∇_w l(z)‖≤ D, ∀ z∈𝒟). The sensitivities of the primal and dual updates are given by Δ l_p(t) ≤ 2 η_w,tD/|ℬ| + 8η_w,tλ_maxD/|ℬ| + 8 η_w,tβ D l_max(5|ℬ| - 2)/(|ℬ| - 2)^2 and Δ l_d(t) ≤ 4η_λ,t l_max/|ℬ|. See Section A.1 of the supplementary materials. The sensitivities of the updates above are sufficient to estimate the upper bound for the standard deviation of the noise <cit.>, which is explicitly stated in the following theorem. Given that the total number of communication rounds is T, the upper bounds of σ_i,λ and σ_i,w to achieve (ϵ_i, δ_i)-LDP for the i-th client with constant learning rates, η_w,t = η_w and η_λ,t = η_λ, are σ_i,w≤Δ l_p √(2T log (1/δ_i))/ϵ_i and σ_i,λ≤Δ l_d √(2T log (1/δ_i))/ϵ_i. The bound gives a rough estimate of the required noise levels to achieve the desired level of privacy. We provide the upper bound of the convergence rate based on the empirical primal risk function R_S(w) := max_λ L(w, λ). Before presenting the result, we list several definitions and key assumptions, which are stated below. The function h: 𝒲→ℝ is Lipschitz continuous if there exists G > 0 such that, for any w, w' ∈𝒲 and ξ∈𝒟, |h(w; ξ) - h(w'; ξ)| ≤ G‖ w - w'‖. Define a function f: 𝒲×Λ→ℝ. f(w,·) is ρ-strongly convex if for all w ∈𝒲 and λ, λ' ∈Λ, f(w,λ) ≥ f(w, λ') + ⟨∇_λ f(w,λ'),λ - λ'⟩ + ρ/2‖λ - λ'‖^2. f(w,·) is ρ-strongly concave if -f(w,·) is ρ-strongly convex. For randomly drawn batch samples ξ and for all i∈[N], the gradients ∇_w F_i,S(w, λ; ξ) and ∇_λ F_i,S(w, λ; ξ) have bounded variances B_w and B_λ, respectively. If g_i,w(w, λ|ξ) := ∇_w F_i,S(w,λ;ξ) is the local estimator of the gradient, 𝔼_ξ [‖ g_i,w(w, λ|ξ) - ∇_w F_i,S(w, λ)‖^2] ≤ B_w^2, and the case for λ is similar but bounded by B_λ^2. The function f is L-smooth if it is continuously differentiable and there exists a constant L > 0 such that for any w, w' ∈𝒲, λ, λ' ∈ℝ, and ξ∈𝒟, ‖(∇_w f(w, λ; ξ) - ∇_w f(w', λ'; ξ), ∇_λ f(w, λ; ξ) - ∇_λ f(w', λ'; ξ))‖≤ L ‖(w - w', λ - λ')‖. For all i∈[N], the stochastic gradient of F_i,S(w,λ) is bounded, that is, for all w ∈𝒲, λ∈Λ and ξ∈𝒟, we have ‖∇_w f(w, λ; ξ)‖≤ D. In nonconvex analysis, it is not uncommon to impose the Polyak-Łojasiewicz (PL) condition on the objective function. h(w) satisfies the PL condition if there exists a constant μ > 0 such that, for any w ∈𝒲, (1/2)‖∇ h(w)‖^2 ≥μ(h(w) - min_w'∈𝒲 h(w')). For simplicity, we assume full participation and the same number of local iterations for each client. The minimum empirical primal risk is R^*_S = min_w R_S(w).
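As a concrete illustration of the noise calibration in the theorem above, the following sketch adds Gaussian noise to a model update given a per-round sensitivity; the sensitivity, privacy budget, and number of rounds below are illustrative numbers, not values derived from Lemma <ref>.

```python
import math
import torch

def ldp_noise_std(sensitivity, epsilon, delta, T):
    """sigma <= Delta_l * sqrt(2 T log(1/delta)) / epsilon, as in the theorem above."""
    return sensitivity * math.sqrt(2.0 * T * math.log(1.0 / delta)) / epsilon

def privatize(update, sigma):
    """Add zero-mean Gaussian noise to a model update (a dict of tensors)."""
    return {name: w + sigma * torch.randn_like(w) for name, w in update.items()}

# Illustrative numbers: sensitivity 0.05, (epsilon, delta) = (8, 1e-5), T = 100 rounds.
sigma_w = ldp_noise_std(0.05, 8.0, 1e-5, 100)
```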
The upper bound of the convergence rate is given by the following theorem. Define κ = L/μ. Let η_w,t = 2/(μ t) and η_λ, t = 16 κ^2/(μ t^2/3). Given that Assumption <ref> and Assumption <ref> hold, each F_i,S(w,λ) is L-smooth, each F_i,S(·,λ) satisfies the μ-PL condition, and each F_i,S(w,·) is ρ-strongly concave, we have 𝔼 R_S(w_T+1) - R_S^* = 𝒪((Γ + B_w^2 + dσ_w^2 + B_λ^2 + dσ_λ^2)/T^2/3) after T communication rounds, where Γ := F_S^* - ∑_i=1^N p_i F_i,S^*, F_S^* := min_w max_λ F_S(w,λ), and F_i,S^* := min_w max_λ F_i,S(w,λ). See Section A.2 of the supplementary materials. Γ quantifies the statistical heterogeneity of the FL system. In the strongly non-iid case, the saddle-point value of the global risk function might differ from the weighted sum of the saddle-point values of the local risks. Note that the convergence is slower than that of <cit.> (𝒪(1/T)) due to the minimax optimization. § EMPIRICAL RESULTS In this section, we consider the performance of FL-trained deep learning models, including the prediction accuracy and fairness performance (DP and EO), on the CelebA and ImSitu datasets. We also provide results with different levels of statistical heterogeneity as well as with the Gaussian mechanism for LDP. Lastly, we provide a qualitative analysis of the FL-trained models using the Grad-CAM <cit.> visualizer to illustrate how the enhanced fairness performance is achieved by the proposed algorithm. Implementation. The learning rate of λ is decreased by a factor of b every round. Moreover, the penalty term β also increases by a factor of b every round. We also use step learning rate decay to reduce the fluctuations in performance as the training progresses. We synthetically create the data heterogeneity by introducing label skews with balanced samples, which can be implemented using a Dirichlet distribution parameterized by α on the labels <cit.>. Each experiment is repeated three times to capture different realizations. The code is available at https://github.com/gwmdunda/FairFedAvgALM. Baselines. The following are the baselines used for the comparison study. * FedAvg. It is the universal baseline in FL, which aggregates all model updates by a weighted average. * FairALM-FedAvg. This is a modified version of FairALM <cit.> adapted to FL. It aims to optimize L(w,λ) = 1/|𝒟_i|∑_(x_j,y_j) ∼𝒟_i l(x_j,y_j;w) + λ(μ̂^s_0_w - μ̂^s_1_w) + η_λ (μ̂^s_0_w + μ̂^s_1_w). The Lagrangian is utilized as the local training objective to extend the original method, which is only applicable in centralized learning. * FairFed <cit.>. The server receives the local DP metrics and, based on them and the global trend, adjusts the value of p_i adaptively before averaging the model updates. In the CelebA experiments, the DP metric is used, whereas in the ImSitu experiments, the EO metric is used. * FPFL <cit.>. It enforces fairness by solving a constrained optimization on the sample loss function F_S with two constraints, corresponding to sensitive attributes 0 and 1, which require the absolute difference between F_S and the loss evaluated on each sensitive group to be less than a tolerance threshold. Even though the authors stated that the local training can only perform one local iteration per round, we extend their method to multiple local iterations because we use a large batch size. In this experimental study, we set the threshold value to zero.
Hence, we reformulate it as a local constrained optimization with g(w,λ) = λ_0 |F'_i(w^t_i,k-1) - μ̂^s_0_w^t_i,k-1| + λ_1 |F'_i(w^t_i,k-1) - μ̂^s_1_w^t_i,k-1)| + β/2((F'_i(w^t_i,k-1) - μ̂^s_0_w^t_i,k-1)^2 + (F'_i(w^t_i,k-1) - μ̂^s_1_w^t_i,k-1)^2 ), where λ := [λ_0, λ_1 ]^⊺∈ℝ^2. §.§ CelebA Dataset Description and setup. CelebA dataset <cit.> contains over 200,000 images of celebrities, each annotated with 40 attribute labels. In this experiment, we study a binary classification task for predicting attractiveness in images with male (gender) as the sensitive attribute. Since the image size is 178 × 218, we preprocess the input image by center cropping it to 178 × 178 and then resize it to 128 × 128. The model used for the prediction is the smaller version of ResNet-18 where the number of out channels from the first to fourth layer are 64, 128, 256, and 512, respectively. Other important hyperparameters are listed in Section C.1 of the supplementary materials. The FL system consists of 10 clients participating in the training with roughly 2800 samples that are distributed from the central dataset. The test evaluation is conducted on a test dataset which has more samples than the local dataset. By default, the Dirichlet coefficient α is 1 unless stated otherwise. Baseline comparison. Table <ref> shows the performance comparison of predicting attractiveness of images in CelebA dataset. Overall, the proposed method can gain good fairness performance for both DP and EO while sacrificing the accuracy by 4%. The trade-off between accuracy and fairness can also be observed in this task. The proposed method gains 6.8 % DP and 12 % EO while sacrificing 3.5 % accuracy compared with FedAvg. It also shows the best possible reduction in DP and EO. As attractiveness is a potentially biased label, FairFedAvgALM demonstrates its effectiveness in handling group fairness under such situation. In contrast, FPFL reduces the accuracy more while decreasing both DP and EO less compared with FairFedAvgALM. FairFed does not offer any improvement in fairness. FairALM-FedAvg improves the fairness performance leniently as it slightly degrades the accuracy. Statistical heterogeneity. We compare the performance of the proposed method with FedAvg on the other two levels of data heterogeneity, α=0.2 and α=5, and tabulate the result in Table <ref>, while maintaining the same setups and hyperparameters as before. As expected, the accuracy significantly drops when the system is more statistically heterogeneous (α = 0.2), and increases slightly in accuracy when the system is more homogeneous (α=5). The proposed method can improve fairness of the model with different levels of heterogeneity. Interestingly, when the system is statistically heterogeneous, the trained model is fairer compared with the same model trained on a more statistically homogeneous system. Scalability with the number of clients. We study the effect of scaling up the number of clients on the performance of two algorithms: FedAvg and FairFedAvgALM, while maintaining the same amount of samples per client, the same setups, and hyperparameters. As shown in Table <ref>, the accuracy decreases as the number of clients increases. On the other hand, both fairness metrics improve as the number of clients increases for both algorithms. The proposed algorithm can still maintain better fairness performance compared with FedAvg across different numbers of clients. Gaussian mechanism for LDP. 
We extend the previous setups from the baseline comparison by adopting the Gaussian mechanism. We set σ_w = σ_λ, perform a grid search over the set {0.1, 0.01, 0.001, 0.0001}, and find that beyond 0.001 the trained model may diverge. Comparing Table <ref> and Table <ref>, we see that adding Gaussian noise to the model updates degrades both the accuracy and the fairness performance. Firstly, the accuracy drops by 3-4 % for both FedAvg and FairFedAvgALM. Furthermore, the fairness aspect is heavily impacted for FedAvg, whose DP gap increases by almost 7 %, while that of the proposed method increases by only 1 %. Interestingly, EO drops by 2 % with both methods, but with higher variance. Qualitative analysis. We study the behavior of the models trained by FedAvg and FairFedAvgALM through the Grad-CAM visualization. From Figure <ref>, we can empirically observe how FedAvg and FairFedAvgALM predict based on the input images. In general, FairFedAvgALM captures smaller regions of the face than FedAvg. For example, in the second image, FedAvg captures the eye and the forehead region to make a prediction, whereas FairFedAvgALM uses only the forehead information. Furthermore, FairFedAvgALM avoids regions that implicitly encode gender information. For instance, in the fourth image, FedAvg captures a chubby cheek, which is often associated with women, while FairFedAvgALM captures the lower hair, which is more gender-agnostic. §.§ ImSitu Dataset Description and setup. The ImSitu dataset comprises more than 200,000 images capturing everyday events, with each image annotated with a verb and a corresponding set of nouns. In our study, we employ ResNet-18 <cit.>, which is pre-trained on the ImageNet (ILSVRC) dataset. The task is to predict the activity of each image from 211 possible labels. The verb label and gender label of each image were filtered according to the existence of the gender attribute and annotated, following the methodology proposed by <cit.>. Prior to inputting the image into the model, we resize it to 256× 256 and randomly crop a 224 × 224 region. Other important hyperparameters are listed in Section C.2 of the supplementary materials. The FL system in question is composed of four clients. The testing of the final model is conducted on unused samples of the clients. By default, the Dirichlet coefficient α is 2 unless stated otherwise. Baseline comparison. The performance comparison between the proposed method and the baselines is shown in Table <ref>. Because the empirical DP becomes insensitive as the number of classes increases, we consider the performance on subtasks, each consisting of positive and negative labels. In general, the proposed method can significantly improve the fairness of the model on this more complex dataset. The proposed method achieves a 3% improvement in DP and a 6 % improvement in EO over FedAvg while reducing the accuracy by at most 6 %. Although the absolute improvement in terms of fairness seems minor, the relative improvement can reach 50 %, and the fairness improvement is consistent across the different subtasks. In the cooking-driving task, the model trained by FairFedAvgALM improves DP by 2% and EO by 4% while sacrificing roughly 3% of accuracy compared to FedAvg. In this scenario, FairFed struggles to show any improvement in fairness while degrading the accuracy. FPFL reaches better fairness performance at the cost of a larger accuracy drop. FairALM-FedAvg can improve the fairness on this task without sacrificing accuracy.
For the shaving-moisturizing task, the roughly 6 % drop in accuracy of FairFedAvgALM is compensated by a 3 % decrease in DP relative to FedAvg. FairFed offers a minor improvement in fairness performance while sacrificing little accuracy. Compared to FairFedAvgALM, FairALM-FedAvg improves DP similarly but is not as aggressive in terms of EO. Several interesting observations can be made for the assembling-hanging task. Firstly, the accuracy of FedAvg is not always superior to that of the fairness-aware algorithms. In fact, FairALM-FedAvg has the highest accuracy while offering better fairness compared to FedAvg. Secondly, FairFed performs better than FairALM-FedAvg in terms of EO. The proposed method also outperforms FPFL in DP by 0.5 % and in EO by 3 %. The proposed method still achieves the best fairness performance without sacrificing accuracy on this particular subtask. § CONCLUSION In this paper, we proposed FairFedAvgALM, an FL algorithm based on the augmented Lagrangian framework to impose group fairness constraints. The algorithm is a simple extension of FedAvg, enabling its seamless integration into typical FL systems, incurring negligible communication costs, and being compatible with LDP. We showed that the upper bound of the theoretical convergence rate of the proposed algorithm on nonconvex problems is 𝒪(1/T^2/3). We also theoretically demonstrated that adding the squared penalty term to the local objective increases the sensitivity of the primal update, which in turn increases the required noise level compared to FedAvg. Our experiments on the CelebA and ImSitu datasets suggested that FairFedAvgALM can substantially reduce the unfairness of trained FL models, with varying degrees of improvement under different levels of statistical heterogeneity, numbers of clients, and the presence of the Gaussian mechanism. The trade-off between prediction accuracy and fairness is empirically observed, and the proposed method enforces fairness more consistently than the other methods.
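As a concrete illustration of the local objective optimized by each client (cf. the definition of g(w,λ) above), the following Python sketch shows how one client-side update on the augmented Lagrangian could look in PyTorch. It is only an illustrative reading of the objective, using mini-batch estimates of the group-conditional losses; the function and variable names (local_augmented_lagrangian_step, model, loss_fn, lam, beta) are placeholders and do not come from the authors' released code, which is available at the repository linked above. Recall also that, per the implementation details above, λ and β are updated once per communication round (decayed/increased by a factor b), not inside this step.

# Minimal sketch (not the authors' implementation) of one local step on
#   F_i(w) + sum_a lam_a * |F_i(w) - mu_hat^{s_a}(w)|
#            + (beta/2) * sum_a (F_i(w) - mu_hat^{s_a}(w))^2
import torch

def local_augmented_lagrangian_step(model, loss_fn, optimizer, x, y, s, lam, beta):
    """x, y: mini-batch inputs/labels; s: binary sensitive attribute per sample;
    lam: tensor of shape (2,) with the Lagrange multipliers (held fixed here);
    beta: quadratic penalty coefficient."""
    logits = model(x)
    per_sample = loss_fn(logits, y)          # loss_fn built with reduction='none'
    overall = per_sample.mean()              # mini-batch estimate of F_i(w)

    penalty = torch.zeros((), device=overall.device)
    for a in (0, 1):                         # the two sensitive groups
        mask = (s == a)
        if mask.any():
            group = per_sample[mask].mean()  # estimate of mu_hat^{s_a}(w)
            gap = overall - group
            penalty = penalty + lam[a] * gap.abs() + 0.5 * beta * gap ** 2

    objective = overall + penalty
    optimizer.zero_grad()
    objective.backward()
    optimizer.step()
    return overall.detach(), penalty.detach()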
http://arxiv.org/abs/2307.04567v1
20230710135910
Homogenization of a reaction-diffusion problem with large nonlinear drift and Robin boundary data
[ "Vishnu Raveendran", "Ida de Bonis", "Emilio N. M. Cirillo", "Adrian Muntean" ]
math.AP
[ "math.AP", "35B27, 35B45, 35Q92, 35A01" ]
Homogenization of a reaction-diffusion problem with large nonlinear drift and Robin boundary data Vishnu Raveendran^a,*, Ida de Bonis^b, Emilio N.M. Cirillo^c, Adrian Muntean^a ^a Department of Mathematics and Computer Science, Karlstad University, Sweden ^b Dipartimento di Pianificazione, Design, Tecnologia dell'Architettura, Sapienza Università di Roma, Italy ^c Dipartimento di Scienze di Base e Applicate per l’Ingegneria, Sapienza Università di Roma, Italy [email protected] August 12, 2023 ================================================================================================== We study the periodic homogenization of a reaction-diffusion problem with large nonlinear drift and Robin boundary condition posed in an unbounded perforated domain. The nonlinear problem is associated with the hydrodynamic limit of a totally asymmetric simple exclusion process (TASEP) governing a population of interacting particles crossing a domain with obstacle. We are interested in deriving rigorously the upscaled model equations and the corresponding effective coefficients for the case when the microscopic dynamics are linked to a particular choice of characteristic length and time scales that lead to an exploding nonlinear drift. The main mathematical difficulty lies in proving the two-scale compactness and strong convergence results needed for the passage to the homogenization limit. To cope with the situation, we use the concept of two-scale compactness with drift, which is similar to the more classical two-scale compactness result but it is defined now in moving coordinates. We provide as well a strong convergence result for the corrector function, starting this way the search for the order of the convergence rate of the homogenization process for our target nonlinear drift problem. Keywords: Homogenization; reaction-diffusion equations with large nonlinear drift; two-scale convergence with drift; strong convergence in moving coordinates; effective dispersion tensors for reactive flow in porous media. MSC2020: 35B27; 35B45; 35Q92; 35A01 § INTRODUCTION Our interest lies in performing the periodic homogenization limit for a reaction-diffusion problem with large nonlinear drift and Robin boundary condition posed in an unbounded perforated domain; see the structure of our target equations (<ref>)–(<ref>).
The particular nonlinear drift problem we have in mind is associated with the hydrodynamic limit of a totally asymmetric simple exclusion process (TASEP) governing a population of interacting particles crossing a domain with obstacle <cit.>. In this framework, we aim to derive rigorously the upscaled model equations and the corresponding effective coefficients for the case when the microscopic dynamics are linked to a particular choice of characteristic length and time scales that lead to an exploding nonlinear drift. Ideally, we wish to get as well some insights into the structure of the corrector for the homogenization procedure. The question of exploding drifts is a bit unusual in the context of homogenization asymptotics as only particular scalings leading to blow-up can be handled rigorously. Hence, besides a couple or relatively recent papers [we are going to mention a few of them briefly in the following], there are not many contributions in the area. The celebrated concept of two-scale convergence with drift meant to cope with at least one particular exploding scaling has been introduced in <cit.> and later developed in <cit.> very much motivated by attempts to understand to so-called turbulent diffusion (see e.g. <cit.> where one speaks about convective microstructures). Our own motivation is aligned with statistical mechanics-motivated approaches to modeling reactive transport in porous media <cit.>. For phenomenological approaches to describing the physics of porous media, we refer the reader for instance to <cit.>. Closely related large-drift upscaling questions appear e.g. in the design of filters to improve catalysis <cit.> and in the control of microstructure for an efficient smoldering combustion in solid fuels <cit.>. The classical Taylor dispersion topic belongs to this context as well, compare <cit.>. Another class of periodic homogenization problems in the same framework arise from suitably scaled SDEs descriptions with exploding "volatilities" that lead to Fokker-Plank type counterparts with correspondingly exploding drifts (cf. e.g. <cit.>). Other relevant work related to large drift homogenization problems posed for different geometries can be found e.g. in <cit.>. This work is organized as follows: In section <ref>, we introduced our microscopic model and the geometry of the heterogeneous domain. Then we list the structural assumptions that we rely on to study the homogenization limit of our problem. In section <ref>, we show the existence and uniqueness of strong solutions to the target microscopic problem. The difficulty is here twofold: the type of nonlinearity and the unboundedness of the domain. We first study the model equations posed on o a bounded domain and then treat the case of the unbounded domain, which is where the microscopic problem needs to be formulated. To this end, we are using a suitable extension of the concept of solution, a comparison principle, jointly with a suitable monotonicity argument. We conclude this section by showing -independent energy estimates and the positivity of the solution. These estimates are key ingredients for the passage to the homogenization limit. In section <ref>, we prove our main result which is the rigorous upscaling of the microscopic problem posed in an unbounded periodically perforated domain. The next steps follow the path of the periodic homogenization technique. We first define an extension operator which preserve the energy estimates from the original problem. 
Finally, we employ the two-scale convergence with drift and related compactness results together with strong convergence-type arguments in moving coordinates. Using such results, we derive the upscaled model which is a nonlinear reaction-dispersion equation coupled with an elliptic cell problem. In section <ref>, we study the difference of solution to micro-problem to solution to macro-problem in H^1 norm with the help of a corrector function. A couple of brief remarks and some ideas for potential future work are the subject of the closing section. § THE MICROSCOPIC MODEL Let Y⊂ℝ^2 be a unit square in ℝ^2. We define the standard cell Z as Y having as inclusion an impenetrable compact object Z_0 called obstacle that is placed inside Y (i.e. Z:=Y\ Z_0). We assume ∂ Z_0 has C^2 boundary regularity and ∂ Y∩∂ Z_0=∅ . We denote ∂ Z_0 as Γ_N To give our geometry a porous media description, we define the pore skeleton to be Ω_0^:=⋃_(k_1,k_2)∈ℕ×ℕ{(Z_0+Σ_i=1^2 k_i e_i)}, where >0 and {e_1,e_2} is the orthonormal basis of ℝ^2. We can now describe the (open) total pore space and its total internal boundary via Ω^:=ℝ^2∖Ω_0^, and respectively, Γ_N^:= ⋃_(k_1,k_2)∈ℕ×ℕ{(Γ_N+Σ_i=1^2 k_i e_i)}. We denote by n_, and respectively, n_y the unit normal vectors across the interfaces Γ_N^ε, and respectively, Γ_N; they are directed outward with respect to ∂Ω_. As target system, we consider the following reaction-diffusion-convection problem ∂ u^ε/∂ t +div(-D^ε∇ u^ε+ 1/B^εP(u^ε)) =f^ε Ω^ε× (0,T), (-D^ε∇ u^ε+ 1/B^εP(u^ε))· n_ = g_N(u^) Γ_N^× (0,T), u^(0) =g Ω^. Here f^:Ω_→ℝ and g_N:ℝ→ℝ are given functions, D^(x_1,x_2):=D(x_1/, x_2/) for (x_1,x_2)∈Ω_, where D is a 2× 2 matrix and Z–periodic defined in the standard unit cell Z, B^(x_1,x_2):=B(x_1/, x_2/) where B is a 2× 1 vector with positive entries and Z–periodic, C^(x_1,x_2):=C(x_1/, x_2/) where C(·) is Z periodic function. What concerns the nonlinear drift P(·):ℝ→ℝ, we set P(u^) as P(u^)=u^(1-C^ u^), with ∫_Z BC dy=0. If C^=1, then the structure of the nonlinear drift, i.e. B^εP(u^ε), is precisely what one gets as mean field limit for a suitable TASEP process (cf. <cit.>). It is worth noting that the homogenization question has been posed already for such kind of problem (see <cit.>). The novelty here is the treatment of the exploding scaling on nonlinear drift. §.§ Structural assumptions We consider the following restrictions on data and model parameters. We summarize them in the list of assumptions <ref>–<ref>, viz. * For all η∈ℝ^2 there exists θ, θ>0 such that θη^2≤ Dη·η≤θη^2, D^∈ C_#^2,β(Z)^2× 2, where β∈ (0,1). * B∈ C_#^1,β(Z)^2, C∈ C_#^1,β(Z) satisfies divB=0 (0,T)× Z div(BC)=0 (0,T)× Z B· n_y =0 (0,T)×Γ_N. . * f^ :ℝ^2→ℝ^+ such that f^∈ C_c^2(ℝ^2) and f^2-drift(B^*)f, where the convergence is defined in the sense of Definition <ref>. * g_N∈ C^1(ℝ) satisfies -g_N(x)x <0 x≠ 0, g_N(x) ≥ g_N(y) x≥ y. Note that in order to prove uniqueness of microscopic problem we need to assume g_N(x)≤ g_N(y) x≥ y, Hence to prove the uniqueness of microscopic problem we assume g_N is a constant. At later stage using Theorem <ref> we prove that domain of g_N(·) is positive. So, condition (<ref>) is automatically satisfied if we assume g_N is positive constant. * g:ℝ^2→ℝ^+ such that g∈ C_c^∞(ℝ^2). Assumptions <ref>–<ref> are technical, but they have all a natural physical explanation. 
<ref> is linked to the choice of non-degenerating diffusion process in the underlying stochastic description, <ref> points out the incompressibility of the flow and how the flow behaves along the boundary of the perforations (the microscopic obstacles), and <ref>–<ref> are simple structural choices for the volume and boundary production terms. Both <ref> and <ref> are essential for the success of our work, while <ref>–<ref> can be replaced by other suitable options. Note that the relatively high regularity stated in <ref>–<ref> is primarily needed to ensure the well-posedness of the microscopic problem. To reach the homogenization limit and to guarantee the strong convergence of the corrector we only need minimal regularity on the data. § WELL-POSEDNESS OF THE MICROSCOPIC PROBLEM Let us introduce our concept of solution. A weak solution to problem (<ref>)–(<ref>) is a function u^∈ L^2(0,T;H^1(Ω^))∩ H^1(0,T;L^2(Ω^)) satisfying the identity ∫_Ω^∂_t u^ϕ dx+∫_Ω^ D^∇ u^∇ϕ dx -1∫_Ω^ B^ u^(1-C^ u^) ∇ϕ dx= ∫_Ω^ f^ϕ dx- ∫_Γ_N^ g_N(u^) ϕ dσ for all ϕ∈ H^1(Ω^) and a.e. t∈ (0,T) with initial condition u(0)=g. Before proceeding to any asymptotic study of type → 0, we must ensure first that this concept of solution is suitable for the problem at hand. §.§ Analysis for bounded domains, extension arguments, -independent bounds In this section, we are concerned with guaranteeing the weak solvability of the microscopic problem (<ref>)–(<ref>), i.e. we look for solutions in the sense of Definition <ref>. Relying on fundamental results from <cit.>, we first show the existence of a strong solution for the same model equations while they are formulated in a bounded domain. As next step, by using a comparison principle combined with a monotone convergence argument, we show that a particular sequence of extensions of the solution to the bounded domain problem converges to the weak solution of our original problem (<ref>)–(<ref>) posed on an unbounded domain. Consider the following nonlinear reaction diffusion convection equation u_t-a_ij(t,x,u)u_x_ix_j+b(t,x,u,u_x) =0 (0,T)×Ω a_ij(t,x,u)u_x_i· n = ψ (0,T)×∂Ω u(0,x)=ψ _0(x,u) with the assumptions (P1) νξ^2 ≤ a_ij(t,x,u)ξ_iξ_j≤μξ^2. (P2) If |y|≤ M for some constant M>0, ∂ a_ij(t,x,y)/∂ y, ∂ a_ij(t,x,y)/∂ x, ψ, ∂ψ(t,x,y)/∂ x,∂ψ(t,x,y)/∂ y, ∂^2 a_ij(t,x,y)/∂^2y, ∂^2 a_ij(t,x,y)/∂ y∂ t, ∂^2 a_ij(t,x,y)/∂ x∂ t are uniformly bounded in their respective domain. (P3) For fixed x,t,y and arbitrary p, there exist μ>0 |b(t,x,y,p)|≤μ (1+p^2). (P4) There exist C_0, C_1,C_2>0 such that for (t,x)∈ (0,T)×Ω, b(t,x,y,p) satisfies the condition -yb(t,x,y,0)≤ C_0y^2+C_1, |b_p|(1+|p|)+|b_y|+|b_t|≤ C_2(1+p^2). (P5) There exist C_0, C_1>0 such that for (t,x)∈ (0,T)×∂Ω, ψ(t,x,y) satisfies the condition -yψ(t,x,y)< 0 (t,x)∈ (0,T) ×∂Ω |y|>0. (P6) For |y|≤ M, where M is a constant, a_ij(t,x,y), b(t,x,y,p) and ψ(t,x,y) are continuous in their arguments. (P7) For |y|,|p|≤ M a_ij_x(t,x,y), b(t,x,y,p) are Hölder continuous in variable x with exponent β ψ_x(t,x,y) is Hölder continuos in x and t with exponent β and β/2 respectively. (P8) ∂Ω is of class H^2+β. (P9) ψ(0,x,0)=0. Then there exist unique solution u∈ H^2+β,1+β/2(Ω_T) satisfying (<ref>)–(<ref>) and u_L^∞((0,T)×Ω)≤ M, where M is dependent on C_0,C_1 and ψ_0 and independent of |Ω|. Note that the formulation of Theorem <ref> is done using notations similar to the ones used in the monograph <cit.>. The proofs of the existence and uniqueness properties follow directly from Theorem 7.4 in <cit.>. 
The proof of the upper bound (<ref>) is a consequence of combining both Theorem 7.2 and Theorem 7.3 of <cit.>. Assume <ref>, <ref> and <ref> to hold true. Let E⊂Ω^ be a bounded domain and v∈ L^2(0,T;H^1(E))∩ H^1(0,T;L^2(E))∩ L^∞ ((0,T)× E), u∈ L^2(0,T;H_E)∩ H^1(0,T;L^2(E))∩ L^∞ ((0,T)× E), where H_E:={u∈ H^1(E): u=0 ∂ E\ (Γ^∩∂ E)}. If u,v satisfy the following identities ∫_E∂_t u ϕ dx+∫_E D^∇ u ∇ϕ dx - 1/ε∫_E B^ P(u) ∇ϕ dx= ∫_E f^ϕ dx-∫_Γ_N^∩∂ E g_N(u) ϕ dσ, ∫_E∂_t v ψ dx+∫_E D^∇ v ∇ψ dx - 1/ε∫_E B^ P(v) ∇ψ dx= ∫_E f^ψ dx-∫_Γ_N^∩∂ E g_N(v) ψ dσ, and u(0) =v(0), u ≤ v ∂ E\ (Γ^∩∂ E). for all ϕ∈ H_E, ψ∈ H^1(E), then it holds u≤ v E. Let (v-u)=(v-u)^+-(v-u)^-, where (v-u)^- :=max{0,-(v-u)}, (v-u)^+ :=max{0,(v-u)}. Since u≤ v ∂ E\(Γ^∩∂ E), (v-u)^-=0 at ∂ E\(Γ^∩∂ E). In (<ref>) and (<ref>) we choose the test functions ϕ, ψ to be both (v-u)^-. Subtracting the corresponding results form each other yields the expressions: ∫_E∂_t (v-u) (v-u)^- dx+∫_E D^∇ (v-u) ∇ (v-u)^- dx - 1/ε∫_E B^ (P(v)-P(u)) ∇ (v-u)^- dx= ∫_Γ_N^∩∂ E (g_N(u)-g_N(v)) (v-u)^- dσ, ∫_E∂_t (v-u)^- (v-u)^- dx+∫_E D^∇ (v-u)^- ∇ (v-u)^- dx = 1/∫_E B^ (P(v)-P(u)) ∇ (v-u)^- dx +∫_Γ_N^∩∂ E (g_N(v)-g_N(u)) (v-u)^- dσ. For δ>0, we have 1/∫_E B^ (P(v)-P(u)) ∇ (v-u)^- dx =1/∫_E B^ (v-u) ∇ (v-u)^- dx -1/∫_E B^ C^ (v-u)(v+u) ∇ (v-u)^- dx ≤ C() ∫_E|(v-u)^-||∇ (v-u)^-| +C(,||u||_∞,||v||_∞) ∫_E|(v-u)^-||∇ (v-u)^-| ≤ C(δ,)∫_E|(v-u)^-|^2 dx+δ∫_E|∇(v-u)^-|^2 dx. By (<ref>) and (<ref>), we get ∫_Γ_N^∩∂ E (g_N(v)-g_N(u)) (v-u)^- dσ≤ 0. Now, using <ref>, (<ref>) and (<ref>) on (<ref>), for a δ>0 small enough, we obtain 1/2d/dt(v-u)^-^2+(θ-δ)∫_E|∇(v-u)^-|^2 dx≤ C(δ,)∫_E|(v-u)^-|^2 dx. Applying Grönwall's inequality to (<ref>), we conclude (v-u)^-=0. Hence, we are led to u≤ v for a.e x∈ E and all t∈ [0,T]. Assume <ref>–<ref> hold. Then for every fixed >0, there exists a unique weak solution u^∈ L^2(0,T;H^1(Ω^))∩ H^1(0,T;L^2(Ω^)) to the problem (<ref>)–(<ref>) in the sense of Definition <ref> We begin the proof with defining a boundary value problem similar to (<ref>)–(<ref>) on a bounded domain. Then we study the existence, uniqueness and global boundedness property of the bounded domain problem. Later using some monotonicity argument we derive unique weak solution to (<ref>)–(<ref>). We define Ω^_R:=(- R, R)^2∩Ω^, Γ_NR^:=∂Ω^_R\∂ (- R, R)^2, where R∈ℕ. Let f_R^, g__R^, D^_R, B^_R be restriction of f^, g_N^, D^, B^ on Ω^_R respectively. Let us consider the following boundary value problem ∂ u_R^ε/∂ t +div(-D^ε_R ∇ u_R^ε+ 1/B_R^εP(u_R^ε)) =f_R^ε Ω_R^ε× (0,T), (-D^ε∇ u_R^ε+ 1/B_R^εP(u_R^ε))· n_ = g_R^ Γ_NR^× (0,T), u_R^ =0 (∂Ω\Γ_NR^) × (0,T), u_R^(0) =g Ω^_R. In problem (<ref>)–(<ref>), we choose a_ij(t,x,u)=D^_R_ij(x), b(t,x,u,u_x)=B·∇ (u^_R(1-(u^_R)))-f^_R, ψ (t,x,u)= g_N(u^), ψ_0(x)=g(x). Using assumption <ref>, we find that (<ref>) is equivalent to (<ref>). Using assumption <ref> we obtain D^_R_ij(x) satisfies condition (P1),(P2),(P6),(P7). We use assumption <ref> and we get ∇ B (u^_R(1-u^_R))= B ∇ u^_R-B2u^_R∇ (u^_R). By Young's inequality and by (<ref>) we verify that the the term (<ref>) satisfies (P3),(P4),(P6) and (P7). From <ref> and using <ref> we have that g_N satisfies (P2), (P5) and (P9). Now, by Theorem <ref> we deduce that there exists a unique u^_R∈ H^2+β,1+β/2(Ω) which solves (<ref>)–(<ref>) and u^_R_L^∞((0,T)×Ω_R^) ≤ M, where M is a constant independent of R. 
Using arguments similar to those involved in the proof of Theorem <ref> and jointly with <ref>, we obtain max_t∈ (0,T)u_R^_L^2(Ω_R^)≤ C, u_R^_L^2(0,T;H^1(Ω_R^))≤ C, where C is independent of R. Following similar steps of Theorem <ref>, we obtain 0≤ u_R^, for a.e. x∈Ω^_R and all t∈ (0,T). Now, we define an extension of the bounded domain problem in unbounded domain. In particular we set u_R^= u_R^ x∈Ω_R^ 0 x∈Ω^\Ω_R^ . From (<ref>)–(<ref>) and (<ref>), we get u^_R_L^∞((0,T)×Ω_R^)≤ M, max_t∈ (0,T)u_R^_L^2(Ω_R^)≤ C, u_R^_L^2(0,T;H^1(Ω_R^))≤ C. Using (<ref>)–(<ref>), Lemma <ref> and (<ref>) together with Monotone Convergence Theorem, Banach-Alaoglu theorem (see <cit.>) there exists u^∈ L^2(0,T;H^1(Ω^))∩ H^1(0,T;L^2(Ω^)) such that u_R^→ u^ L^2((0,T)×Ω^), ∇u_R^⇀∇ u^ L^2(0,T;L^2(Ω^)), ∂ (u_R^)/∂ t⇀∂ (u^)/∂ t L^2(0,T;L^2(Ω^)), P(u_R^) ⇀ P( u^) L^2(0,T;L^2(Ω^)), g_N(u_R^) ⇀ g_N( u^) L^2(0,T;L^2(Γ_N^)). Convergences (<ref>)–(<ref>) guarantee the existence of weak solution to the problem (<ref>)–(<ref>) in the sense of Definition <ref>. To prove the uniqueness of the weak solution , we consider u^_1,u^_2∈ L^2(0,T;H^1(Ω^)) weak solutions to problem (<ref>)– (<ref>) in the sense of Definition (<ref>). Then we have ∫_Ω^(u^_1)_t ϕ dx + ∫_Ω^ D^∇ u^_1 ∇ϕ dx = 1/∫_Ω^ B^ P(u^_1)∇ϕ dx+∫_Ω^f^ϕ dx- ∫_Γ_N^ g_N(u^_1) ϕ dσ, ∫_Ω^(u^_2)_t ϕ dx + ∫_Ω^ D^∇ u^_2 ∇ϕ dx = 1/∫_Ω^ B^ P(u^_2)∇ϕ dx+∫_Ω^f^ϕ dx- ∫_Γ_N^ g_N(u^_2) ϕ dσ, for all ϕ∈ L^2(0,T;H). We choose the test function ϕ:= u^_1-u^_2 for (<ref>) and (<ref>) and substracting (<ref>) to (<ref>) we obtain ∫_Ω^ϕ_t ϕ dx + ∫_Ω^D^∇ϕ∇ϕ dx = 1/∫_Ω^ B^ (P(u_1^)-P(u_2^))∇ϕ dx+∫_Γ_N^( g_N(u_2^)-g_N(u_1^) )ϕ dσ. From <ref>, we get ∫_Γ_N^( g_N(u^_2)-g_N(u^_1) )ϕ dσ≤0. Using the Mean Value Theorem, (<ref>), and the structure of P(·) we get 1/∫_Ω^ B^ (P(u^_1)-P(u^_2))∇ϕ dx≤ C()∫_Ω^ϕ∇ϕ dx, where C is constant determined by C=sup_r∈ [0,M] P'(r), while M is the constant from (<ref>). Using the ellipticity condition (<ref>) together with (<ref>) and (<ref>), we obtain d/dtϕ^2_L^2(_Ω^)+θ∫_Ω^ |∇ϕ|^2 dx ≤ C∫_Ω^ϕ∇ϕ dx. Using Young's inequality and choosing δ small enough, we get d/dtϕ^2_L^2(_Ω^)+(θ-δ) ∫_Ω^ |∇ϕ|^2 dx ≤ C(δ)∫_E ϕ^2 dx, d/dtϕ^2_L^2(_Ω^)≤ C(δ)∫_Ω^ϕ^2 dx. Now, using Grönwall's inequalty, we get ϕ=0 a.e. in Ω^ for all times t∈ (0,T). Hence the uniqueness of the weak solution to problem (<ref>)–(<ref>) follows. Note that in Theorem <ref> we are considering Neumann boundary conditions. In our problem (<ref>)–(<ref>), we are dealing with a Dirichlet boundary condition imposed across a part of the boundary. Since it is a homogeneous condition and also the sets (∂Ω\Γ_NR^) and Γ_NR^ are disjoint, we can adapt the proof of Theorem <ref> to our setup (<ref>)–(<ref>). §.§ Energy estimates Assume <ref>–<ref> hold. There exists a constant C>0 independent of ε such that the following energy estimate hold u^ε_L^2(0,T;H^1(Ω^))≤ C, u^ε_L^∞(0,T;L^2(Ω^)≤ C, u^ε_L^∞((0,T)×Ω^)≤ C, u^ε_L^2(0,T;L^2(Γ_N^))≤ C. We prove the estimate (<ref>) by choosing the test function ϕ= u^ in the weak formulation (<ref>), we get 1/2d/dtu^_L^2(Ω^)+θ∫_Ω^|∇ u^|^2 dx≤1/∫_Ω^B^ P(u^)∇ u^ dx + ∫_Ω^f^ u^ dx-∫_Γ_N^ g_N(u^) u^ dσ. We define P(u^):=(u^)^2/2-C^(u^)^3/3, then we have ∇P(u^) =P(u^) ∇ u^ . Now, using <ref>, (<ref>), we have ∫_Ω^B^ P(u^)∇ u^ dx = ∫_Ω^B^∇P(u^) dx =-∫_Ω^∇· B^P̃(u^) dx + ∫_Γ_N^B^· n P(u^) dx =0. Using (<ref>) and Grönwall's inequality from (<ref>) we get the required results, i.e. (<ref>) and (<ref>). The estimate (<ref>) follows directly from (<ref>). 
Since Γ_N^ is uniformly Lipschitz, we use the trace result stated in Theorem 15.23 of <cit.> and we obtain (<ref>). Assume <ref>–<ref> hold. Let u^ be a weak solution of (<ref>)–(<ref>). Then u^≥ 0. We choose as test function ϕ=(u^)^- in (<ref>), where u^=(u^)^+-(u^)^- with u^-=max{0,-u^} and u^+=max{0,u^} and we get ∫_Ω^∂_t u^ (u^)^- dx+∫_Ω^ D^∇ u^∇ (u^)^- dx - 1/∫_Ω^ B^ P(u^) ∇ (u^)^- dx = ∫_Ω^ f^ (u^)^- dx-∫_Γ_N^ g_N(u^) (u^)^- dσ. Using (A1), we have 1/2d/dt||(u^)^-||^2_L^2(Ω^)+θ∫_Ω^ | ∇ (u^)^- |^2 dx + 1/∫_Ω^ B^ P((u^)) ∇ (u^)^- dx ≤ -∫_Ω^ f^ (u^)^- dx+∫_Γ_N^ g_N((u^)^-) (u^)^- dσ. After an integration by parts with respect to the space variable and <ref> , we get 1/∫_Ω^B^ P(-(u^)^-)∇ (u^)^- dx =-1/∫_Ω^B^( (u^)^-(1+C^(u^)^-)∇ (u^)^-) dx =-1/∫_Ω^B^∇1/2((u^)^-)^2-B^C^1/3∇((u^)^-)^3 dx =1/∫_Ω^∇ B^1/2((u^)^-)^2+∇ (B^C^)1/3((u^)^-)^3 dx +1/∫_Γ_N^B^· n (1/2((u^)^-)^2+C^1/3((u^)^-)^3) dσ =0. Since ∫_Γ_N^ g_N(u^) (-u^)^- dσ=∫_Γ_N^ g_N((-u^)^-) (-u^)^- dσ, by (<ref>) and <ref> we get -∫_Ω^ f^ (u^)^- dx+ ∫_Γ_N^ g_N((-u^)^-) (-u^)^- dσ≤ 0. Combining (<ref>), (<ref>) and (<ref>), we obtain 1/2d/dt||(u^)^-||^2_L^2(Ω^)≤ 0 As a direct application of Grönwall's inequality (see Appendix B of <cit.>) on (<ref>), we conclude (u^)^-^2_L^2(Ω^)≤ 0. Hence u^≥ 0 a.e. in Ω^ and for all t∈ (0,T). § DERIVATION OF THE UPSCALED MODEL In this section we pass → 0 to the homogenization limit and derive the corresponding upscaled equation and effective coefficients. §.§ Extension to fixed domain If u^∈ H^1(Ω^)∩ L^∞(Ω^), where Ω^ defined as in (<ref>). Then there exist a constant C>0 and an extension of u^ to H^1(ℝ^2) denoted by u^ such that u^|_Ω^ =u^, u^_L^2(ℝ^2) ≤ C u^_L^2(Ω^), ∇u^_L^2(ℝ^2) ≤ C ∇u^_L^2(Ω^), u^_L^∞(ℝ^2) ≤ C u^_L^∞(Ω^). Define Ω_R^ :=((- R,+ R)× (- R,+ R))∩Ω^, where R∈ℕ, clearly Ω_R^∩Ω_0^ =∅. Let u_R^:=u_|_Ω_R^^. Since u^∈ H^1(Ω^) we have u_R^∈ H^1(Ω_R^). By using Theorem 2.1 of <cit.> there exist a u∈ H^1((- R,+ R)× (- R,+ R)) and C>0 which is independent of R such that u_R^|_Ω_R^ =u_R^, u_R^_L^2((- R,+ R)× (- R,+ R)) ≤ C u_R^_L^2(Ω_R^), ∇u_R^_L^2((- R,+ R)× (- R,+ R)) ≤ C ∇ u_R^_L^2(Ω_R^), u_R^_L^∞((- R,+ R)× (- R,+ R)) ≤ C u_R^_L^∞(Ω^), ≤ C u^_L^∞(Ω^). The inequalities (<ref>) and (<ref>) follows directly from Theorem 2.1 of <cit.>. The inequality (<ref>) follows from the fact that the proof of the extension operator contains a standard reflection argument, hence it preserves the L^∞ bound in the extended regions(see Theorem 9.7 of <cit.>). Now, we prove the following identity u_R+N^|_Ω_R^=u_R^ Ω_R^ for all N∈ℕ. From the definition of u_R^ and u_R+1^, we have u_R+1^|_Ω_R^=u_R^. We define the extension of u_R+1^ in such a way that, if u_R^ is the extension of u_R^, then u_R+1^|_Ω_R^=u_R^ and in (- (R+1), (R+1))^2\ (- R, R)^2, we extend u_R+1 using a reflection argument similar to Theorem 9.7 of <cit.> and <cit.>. Now inductively we get (<ref>). We define extension of u^ on ℝ^2 as u^ where u^ is defined as for any x∈ℝ^2 there exists R∈ℕ such that x∈ (- R,+ R)× (- R,+ R)) and u^(x)= u_R^(x). By identity (<ref>), u^(x) is well defined function on ℝ^2. Using Fatou's Lemma, (<ref>) and (<ref>), we have ∫_ℝ^2(u^)^2dx =∫_ℝ^2lim_R→∞χ_((- R,+ R)× (- R,+ R)) (u^)^2dx =∫_ℝ^2lim_R→∞χ_((- R,+ R)× (- R,+ R)) (u_R^)^2dx ≤lim inf_R→∞∫_ℝ^2χ_((- R,+ R)× (- R,+ R)) (u_R^)^2dx ≤ C u_R^_L^2(Ω_R^)^2 ≤ C u^_L^2(Ω^)^2. 
Similarly, we prove ∫_ℝ^2|∇u^|^2dx =∫_ℝ^2lim_R→∞χ_((- R,+ R)× (- R,+ R)) |∇u^|^2dx =∫_ℝ^2lim_R→∞χ_((- R,+ R)× (- R,+ R)) |∇u_R^|^2dx ≤lim inf_R→∞∫_ℝ^2χ_((- R,+ R)× (- R,+ R)) |∇u_R^|^2 dx ≤ C ∇u_R^_L^2(Ω_R^)^2 ≤ C ∇ u^_L^2(Ω^)^2. The inequality u^_L^∞(ℝ^2)≤ C u^_L^∞(Ω^) follows from (<ref>) and (<ref>). It should be noted that our proof of the extension Lemma <ref> relies on the fact that at the level of the standard cell Z, the obstacle does not touch the external boundary of Z. §.§ Strong convergence In this section we prove that the sequence of solutions to the microscopic problem (<ref>) strongly converges to a limit formulated in moving co-ordinates. To obtain the wanted result, we adapt to our setup similar techniques as discussed also in <cit.>,<cit.> and <cit.>. Let f∈ L^2(Z), g∈ L^2(Γ_N) be given functions. Then the boundary value problem -Δ_y v=f Z, ∇ v· n=g Γ_N, v , has a unique solution v∈ H^1_#(Z)/ℝ if and only if the compatibility condition ∫_Z f dy=∫_∂ Z g dσ_y is satisfied. The proof follows via standard argument involving Fredholm alternative. We define B^* e_i:=∫_YB(y)e_idy|Z|, Ω^(t):={x+B^*t/:x∈Ω^}, v^(t,x):=u^(t,x+B^*t). We define {e_j}, j∈ℤ^2 as orthonormal basis for L^2((0,1)^2) with compact support and C^2 differentiable. Related to e_j, we define e_j,k(x):= e_j(x-k), q_j,k(t,x):= e_j,k(x-B^*t). Note that e_j,k and q_j,k forms orthonormal basis for L^2(ℝ^2). We have v^(t,x)=∑_j∈ℕ,k∈ℤ^2v^_jk(t)e_jk(x), v^(t,x)=∑_j∈ℕ,k∈ℤ^2v^_jk(t)e_jk(x), where v^(t,x) is the extension of v^(t,x) on ℝ^2 as defined in Lemma <ref> and v^_jk(t):=∫_Ω^(t)v^(t,x)e_jk(x) dx, v^_jk(t):=∫_ℝ^2v^(t,x)e_jk(x) dx. Assume <ref>–<ref> hold. Then for all δ>0, there exist a R_δ>0 such that u^(t,x+B^*t)_L^2(Ω^(t)\Ω(R_δ,))≤δ where Ω(R_δ,):=(Ω^∩ (-R_δ,R_δ)^2). We choose a specific cutoff function ψ∈ C^∞(ℝ^2) such that ψ(x)= 0 x∈ [-1,1] 1 x∈ [-2,2]^c and 0≤ψ(x)≤ 1 for x∈ (-2,-1)∪ (1,2) . Now, for x∈ℝ^2, we define ψ_R(x):=ψ(|x|R), ψ_R^(t,x):=ψ_R(x-B^*t). Consider the weak formulation (<ref>). Integrating it over (0,t) and choosing ϕ(t,x)=ψ_R^(t,x)u^(t,x), we get ∫_0^t ∫_Ω^∂_t u^ (ψ_R^(t,x)u^) dxdt+∫_0^t∫_Ω^ D^∇ u^∇ (ψ_R^(t,x)u^) dxdt -1∫_0^t∫_Ω^ B^ u^(1-C^ u^) ∇ (ψ_R^(t,x)u^) dxdt = ∫_0^t∫_Ω^ f^ (ψ_R^(t,x)u^) dxdt -∫_0^t∫_Γ_N^ g_N(u^) (ψ_R^(t,x)u^) dσ dt. Using integration by parts with respect to time variable, we have ∫_0^t∫_Ω^∂_t u^ (ψ_R^(t,x)u^) dxdt=12B^*∫_0^t∫_Ω^ (u^)^2 ∇ψ_R^(t,x) dxdt+12∫_Ω^ (u^(t,x))^2 ψ_R^(x,t) dx -12∫_Ω^ (u^(0,x))^2 ψ_R^(0,x) dx, while the second term in the left hand side of (<ref>) becomes ∫_0^t∫_Ω^ D^∇ u^∇ (ψ_R^(t,x)u^) dx=∫_0^t∫_Ω^ D^∇ u^∇ψ_R^(t,x) u^ dxdt+∫_0^t∫_Ω^ D^∇ u^∇ u^ψ_R^(x,t) dxdt. 
We have 1∫_0^t∫_Ω^ B^ u^(1-C^ u^) ∇ (ψ_R^(t,x)u^) dxdt=1∫_0^t∫_Ω^ B^ u^ψ_R^(t,x)∇ u^ dxdt +1∫_0^t∫_Ω^ B^ u^ u^∇ψ_R^(t,x) dxdt -1∫_0^t∫_Ω^ B^ C^∇ u^ψ_R^(t,x)(u^)^2 dxdt -1∫_0^t∫_Ω^ B^ C^ (u^)^3 ∇ψ_R^(t,x) dxdt, integration by parts with respect to the space variable and <ref>, we get 1∫_0^t∫_Ω^ B^ u^(1-C^ u^) ∇ (ψ_R^(t,x)u^) dxdt=-1∫_0^t∫_Ω^ B^ u^∇ (ψ_R^(t,x) u^) dxdt +1∫_0^t∫_Ω^ B^ u^ u^∇ψ_R^(t,x) dxdt+1∫_0^t∫_Ω^ B^ C^ u^∇(ψ_R^(t,x)(u^)^2) dxdt -1∫_0^t∫_Ω^ B^ C^ (u^)^3 ∇ψ_R^(t,x) dxdt, 1∫_0^t∫_Ω^ B^ u^(1-C^ u^) ∇ (ψ_R^(t,x)u^) dxdt= 12∫_0^t∫_Ω^ B^ u^ u^∇ψ_R^(t,x) dxdt -12∫_0^t∫_Ω^ B^ C^ (u^)^3 ∇ψ_R^(t,x) dxdt, 12B^*∫_0^t∫_Ω^ (u^)^2 ∇ψ_R^(t,x) dxdt+12∫_Ω^ (u^(t,x))^2 ψ_R^(t,x)) dx -12∫_Ω^ (u^(0,x))^2 ψ_R^(0,x)) dx +∫_0^t∫_Ω^ D^∇ u^∇ψ_R^(t,x) u^ dxdt+∫_0^t∫_Ω^ D^∇ u^∇ u^ψ_R^(t,x) dxdt - 12∫_0^t∫_Ω^ B^ u^ u^∇ψ_R^(t,x) dxdt +12∫_0^t∫_Ω^ B^ C^ (u^)^3 ∇ψ_R^(t,x) dxdt = ∫_0^t∫_Ω^ f^ (ψ_R^(t,x)u^) dxdt -∫_0^t∫_Γ_N^ g_N(u^) (ψ_R^(t,x)u^) dσ dt. Using <ref> and the definition of ψ_R^, we have lim_R→∞12∫_Ω^ (u^(0,x))^2 ψ_R^(0,x)) dx=0. By the definition (<ref>), we have |∇ψ_R^|, |∇·∇ψ_R^|≤C/R, where C>0 is a constant. Using <ref>, (<ref>) and Theorem <ref>, we conclude lim_R→∞∫_0^t∫_Ω^ D^∇ u^∇ψ_R^(t,x) u^ dxdt=0. Consider the following auxiliary problem: Find G_i (with i∈{1,2}) such that -Δ_y G_i =B^*e_i-B(y)e_i Z, ∇ G_i· n = 0 Γ_N, G_i . Using Lemma <ref>, definition (<ref>), there exist a weak solution G_i∈ H^1_#(Z)/ℝ satisfying (<ref>)–(<ref>). Now using the change of variable y= x/ and defining G_i^(x):=G_i(x/), we get -^2Δ_x G_i^ =B^*e_i-B^(x)e_i Ω_, ∇ G_i^· n = 0 Γ_N^. Thanks to the auxiliary problem (<ref>)–(<ref>) we can write 12∫_0^t∫_Ω^(B^*-B) (u^)^2 ∇ψ_R^(t,x) dxdt =∑_i=1^21/2∫_0^t∫_Ω^∇ G_i^∇( (u^)^2 ∂ψ_R^(t,x)∂ x_i) dxdt. We observe that ∇ G_i^=∇_y G(x/), and |∇( (u^)^2 ∂ψ_R^(t,x)∂ x_i)|≤ |u^∇ u^||∇ψ_R|+|(u^)^2∇·∇ψ_R|. By Theorem <ref> and by (<ref>) we can now estimate the right hand side of (<ref>) in the following way 12∫_0^t∫_Ω^(B^*-B) (u^)^2 ∇ψ_R^(t,x) dxdt ≤CR. Using Theorem <ref>, <ref> and observing that ψ_R^ is nonnegative, we have -∫_0^t∫_Γ_N^ g_N(u^) (ψ_R^(t,x)u^) dσ dt≤ 0. To obtain an estimate on the nonlinear term, we consider the following auxiliary problem: Find H_i (with i∈{1,2}) such that -Δ_y H_i =BCe_i Z, ∇ H_i· n = 0 Γ_N, H_i . Using Lemma <ref>, (<ref>), there exists a weak solution H_i∈ H^1_#(Z)/ℝ satisfying (<ref>)–(<ref>). Using the change of variable y=x/ and defining H_i^:=H_i(x/), we have -^2Δ_x H_i^ =BCe_i Ω_, ∇ H_i^· n = 0 Γ_N^. Hence (<ref>) and (<ref>) and following similar steps used to obtain (<ref>)–(<ref>), lead to 12∫_0^t∫_Ω^ B^ C^ (u^)^3 ∇ψ_R^(t,x) dxdt≤CR. Combining (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we get lim_R→∞∫_Ω^ (u^(t,x))^2 ψ_R^(t,x)) dx=0. So, for any δ>0, there exists a R_δ such that ∫_Ω^ (u^(t,x))^2ψ_R_δ^(t,x) ) dx≤δ. Finally, using the change of variable x→(x+B^*t), we find ∫_Ω^(t)(u^(t,x+B^*t))^2ψ_R_δ(x) ) dx≤δ. Hence, (<ref>) is proved. Assume <ref>–<ref> hold. Then for p,s∈ (0,T) | ∫_Ω^ (p)v^(p,x)q_j,k-∫_Ω^ (s)v^(s,x)q_j,k|≤ C√(p-s), where C is a positive constant depending on j, k∈ℕ. Using the Fundamental Theorem of Calculus, for p,s∈ (0,T) we can write ∫_Ω^ (p)v^(p,x)e_j,k-∫_Ω^ (s)v^(s,x)e_j,k=∫_t^pddt∫_Ω^ (t)v^(t,x)e_j,k. By the change of variable x→ x-B^*t and the product rule, we get ∫_Ω^ (p)v^(p,x)e_j,k-∫_Ω^ (s)v^(s,x)e_j,k=∫_s^p∫_Ω^(ddtu^(t,x)q_j,k-B^*∇ q_j,k u^) dx dt. 
Choosing now ϕ=q_j,k as test function in (<ref>) and integrating the result from s to p with respect to time variable, we obtain ∫_s^p∫_Ω^∂_t u^ q_j,k dx=-∫_s^p∫_Ω^ D^ ∇ u^∇ q_j,k dxdt +1∫_s^p∫_Ω^ B^ u^∇ q_j,k dxdt -1∫_s^p∫_Ω^ B^ C^ (u^)^2 ∇ q_j,k dxdt+∫_s^p∫_Ω^ f^ q_j,k dxdt - ∫_s^p∫_Γ_N^ g_N(u^) q_j,k dσ dt. Combining (<ref>) and (<ref>) leads to ∫_Ω^ (p)v^(p,x)e_j,k-∫_Ω^ (s)v^(s,x)e_j,k=-∫_s^p∫_Ω^ D^∇ u^∇ q_j,k dxdt -1∫_s^p∫_Ω^ B^ C^ (u^)^2 ∇ q_j,k dxdt+∫_s^p∫_Ω^ f^ q_j,k dxdt - ∫_s^p∫_Γ_N^ g_N(u^) q_j,k dσ dt+∫_s^p∫_Ω^B^-B^*∇ q_j,k u^ dx dt. Using Theorem <ref>, <ref>, and the definition of q_j,k, we have |∫_s^p∫_Ω^ D^∇ u^∇ q_j,k dxdt| ≤ C ∫_s^p∇ u^_L^2(Ω^)∇ q_j,k_L^2(Ω^)dt ≤ C ∇ u^_L^2(0,T;L^2(Ω^))√(p-s) ≤ C√(p-s), and |∫_s^p∫_Ω^ f^ q_j,k dxdt|≤ C√(p-s). The trace theorem and <ref> ensure that | ∫_s^p∫_Γ_N^ g_N(u^) q_j,k dσ dt| ≤ C∫_s^p∫_Γ_N^∩ supp{q_j,k} |u^||q_j,k| dσ dt ≤ C ∫_s^pu^_L^2(Γ_N^) dt ≤ C ∫_s^p∇ u^_L^2(Ω^) dt ≤ C√(p-s). Since the functions q_j,k have compact support, using the auxiliary problem (<ref>)–(<ref>), and the integration by parts with respect to space variable, we obtain |∫_s^p∫_Ω^B^-B^*∇ q_j,k u^ dx dt| ≤∑_i=1^2|∫_s^p∫_Ω^Δ G_i ∂/∂ x_i q_j,k u^ dx dt| ≤∑_i=1^2|∫_s^p∫_Ω^∇ G_i ∇(∂/∂ x_i q_j,k) u^ dx dt| + |∫_s^p∫_Ω^∇ G_i ∂/∂ x_i q_j,k∇ u^ dx dt| ≤ C√(p-s). Similarly, using auxiliary problem (<ref>)–(<ref>) and Theorem <ref> we get | 1∫_s^p∫_Ω^ B^ C^ (u^)^2 ∇ q_j,k dxdt| ≤ C√(p-s). Combining the estimates (<ref>)–(<ref>) into (<ref>), we obtain (<ref>). Assume <ref>–<ref> hold. Then there exists v_0∈ L^2(0,T;L^2(ℝ^2)) such that lim_→ 0∫_0^T∫_Ω^(t)|v^ - v_0|^2dxdt=0, where v^ and Ω^(t) are defined in (<ref>) and (<ref>), respectively. We first prove that the sequence {v^_jk} has a subsequence that converges to some {v^0_jk} as → 0. From (<ref>), we have |v^_jk(t)| ≤∫_Ω^|v^(t,x)e_jk(x)|dx ≤v^_L^2(Ω^)e_jk_L^2(Ω^) ≤ C. By Lemma <ref>, we know that the sequence {v^_jk(t)} is equicontinous, hence by using Arzela-Ascoli Theorem we have that there exists a subsequence (denoted again by v^_jk(t)) such that v^_jk(t) uniformly converges to some v^0_jk(t) on C([0,T]). Since [0,T] is a bounded domain, the uniform convergence leads to strong convergence in L^2(0,T). We define v^0_*(t,x):=∑_j∈ℕ,k∈ℤ^2v^0_jk(t)e_jk(x). Claim 1: for fixed j and k, there exists a constant C_jk>0 such that |v^_jk(t)-|Y||Z|v^_jk(t)|≤ C_jk ^2. Proof of Claim 1: v^_jk(t)-|Y||Z|v^_jk(t)=∫_ℝ^2(χ_Y(x/)-|Y||Z|)v^(t,x)e_jk(x)dx, where χ_Y is the characteristic function defined on Z and extended periodically to whole ℝ^2, where χ_Y(x)=1 if x∈ Y, 0 if x∈ Y_0. Since ∫_Z(χ_Y-|Y||Z|)dx=0, the following auxiliary problem has a unique weak solution -ΔΠ(y) = χ_Y(y)-|Y||Z| Z, Π-Z . Using the change of variable y=x/, defining Π^(x):=Π(x/), and stating the problem for whole ℝ^2, we get -^2ΔΠ^(x) = χ_Y(x/)-|Y||Z| ℝ^2. We use (<ref>) in (<ref>) and we get |v^_jk(t)-|Y||Z|v^_jk(t)| ≤^2∫_ℝ^2|∇Π^∇(v^(t,x)e_jk(x))|dx ≤^2∫_ℝ^2∩ supp(e_jk)|∇Π^∇(v^(t,x)e_jk(x))|dx ≤ C_jk^2. Note that to get the inequality (<ref>), we used (<ref>) and Theorem <ref>. Hence we proved the Claim 1. Claim 2: for any δ>0 there exists R_δ∈ℕ such that v^χ_[-R_δ,R_δ]^2-∑_|j|,|k|≤ R_δv^_jke_jk_L^2(0,T;L^2(ℝ^2))< δ/5, where χ_[-R_δ,R_δ]^2 is characteristic function of [-R_δ,R_δ]^2. Proof of Claim 2 follows similar arguments as in Lemma 4 from <cit.>. We use Theorem <ref>, Lemma <ref>, Lemma <ref> and Rellich–Kondrachov theorem. For details see Lemma 4 from <cit.>. 
Now, from Lemma <ref> there exists N_1∈ℕ such that v^- v^χ_[-N_1,N_1]^2_L^2(0,T;L^2(ℝ^2))< δ/5. Using the property (<ref>) for v^ and (<ref>), we can guarantee that v^χ_[-R_δ,R_δ]^2-∑_|j|,|k|≤ R_δv^_jke_jk_L^2(0,T;L^2(Ω^(t)))< δ/5. Choosing small enough and using (<ref>), we have ∑_|j|,|k|≤ R_δv^_jke_jk-|Z||Y|∑_|j|,|k|≤ R_δv^_jke_jk_L^2(0,T;L^2(Ω^(t)))< δ/5. Since v^_jk strongly converges to v^0_jk in L^2(0,T), for small enough , we are lead to ∑_|j|,|k|≤ R_δv^_jke_jk-∑_|j|,|k|≤ R_δv^0_jke_jk_L^2(0,T;L^2(Ω^(t)))< |Z||Y|δ/5. From (<ref>), we get ∑_|j|,|k|≤ R_δv^0_jke_jk-v^0_*(t,x) _L^2(0,T;L^2(Ω^(t))< δ/5. Choosing in (<ref>)–(<ref>) N_1,R_δ large enough, we obtain for small enough that v^(t,x)-|Z||Y|v^0_*(t,x) _L^2(0,T;L^2(Ω^(t))< δ/5. Finally, we conclude the proof by choosing v^0(t,x)=|Z||Y|v^0_*(t,x). Observe that, by using the change of variable x→ x-B^*t into the identity (<ref>), we obtain the following strong two-scale convergence with drift lim_→ 0∫_0^T∫_Ω^|u^(t,x) - v_0(t,x-B^*t)|^2dxdt=0. This is a useful result that will appear in a several context in the following. §.§ Two-scale convergence with drift In this section we recall the concept of two-scale convergence with drift. Let r∈ℝ^2 and u^∈ L^2(0,T;L^2(Ω^), we say u^ two-scale converges with drift r to u_0, if for all ϕ∈ C_c^∞((0,T)×ℝ^2;C_#^∞ (Z)) the following identity holds lim_→ 0∫_(0,T)×Ω^ u^(t,x)ϕ(t,x-rx,x/)dxdt= ∫_0^T∫_ℝ^2∫_Z u_0(t,x,y)ϕ(t,x,y)dydxdt. We denote the convergence as u^2-drift(r)u_0. Let r∈ℝ^2 and u^∈ L^2(0,T;L^2(Ω^). We say that u^ strongly two-scale converges with the drift r to u_0 if and only if lim_→ 0∫_0^T∫_Ω^|u^(t,x) - u_0(t,x-rt,x)|^2dxdt=0. We denote this convergence by u^u_0. §.§.§ Compactness results Let ϕ∈ L^2((0,T)×ℝ^2;C_#(Z)). Then lim_→ 0∫_0^T∫_ℝ^2ϕ(t,x-B^*t,x)^2dxdt=∫_0^T∫_ℝ^2∫_Zϕ(t,x,y)^2dydxdt holds true. We refer the reader to Proposition 2.6.7 in <cit.> for the details leading to a proof of this statement. Let u^∈ L^2(0,T;H^1(Ω^)). Assume there exists a constant C>0 independent of such that u^_L^2(0,T;H^1(Ω^))≤ C. Then, there exist u_0∈ L^2(0,T;H^1(ℝ^2)) and u_1∈ L^2((0,T)× H^1(ℝ^2);H_#^1(Z)) such that u^2-drift(r) u_0, ∇ u^2-drift(r)∇_x u_0+∇_y u_1. For a detailed proof of this compactness result, see <cit.>. Let u^∈ L^2(0,T;L^2(Γ_N^)). Assume there exists a constant C>0 independent of such that u^_L^2(0,T;L^2(Γ_N^))≤ C. Then, there exists u_0∈ L^2(0,T;L^2(ℝ^2×Γ_N)) such that lim_→ 0∫_(0,T)×Γ_N^ u^(t,x)ϕ(t,x-rt,x/)dxdt= ∫_0^T∫_ℝ^2∫_Γ_N u_0(t,x,y)ϕ(t,x,y)dydxdt. For a detailed proof, see Proposition 5.4 of <cit.>. It is useful to note that the function v_0 which we obtained from (<ref>) and u_0 coming from (<ref>) are equal for a.e (t,x)∈ (0,T)×ℝ^2. To see this, we can argue as follows: for any ϕ∈ C_c^∞((0,T)×ℝ^2), we have ∫_0^T∫_ℝ^2(u_0-v_0)ϕ dxdt =lim_→ 01/|Z|∫_0^T∫_Ω^(u^(t,x)-v_0(t,x-B^*t))ϕ(t,x-B^*t)dxdt ≤lim_→ 01/|Z|u^(t,x)-v_0(t,x,x-B^*t)_L^2((0,T)×Ω^)ϕ(t,x-B^*t)_L^2((0,T)×Ω^) =0. Hence, we can conclude that v_0 and u_0 coincide. §.§ Limit problem – Structure of the upscaled model equations The main result of this paper is stated in the next Theorem. A connected companion result is Theorem <ref>. Assume <ref>–<ref> hold and g_N(r)=r for all r∈ℝ. Then the weak solution u^ to the microscopic problem (<ref>)–(<ref>) two-scale converges with the drift B^* to u^0(t,x) as → 0, where u^0(t,x) is the weak solution of the homogenized problem, viz. 
∂_t u_0 +div( -D^*(u_0,W)∇_x u_0) =1/|Z|∫_Z f dy - |Γ_N|/|Z|g_N(u_0) (0,T)×ℝ^2, u_0(0) =g ℝ^2 -∇_y· D(y) ∇_y w_i+B(y)(1-2C(y)u_0)·∇_y w_i =∇_y D(y) e_i+B^* e_i-B(y)(1-2C(y)u_0)· e_i (0,T)×ℝ^2× Z, ( -D(y)∇_y w_i+B(y)(1-2C(y)u_0)w_i)· n_y = ( -D(y)e_i)· n_y (0,T)×ℝ^2×Γ_N, w_i(t,x,·) , where B^*e_i:=∫_Z B(y)e_idy|Z| and D^*(u_0,W):=1/|Z|∫_Z D(y)(I+[ ∂ w_1/∂ y_1 ∂ w_2/∂ y_1; ∂ w_1/∂ y_2 ∂ w_2/∂ y_2 ]) dy +1/|Z|B^*∫_Z W(y)^t dy-1/|Z|∫_ZB(y)(1-2C(y)u_0) W(y)^t dy for all t∈ (0,T) and a.e x∈ℝ^2, y∈ Z, with W:=(w_1,w_2). Using the compactness result stated in Theorem <ref> and the energy estimates obtained in Theorem <ref>, we can state that there exist u_0∈ L^2(0,T;H^1(ℝ^2)) and u_1∈ L^2((0,T)× H^1(ℝ^2);H_#^1(Z)) such that u^2-drift(B^*) u_0, ∇ u^2-drift(B^*)∇_x u_0+∇_y u_1. Take ϕ_0∈ C_c^∞((0,T)×ℝ^2) and ϕ_1∈ C_c^∞((0,T)×ℝ^2)× C_#^∞(Z). Now, in the weak formulation (<ref>) of our microscopic problem, we choose ϕ=ϕ_0(t,x-B^*t)+ϕ_1(t,x-B^*t,x) to obtain: ∫_0^T∫_Ω^∂_t u^ϕ_0(t,x-B^*t) dxdt+∫_0^T∫_Ω^ D^∇ u^∇ϕ_0(t,x-B^*t) dxdt -1∫_0^T∫_Ω^ B^ u^(1-C^ u^) ∇ϕ_0(t,x-B^*t) dxdt +∫_0^T∫_Ω^∂_t u^ϕ_1(t,x-B^*t,x) dxdt +∫_0^T∫_Ω^ D^∇ u^∇ϕ_1(t,x-B^*t,x) dxdt - ∫_0^T∫_Ω^ B^ u^(1-C^ u^) ∇ϕ_1(t,x-B^*t,x) dxdt = ∫_0^T∫_Ω^ f^ϕ_0(t,x-B^*t) dxdt- ∫_0^T∫_Γ_N^ g_N(u^) ϕ_0(t,x-B^*t) dσ dt ∫_0^T∫_Ω^ f^ϕ_1(t,x-B^*t,x) dx- ^2∫_0^T∫_Γ_N^ g_N(u^) ϕ_1(t,x-B^*t,x)dσ dt. By using the integration by parts and the chain rule, we have ∫_0^T∫_Ω^∂_t u^ϕ_0(t,x-B^*t) dxdt =-∫_0^T∫_Ω^ u^∂_tϕ_0(t,x-B^*t) dxdt+ B^*/∫_0^T∫_Ω^ u^∇_xϕ_0(t,x-B^*t) dxdt. We rewrite the next term as ∫_0^T∫_Ω^∂_t u^ϕ_1(t,x-B^*t,x) dx =-∫_0^T∫_Ω^ u^∂_tϕ_1(t,x-B^*t,x) dxdt+ B^*∫_Ω^ u^∇_xϕ_1(t,x-B^*t,x) dxdt. The following calculations rules will be used in the sequel, viz. ∇(ϕ_1(t,x-B^*t,x)) =∇_xϕ_1(t,x-B^*t,x)+1/∇_yϕ_1(t,x-B^*t,x), ∇(ϕ_0(t,x-B^*t)) =∇_xϕ_0(t,x-B^*t). From assumption <ref>, we deduce ∫_0^T∫_Ω^ B^ u^(1-C^ u^) ∇ϕ_0(t,x-B^*t) dxdt =-∫_0^T∫_Ω^ B^∇ u^ϕ_0(t,x-B^*t) dxdt+∫_0^T∫_Ω^ B^ C^ u^∇ u^ϕ_0(t,x-B^*t) dxdt. To derive the structure of the cell problem, we choose in (<ref>) as test function ϕ_0≡ 0. Then from (<ref>)–(<ref>), we are led to -∫_0^T∫_Ω^ u^∂_tϕ_1(t,x-B^*t,x) dxdt+ B^*∫_0^T∫_Ω^ u^∇_xϕ_1(t,x-B^*t,x) dxdt +∫_0^T∫_Ω^ D^∇ u^∇_x ϕ_1(t,x-B^*t,x) dxdt+∫_0^T∫_Ω^ D^∇ u^∇_y ϕ_1(t,x-B^*t,x) dxdt ∫_0^T∫_Ω^ B^(1-2C^ u^)∇ u^ϕ_1(t,x-B^*t,x) dxdt=∫_0^T∫_Ω^f^ϕ_1(t,x-B^*t,x) - ^2 ∫_0^T∫_Γ_N^g_N(u^) ϕ_1(t,x-B^*t,x)dσ dt. As a direct application of Theorem <ref>, we obtain lim_→ 0∫_0^T∫_Ω^ u^∂_tϕ_1(t,x-B^*t,x) dxdt=0. Using (<ref>), we have lim_→ 0 B^*∫_0^T∫_Ω^ u^∇_xϕ_1(t,x-B^*t,x) dxdt= B^*∫_0^T∫_ℝ^2∫_Z u_0 ∇_xϕ_1(t,x,y) dydxdt =-B^*∫_0^T∫_ℝ^2∫_Z ∇_xu_0 ϕ_1(t,x,y) dxdt. Relying on <ref> and (<ref>), we can write lim_→ 0∫_0^T∫_Ω^ D^∇ u^∇_x ϕ_1(t,x-B^*t,x) dxdt = 0, lim_→ 0∫_0^T∫_Ω^ D^∇ u^∇_y ϕ_1(t,x-B^*t,x) dxdt = ∫_0^T∫_ℝ^2∫_Z D^ (∇_xu_0+∇_yu_1) ∇_y ϕ_1(t,x,y) dydxdt. Using <ref> together with the strong convergence result stated in Theorem <ref>, and recalling as well (<ref>), we get lim_→ 0∫_0^T∫_Ω^ B^(1-2C^ u^)∇ u^ϕ_1(t,x-B^*t,x) dxdt = ∫_0^T∫_ℝ^2∫_Z B(y)(1-2C(y) u_0)(∇_x u_0+∇_yu_1) ϕ_1(t,x,y) dydxdt. By <ref> and Theorem <ref>, we get lim_→ 0∫_0^T∫_Ω^ f^ϕ_1(t,x-B^*t,x) dxdt= 0. Using <ref> jointly with Theorem <ref> yields lim_→ 0^2∫_0^T∫_Γ_N^ g_N(u^) ϕ_1(t,x-B^*t,x)dσ dt= 0. 
Now, passing to → 0 in (<ref>) and using (<ref>)–(<ref>), we obtain -B^*∫_0^T∫_ℝ^2∫_Z ∇_xu_0 ϕ_1(t,x,y) dydxdt+ ∫_0^T∫_ℝ^2∫_Z D^∇_xu_0 ∇_y ϕ_1(t,x,y) dydxdt +∫_0^T∫_ℝ^2∫_Z D^∇_yu_1 ∇_y ϕ_1(t,x,y) dydxdt+∫_0^T∫_ℝ^2∫_Z B(y)(1-2C(y) u_0)∇_x u_0 ϕ_1(t,x,y) dydxdt +∫_0^T∫_ℝ^2∫_Z B(y)(1-2C(y) u_0)∇_yu_1 ϕ_1(t,x,y) dydxdt=0. Looking now at (<ref>), the choice ϕ_1(t,x,y)=ϕ_2(t,x)ϕ_3(y) yields for almost every (t,x)∈ (0,T)×ℝ^2 the identity: -B^*∫_Z ∇_xu_0 ϕ_3(y) dy+ ∫_Z D^∇_xu_0 ∇_y ϕ_3(y) dy+∫_Z D^∇_yu_1 ∇_y ϕ_3(y) dy +∫_Z B(y)(1-2C(y) u_0)∇_x u_0 ϕ_3(y) dy +∫_Z B(y)(1-2C(y) u_0)∇_yu_1 ϕ_3(y) dy=0. The structure of (<ref>) allows us to choose further u_1(t,x,y):=W(t,x,y)·∇_x u_0(x), where W:=(w_1,w_2) with w_i (with i∈{1,2}) solving the following cell problem: -B^*∫_Z e_i ϕ_3(y) dy+ ∫_Z D^ e_i ∇_y ϕ_3(y) dy+∫_Z D^∇_yw_i ∇_y ϕ_3(y) dy +∫_Z B(y)(1-2C(y) u_0) e_i ϕ_3(y) dy +∫_Z B(y)(1-2C(y) u_0)∇_yw_i ϕ_3(y) dy=0. Note that (<ref>) is the weak formulation of the cell problem (<ref>)–(<ref>). To derive the macroscpic equation (<ref>), in (<ref>) we choose ϕ_1≡ 0 and we get -∫_0^T∫_Ω^ u^∂_tϕ_0(t,x-B^*t) dxdt +∫_0^T∫_Ω^ D^∇ u^∇ϕ_0(t,x-B^*t) dxdt + ∫_0^T∫_Ω^B^*-B^/ u^∇_xϕ_0(t,x-B^*t) dxdt +1∫_Ω^ B^ C^ (u^)^2 ∇ϕ_0(t,x-B^*t) dxdt = ∫_0^T∫_Ω^ f^ϕ_0(t,x-B^*t) dxdt- ∫_0^T∫_Γ_N^ g_N(u^) ϕ_0(t,x-B^*t) dσ dt. Using (<ref>), we get lim_→ 0 -∫_0^T∫_Ω^ u^∂_tϕ_0(t,x-B^*t) dxdt = -∫_0^T∫_ℝ^2∫_Z u_0 ∂_tϕ_0(t,x) dydxdt =|Z|∫_0^T∫_ℝ^2∂_t u_0 ϕ_0(t,x) dxdt. Using (<ref>) and <ref>, we get lim_→ 0∫_0^T∫_Ω^ D^∇ u^∇_x ϕ_0(t,x-B^*t) dxdt = ∫_0^T∫_ℝ^2∫_Z D(y) (∇_x u^0+∇_y u_1) ∇_x ϕ_0(t,x) dxdt. Using the auxiliary problem (<ref>)–(<ref>) and (<ref>), (<ref>) we have lim_→ 0∫_0^T∫_Ω^ B^*-B^/ u^∇_xϕ_0(t,x-B^*t) dxdt=lim_→ 0-∑_i=1^2∫_0^T∫_Ω^Δ G_i^ u^∂/∂ x_iϕ_0(t,x-B^*t) dxdt =lim_→ 0∑_i=1^2∫_0^T∫_Ω^∇ G_i^∇ (u^∂/∂ x_iϕ_0(t,x-B^*t)) dxdt =lim_→ 0∑_i=1^2∫_0^T∫_Ω^1/∇_y G_i(x/) ∇ u^∂/∂ x_iϕ_0(t,x-B^*t) dxdt +lim_→ 0∑_i=1^2∫_0^T∫_Ω^1/∇_y G_i(x/) u^∇_x(∂/∂ x_iϕ_0(t,x-B^*t)) dxdt = ∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yG_i(y)(∇_xu_0+∇_yu_1)∂ϕ_0/∂ x_i(t,x)dydxdt +∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yG_i(y)u_0∇_x(∂ϕ_0/∂ x_i(t,x))dydxdt =∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yG_i(y)(∇_xu_0+∇_yu_1)∂ϕ_0/∂ x_i(t,x)dydxdt -∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yG_i(y)∇_xu_0(∂ϕ_0/∂ x_i(t,x))dydxdt =∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yG_i(y)∇_yu_1∂ϕ_0/∂ x_i(t,x)dydxdt =-∑_i=1^2∫_0^T∫_ℝ^2∫_Z Δ_yG_i(y)u_1∂ϕ_0/∂ x_i(t,x)dydxdt =∫_0^T∫_ℝ^2∫_Z(B^*-B(y))u_1∇_xϕ_0(t,x)dydxdt =-∫_0^T∫_ℝ^2∫_Z(B^*-B(y))∇_x u_1ϕ_0(t,x)dydxdt. Using the auxiliary problem (<ref>)–(<ref>), (<ref>) and (<ref>), we obtain lim_→ 0∫_0^T∫_Ω^ B^ C^/ (u^)^2 ∇_xϕ_0(t,x-B^*t) dxdt=-lim_→ 0∑_i=1^2∫_0^T∫_Ω^Δ H_i^ (u^)^2 ∂/∂ x_iϕ_0(t,x-B^*t) dxdt =lim_→ 0∑_i=1^2∫_0^T∫_Ω^∇ H_i^∇ ((u^)^2 ∂ϕ_0/∂ x_i(t,x-B^*t)) dxdt =2lim_→ 0∑_i=1^2∫_0^T∫_Ω^1/∇_y H_i(x/) u^∇ u^∂ϕ_0/∂ x_i(t,x-B^*t) dxdt +lim_→ 0∑_i=1^2∫_0^T∫_Ω^1/∇_y H_i(x/) (u^)^2 ∇_x(∂ϕ_0/∂ x_i(t,x-B^*t)) dxdt = 2∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yH_i(y)u_0(∇_xu_0+∇_yu_1)∂ϕ_0/∂ x_i(t,x)dydxdt +∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yH_i(y)(u_0)^2∇_x(∂ϕ_0/∂ x_i(t,x))dydxdt =2∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yH_i(y)u_0(∇_xu_0+∇_yu_1)∂ϕ_0/∂ x_i(t,x)dydxdt -2∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yH_i(y)u_0∇_xu_0(∂ϕ_0/∂ x_i(t,x))dydxdt =2∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yH_i(y)u_0∇_yu_1∂ϕ_0/∂ x_i(t,x)dydxdt =-2∑_i=1^2∫_0^T∫_ℝ^2∫_Z Δ_yH_i(y)u_0u_1∂ϕ_0/∂ x_i(t,x)dydxdt =2∫_0^T∫_ℝ^2∫_ZB(y)C(y)u_0u_1∇_xϕ_0(t,x)dydxdt =-2∫_0^T∫_ℝ^2∫_ZB(y)C(y)∇_x(u_0 u_1)ϕ_0(t,x)dydxdt. Using <ref>, we get lim_→ 0∫_0^T∫_Ω^ f^(x,x) ϕ_0(t,x-B^*t) dxdt= ∫_0^T∫_ℝ^2∫_Z f(x,y)dydxdt. 
Using <ref> and Theorem (<ref>), gives lim_→ 0∫_0^T∫_Γ_N^ g_N(u^) ϕ_0(t,x-B^*t) dσ dt =lim_→ 0∫_0^T∫_Γ_N^ u^ϕ_0(t,x-B^*t) dσ dt =|Γ_N|∫_0^T∫_ℝ^2u_0(t,x)ϕ_0(t,x)dxdt. By (<ref>)–(<ref>), the passage to the limit → 0 in (<ref>) discovers the weak form |Z|∫_0^T∫_ℝ^2∂_t u_0 ϕ_0(t,x) dxdt+∫_0^T∫_ℝ^2∫_Z D(y) ∇_x u_0 ∇_x ϕ_0(t,x) dxdt +∫_0^T∫_ℝ^2∫_Z D(y) ∇_y u_1 ∇_x ϕ_0(t,x) dxdt-∫_0^T∫_ℝ^2∫_Z(B^*-B(y))∇_x u_1ϕ_0(t,x)dydxdt -2∫_0^T∫_ℝ^2∫_ZB(y)C(y)∇_x(u_0 u_1)ϕ_0(t,x)dydxdt=∫_0^T∫_ℝ^2∫_Z f(x,y)ϕ_0(t,x)dydxdt -|Γ_N|∫_0^T∫_ℝ^2u_0(t,x)ϕ_0(t,x)dxdt. Inserting the ansatz (<ref>) into (<ref>), we can rewrite the last identity as ∫_0^T∫_ℝ^2∂_t u_0 ϕ_0(t,x) dxdt+∫_0^T∫_ℝ^2 D^*(u_0,W) ∇_x u_0 ∇_x ϕ_0(t,x) dxdt =1/|Z|∫_0^T∫_ℝ^2∫_Z f(x,y)ϕ_0(t,x)dydxdt-|Γ_N|/|Z|∫_0^T∫_ℝ^2u_0(t,x)ϕ_0(t,x)dxdt, where D^*(u_0,W) defined as (<ref>) and (<ref>) is the weak formulation of (<ref>). It is worth noting that the homogenization limit derived in Theorem <ref> and the corrector convergence result stated in Theorem <ref> in the next section are still valid if we replace the assumption g_N(r)=r with g_N(r)= k, where k∈ℝ is fixed arbitrarily. § SEARCHING FOR CORRECTORS In this section, we study the strong convergence of the corrector term obtained from the homogenized limit problem. To prove such corrector-type result, we rely on techniques similar to those used in <cit.> to prove the strong convergence of solutions to the microscopic problem in L^2. We begin with stating two auxiliary lemmas which later will be employed to prove the wanted strong convergence result. We omit their proofs since they are straightforward extensions of classical results related to the concept of two-scale convergence (Theorem 17 of <cit.>) and, respectively, to the two-scale convergence with drift (see in particular <cit.>). Let v^∈ L^2(0,T;L^2(Ω^)) such that v^2-drift(B^*) v_0 as → 0 for some v_0=v_0(t,x,y)∈ L^2((0,T)×ℝ^2;L_#^2(Z)). Then it holds lim inf_→ 0∫_0^T∫_Ω^(v^)^2 dxdt≥∫_0^T∫_ℝ^2∫_Z v_0^2 dydxdt. Let v^∈ L^2(0,T;Γ_N^) such that lim_→ 0∫_(0,T)×Γ_N^ v^(t,x)ϕ(t,x-B^*t,x/)dxdt= ∫_0^T∫_ℝ^2∫_Γ_N v_0(t,x,y)ϕ(t,x,y)dydxdt, for some v_0(t,x,y)∈ L^2((0,T)×ℝ^2;L_#^2(Γ_N)). Then it holds lim inf_→ 0∫_0^T∫_Γ_N^(v^)^2 dxdt≥∫_0^T∫_ℝ^2∫_Γ_N v_0^2 dydxdt. Let u^∈ L^2(0,T;H^1(Ω^)) satisfy strongly two-scale convergence with drift to u_0 in the sense of Defenition <ref>. Then it holds: lim_→ 0u^_L^2((0,T)×Ω^)=u_0_L^2((0,T)×ℝ^2× Z). By the Minkowski inequality, we have lim_→ 0u^(t,x)_L^2((0,T)×Ω^)≤lim_→ 0 u^(t,x)-u_0(t,x-B^*t,x)_L^2((0,T)×Ω^) +lim_→ 0u_0(t,x-B^*t,x)_L^2((0,T)×Ω^). Now, using Lemma <ref> and (<ref>) to deal with (<ref>) leads to 0≤|lim_→ 0u^(t,x)_L^2((0,T)×Ω^)-u_0_L^2((0,T)×ℝ^2× Z)|≤ 0. The main result of this section is stated in the next Theorem. Assume <ref>–<ref> hold, g_N(r)=r for all r∈ℝ and f^ f . Then lim_→ 0∇( u^(t,x/)-u_0(t,x-B^*t)- u_1(t,x-B^*t,x/))_L^2(0,T;L^2(Ω^))=0, where u^ solves the original microscopic problem and u_0 and u_1 are given cf. (<ref>) and (<ref>), respectively. We choose as test function ϕ=u^ in the weak formulation (<ref>). Integrating the result from 0 to t, we obtain ∫_0^t ∫_Ω^∂_t u^ u^ dxds+∫_0^t ∫_Ω^D^∇ u^∇ u^ dxds - 1/∫_0^t ∫_Ω^B^ P(u^)∇ u^ dxds = ∫_0^t ∫_Ω^f^ u^ dxds-∫_0^t ∫_Γ_N^ g_N(u^) u^ dσ ds. Using (<ref>) into (<ref>), yields 1/2∫_Ω^ (u^)^2 dx +∫_0^t ∫_Ω^D^∇ u^∇ u^ dxds= 1/2∫_Ω^ (u^(0))^2 dx +∫_0^t ∫_Ω^f^ u^ dxds -∫_0^t ∫_Γ_N^ g_N(u^) u^ dσ ds. 
Since u^ strongly converges to u_0, f^ strongly converges to f in the sense of (<ref>), we have lim_→ 0∫_0^t ∫_Ω^f^ (x)u^ (t,x)dxds=∫_0^t ∫_ℝ^2∫_Z f(x,y) u_0(t,x) dydxds. We integrate (<ref>) from 0 to p, using Lemma <ref>, Lemma <ref> and pass → 0, we arrive at |Z|/2∫_0^p∫_ℝ^2 (u_0)^2 dxdt+ lim_→ 0 ∫_0^p ∫_0^t ∫_Ω^D^∇ u^∇ u^ dxdsdt = |Z|/2∫_0^p∫_ℝ^2 g^2 dxdt +∫_0^p∫_0^t ∫_ℝ^2∫_Z f u_0 dydxdsdt - lim_→ 0∫_0^p∫_0^t ∫_Γ_N^ (u^)^2 dσ dsdt. Using Lemma <ref> and (<ref>) in (<ref>), we have 1/2∫_0^p∫_ℝ^2 (u_0)^2 dxdt+ lim_→ 0 1/|Z|∫_0^p ∫_0^t∫_Ω^D^∇ u^∇ u^ dxdsdt ≤1/2∫_0^p∫_ℝ^2 g^2 dxdt +1/|Z|∫_0^p∫_0^t ∫_ℝ^2∫_Z f u_0 dydxdsdt- |Γ_N|/|Z|∫_0^p∫_0^t∫_ℝ^2 u_0^2 dx dsdt. Now, observing the structure of D^* as it appears in (<ref>), we obtain for any ξ∈ℝ^2 that D^*ξ·ξ =1/|Z|∫_Z D(ξ+∇_y ∑ w_i ξ_i)· D(ξ+∇_y ∑ w_i ξ_i). Using the structure of u_1 from (<ref>) and (<ref>), we have D^*∇_x u_0 ·∇_x u_0 =1/|Z|∫_Z D(∇_x u_0+∇_y u_1)· (∇_x u_0+∇_y u_1). Consider the weak form of (<ref>) and choose the test function u_0. We thus obtain ∫_0^t∫_ℝ^2∂_t u_0 u_0 dxds+∫_0^t∫_ℝ^2 D^*∇_x u_0 ∇_x u_0 dxds =1/|Z|∫_0^t∫_ℝ^2∫_Z fu_0dydxds-|Γ_N|/|Z|∫_0^t∫_ℝ^2u_0^2dxds, and hence, it holds as well that 1/2∫_ℝ^2 u_0^2 dx+1/|Z|∫_0^t∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)· (∇_x u_0+∇_y u_1)dy dxds =1/|Z|∫_0^t∫_ℝ^2∫_Z fu_0 dydxds-|Γ_N|/|Z|∫_0^t∫_ℝ^2u_0^2dxds+ 1/2∫_ℝ^2 g^2 dx. Integrating (<ref>) from 0 to p and then comparing with (<ref>), we get 1/2 ∫_0^p∫_ℝ^2 u_0^2 dxdt+ lim_→ 01/|Z|∫_0^p ∫_0^t∫_Ω^D^∇ u^∇ u^ dxdsdt ≤1/2∫_0^p∫_ℝ^2 u_0^2 dxdt+1/|Z|∫_0^p∫_0^t∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)· (∇_x u_0+∇_y u_1)dy dxdsdt. So, lim_→ 0∫_0^p ∫_0^t∫_Ω^D^∇ u^∇ u^ dxdsdt ≤∫_0^p ∫_0^t∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)· (∇_x u_0+∇_y u_1)dy dxdsdt. From Lemma (<ref>) and (<ref>), we have ∫_0^p∫_0^t∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)· (∇_x u_0+∇_y u_1)dy dxdsdt ≤lim_→ 0∫_0^p∫_0^t ∫_Ω^D^∇ u^∇ u^ dxdsdt. Comparing (<ref>) and (<ref>), lead us to lim_→ 0∫_0^p∫_0^t∫_Ω^D^∇ u^∇ u^ dxdsdt=∫_0^p∫_0^t∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)· (∇_x u_0+∇_y u_1)dy dxdsdt. Now, differentiating (<ref>) with respect to p, using the Fundamental Theorem of Calculus, and finally choosing t=T, allows us to write lim_→ 0∫_0^T∫_Ω^D^∇ u^∇ u^ dxdt=∫_0^T∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)· (∇_x u_0+∇_y u_1)dy dxdt. By the ellipticity condition of D, we have θ ∇( u^(t,x)-u_0(t,x-B^*t)- u_1(t,x-B^*t,x/))_L^2(0,T;L^2(Ω^)) ≤∫_0^T ∫_Ω^D^∇( u^(t,x)-u_0(t,x-B^*t)- u_1(t,x-B^*t,x/)) ·∇( u^(t,x)-u_0(t,x-B^*t)- u_1(t,x-B^*t,x/)) dxdt. Using the definition of two-scale convergence with drift, we have lim_→ 0∫_0^T ∫_Ω^D^∇ u^∇ u_0 dxdt =∫_0^T∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)·∇_x u_0 dy dxdt, lim_→ 0∫_0^T ∫_Ω^ D^∇ u^∇ u_1 dxdt =∫_0^T∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)·∇_y u_1 dy dxdt, lim_→ 0∫_0^T ∫_Ω^ D^∇ u_0∇ u_0dxdt =∫_0^T∫_ℝ^2∫_Z D∇_x u_0∇_x u_0 dy dxdt, lim_→ 0∫_0^T ∫_Ω^^2 D^∇ u_1∇ u_1 dxdt =∫_0^T∫_ℝ^2∫_Z D∇_y u_1∇_y u_1dy dxdt, Now, using (<ref>), (<ref>)–(<ref>), we finally obtain lim_→ 0∫_0^T ∫_Ω^D^∇( u^(t,x)-u_0(t,x-B^*t)- u_1(t,x-B^*t,x/)) ·∇( u^(t,x)-u_0(t,x-B^*t)- u_1(t,x-B^*t,x/)) dxdt=0. We conclude our proof by using (<ref>) and (<ref>) to obtain the desired result (<ref>). § CONCLUSION AND OUTLOOK We rigorously derived a macroscopic equation for a reaction-diffusion problem with nonlinear drift posed in an unbounded perforated domain together with Robin type boundary data. The main challenge for the homogenization was the presence of the nonlinear drift term which is scaled with a factor of order of 𝒪(1). 
The key tools used to handle the homogenization asymptotics were the two-scale convergence with drift and the strong convergence formulated in moving coordinates. To study the well-posedness of the microscopic problem, we relied on a classical result from <cit.>, which we extended to cover the case of the unbounded perforated domain as needed in our target problem. To do so, we used a suitable comparison principle jointly with a monotonicity argument. The obtained upscaled equation is a reaction-dispersion equation posed in an unbounded domain, strongly coupled with an elliptic cell problem posed in a bounded domain. The corresponding dispersion tensor carries information concerning the microscopic diffusion and drift. We also provided a strong convergence result related to the structure of the corrector, estimating the difference between the micro- and macro-solutions in the H^1 norm. We performed the analysis and the subsequent discussion in two space dimensions, since the original modeling of our equations was done in terms of interacting particle systems evolving in a plane. However, both the homogenization result and the corrector convergence hold in higher dimensions. One technical assumption deserves further attention: we assumed here that the mean value of the coefficient in front of the nonlinear drift is zero. We believe it is a challenging open problem to handle the homogenization of such a nonlinear drift when its mean value is non-vanishing. In this work, we only discussed the convergence of the corrector. From a more practical perspective, it would nevertheless be very useful to derive a corrector estimate which can later be used to analyse the problem numerically. To date, to our knowledge, the only corrector estimate result related to large-drift homogenization problems is reported in <cit.>, where the authors considered a linear diffusion-large convection problem. Along the same line of thinking, developing a two-scale finite element approach for the micro-macro problem could be an excellent research direction to pursue (following e.g. the works <cit.>, <cit.>, or <cit.>). The challenging parts for the numerical analysis are the presence of the large nonlinear drift in the microscopic problem as well as the strong coupling through the transport coefficient in the upscaled problem. § ACKNOWLEDGMENTS We thank H. Hutridurga (Bombay) for fruitful discussions during his KAAS seminar. V.R. thanks M. Eden (Karlstad) for interesting discussions related to the corrector result. The work of V.R. and A.M. is partially supported by the Swedish Research Council's project “Homogenization and dimension reduction of thin heterogeneous layers" (grant nr. VR 2018-03648).
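In view of the numerical directions mentioned above, the following Python sketch indicates how the effective dispersion tensor D^*(u_0,W) from the formula in the main theorem could be assembled by quadrature once the cell solutions w_1, w_2 are available on a uniform grid of the cell Z (computed, e.g., by a finite element solver that is not shown here). This is only an illustrative sketch under those assumptions; the array names, the grid-based treatment of the obstacle, and the finite-difference gradients are ours and are not part of the paper.

# Illustrative assembly of D*(u_0, W) by midpoint quadrature on an n x n grid
# of the unit cell; `fluid` marks the perforated part Z (True outside Z_0).
import numpy as np

def effective_dispersion(D, B, C, W, fluid, u0, h):
    """D: (2,2,n,n) diffusion tensor; B: (2,n,n) drift; C: (n,n) coefficient;
    W: (2,n,n) cell correctors (w_1, w_2); fluid: (n,n) bool mask of Z;
    u0: local value of the macroscopic solution; h: grid spacing.
    Returns the 2 x 2 tensor D*(u0, W)."""
    cell_measure = fluid.sum() * h * h              # |Z|
    w = np.where(fluid, W, 0.0)

    # grad_w[i, j] = d w_j / d y_i (finite differences; crude near Gamma_N)
    grad_w = np.empty((2, 2) + W.shape[1:])
    for j in range(2):
        d0, d1 = np.gradient(w[j], h, h)
        grad_w[0, j], grad_w[1, j] = d0, d1

    eye = np.eye(2)[:, :, None, None]
    term1 = np.einsum('ik...,kj...->ij...', D, eye + grad_w)   # D (I + grad W)

    B_star = (B * fluid).reshape(2, -1).sum(axis=1) * h * h / cell_measure
    term2 = np.einsum('i,j...->ij...', B_star, w)              # B* W^t
    term3 = np.einsum('i...,j...->ij...', B * (1.0 - 2.0 * C * u0), w)

    integrand = np.where(fluid, term1 + term2 - term3, 0.0)
    return integrand.sum(axis=(2, 3)) * h * h / cell_measure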
http://arxiv.org/abs/2307.06062v1
20230712102434
Predicting the turbulent transport of cosmic rays via neural networks
[ "D. I. Palade" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.GA", "physics.plasm-ph", "physics.space-ph" ]
[email protected] National Institute of Laser, Plasma and Radiation Physics, Măgurele, Bucharest, Romania Faculty of Physics, University of Bucharest, Măgurele, Romania A fast artificial neural network is developed for the prediction of cosmic ray transport in turbulent astrophysical magnetic fields. The setup is trained and tested on bespoke datasets that are constructed with the aid of test-particle numerical simulations of relativistic cosmic ray dynamics in synthetic stochastic fields. The neural network uses, as input, particle and field properties and estimates transport coefficients 10^7 faster than standard numerical simulations with an overall error of ∼ 5 % . Predicting the turbulent transport of cosmic rays via neural networks D. I. Palade August 12, 2023 ===================================================================== § INTRODUCTION Cosmic rays (CRs) are charged particles originating mostly in galactic supernovae remnants and being accelerated by the Fermi mechanism in shock waves <cit.> although, more exotic sources are possible <cit.>. Due to their (ultra)relativistic energies, CRs can permeate systems of all sizes, from the heliosphere, to galaxies and even the extra-galactic medium <cit.>. Consequently, CRs carry important information to our understanding of fundamental high-energy physics, astrophysical magnetic fields, the structure of astrophysical media, space weather, etc. <cit.>. The dynamics of CRs is a long-standing fundamental problem of astrophysics with many open questions that are difficult to solve for several reasons. During their journey from sources to detection, CRs interact mainly with magnetic fields. The latter are highly turbulent <cit.> and, via non-linear interactions with particles, lead to consistent (anomalous) transport phenomena beyond ballistic motion <cit.>. The CR's energies cover a wide range of magnitudes (from MeV to 10^20eV <cit.>) while the turbulent features of the magnetic fields are quite diverse, spanning different space-scales, anisotropies, and fluctuation amplitudes <cit.>. This richness in physical regimes opens a variety of possible transport types ranging from subdiffusive to superdiffusive in the perpendicular and parallel directions, with complicated dependencies <cit.>. Despite this discouraging picture, a lot of progress has been made in the past decades in the topic of turbulent CRs transport. Quasilinear approaches <cit.> and non-linear extensions <cit.> gained a lot of attention due to their technical simplicity and ability to provide insight in the physical mechanisms involved. Unfortunately, such models are known to fail in a lot of relevant cases such as the limit of strong turbulence or the 90 degree problem. A much more accurate approach, which gained momentum in the scientific community, is the method of test-particle simulations either in synthetic <cit.> or MHD generated <cit.> turbulent magnetic fields. Within this approach, the dynamics of CRs is mimicked at the statistical level with an ensemble of fictitious particles that allows for a direct evaluation of the transport coefficients. Its main drawback is the relatively high numerical effort required, which is hardly compatible with the diversity of possible physical regimes. It is clear that the astrophysics community would benefit from a fast and accurate method of prediction in their quest to understand the CR turbulent transport. A helping hand might come from another scientific front. 
In the recent years, we have witnessed the rise of artificial intelligence (AI) methods, in particular machine learning (ML) techniques <cit.>, that are able to provide inferences in various mathematical problems. From the multitude of ML methods, the reader is directed towards artificial neural networks (ANNs) <cit.>. The promise of ML is that, regardless of the chosen technique, if sufficient data is available, it can learn from that data and make fast and reliable prognoses for unknown cases. Such abilities would be equivalent to having analytical expressions at hand and would bypass the technical difficulties that arise in doing many simulations of CR dynamics. Given this scientific context, the purpose of the present work is to illustrate the methodology for developing an artificial neural network designed for predictions of cosmic ray turbulent transport in astrophysical magnetic fields. For the training and testing phases, a database is constructed with the aid of test-particle numerical simulations in synthetically generated fields. Since the main purpose is methodological, we restrict ourselves here to the evaluation of perpendicular diffusion <cit.> in a relatively limited range of physical parameters. Nonetheless, it is the hope of the author that this work will open a path towards more extensive databases and, consequently, more potent ANNs. The rest of this manuscript is organized as follows. The Theory section <ref> describes, briefly, the CR transport model (<ref>), the test-particle numerical method (<ref>), the general architecture of ANNs (<ref>) and the construction of a training and testing database (<ref>). The Results Section (<ref>) presents the convergence properties and the predictive power of the ANN. In the Conclusions section (<ref>), the main findings are resumed and perspectives are discussed. § THEORY The ingredients of an ANN are the programming structure and the training dataset. For our problem, the latter is constituted of input-output pairs representing field-particle properties and diffusion coefficients. Such data is obtained with the numerical method of test particles that move according to a transport model. In this section, these elements, shown schematically in Fig. (<ref>), are discussed in reverse order. §.§ The transport model A cosmic ray is a charged particle characterized by its rest mass m, charge q, position 𝐫(t) and velocity 𝐯(t)=d𝐫(t)/dt in a global reference frame obeying the relativistic Newton-Lorentz equation in the presence of an astrophysical magnetic field 𝐁: d(γ𝐯)/dt =q/m(𝐯×𝐁). The kinetic energy of the particle is T=mc^2(γ-1) where the Lorentz factor γ=(1-v^2/c^2)^-1/2, c is the speed of light and any electric field is neglected 𝐄=0. A cartesian system of coordinates 𝐫≡ (x,y,z) is defined. The magnetic field 𝐁 is decomposable into an average constant component B_0 along Oz and a fluctuating part 𝐛 which is turbulent. Relative to this decomposition we coin the Oz direction as being "parallel" ê_z≡ê_∥ and the plane (x,y) as "perpendicular". Naturally, for any wavevector 𝐤=(k_x,k_y,k_z), k_z≡ k_∥ and (k_x,k_y)≡𝐤_⊥. The velocity 𝐯 = v_∥ê_∥+𝐯_⊥ can be used to define the pitch angle μ = v_∥/v, where v^2=|𝐯|^2. The turbulent component 𝐛 of the total magnetostatic field 𝐁 = B_0ê_∥+𝐛 is represented in the paradigm of a 2D model <cit.>: 𝐛(𝐫) = ê_∥×∇_⊥ a_∥(𝐫). The magnetic vector potential a_∥ is Gaussian, zero-averaged ⟨ a_∥(𝐫)⟩ = 0 and homogeneous <cit.>. 
The last property is evident in the spectrum of its fluctuations: ⟨ã_∥(𝐤) ã_∥(𝐤^')⟩ = S(𝐤)δ(𝐤+𝐤^'). By ã_∥(𝐤) was denoted the Fourier transform of a_∥(𝐫), while ⟨·⟩ stands for statistical average. The overall turbulence amplitude is defined as b = √(⟨𝐛^2(𝐫)⟩) and the spectrum is assumed to be of Kolmogorov type with parallel-perpendicular anisotropic dependencies <cit.>: S(𝐤_⊥,k_∥) ∼(k_⊥λ_⊥)^q(1+(k_∥λ_∥)^2)^-s/2/k_⊥(1+(k_⊥λ_⊥)^2)^(s+q)/2, where q=3, s=5/3 and λ_∥,λ_⊥ are "bend-over" scales, intimately related to coherence/integral scales <cit.>. Within this transport model, realizations of the turbulent field 𝐛 drive associated CR trajectories 𝐫(t) via the eq. of motion (<ref>). The ensemble of trajectories can be used to derive, as statistical averages, the diffusion transport coefficients in any direction Ox: D_x,x(t) = ⟨(x(t)-x(0))^2⟩/2t. In this paper, we are discussing regimes that are strictly diffusive and for which the quantity of interest is the asymptotic perpendicular diffusion coefficient D_⊥ = lim_t→∞D_x,x(t). In this context, the problem of CR turbulent transport is equivalent to asking: what are the diffusion coefficients Y=D_⊥ for any given set of particle-field input parameters X = (T,μ,b,λ_⊥,λ_∥)? For practical reasons related to numerical implementation, all quantities involved in the transport model (eqns. (<ref>)-(<ref>)) are scaled as follows. The magnetic fields 𝐛→𝐛B_0, the time t→ tω_c^-1, the velocities 𝐯→𝐯c, the kinetic energy T→ T mc^2, the space scales (𝐫,λ_i)→ (𝐫,λ_i)ρ_L^0 and the wave-vectors 𝐤→𝐤/ρ_L^0. The cyclotron frequency is defined as ω_c=|q|B_0/m while the "bare" Larmor radius ρ_L^0=mc/(|q|B_0). Consequently, the diffusion coefficients are scaled as D→ D m_0c^2/(qB_0). §.§ Test-particle numerical simulations The present work employs the method of test-particle simulations <cit.> (or direct numerical simulations <cit.>) for the calculus of diffusion coefficients. The main idea is that we can mimic the reality of turbulent dynamics through a numerical representation of the transport model described in the previous subsection (<ref>). A finite ensemble of N_p random fields {𝐛} with appropriate spectral properties (distribution and spectra) is constructed. For each realization of this ensemble, a CR trajectory 𝐫(t) can be computed by solving eq. (<ref>). In the limit of many particles N_p→∞, the transport coefficients are given by statistical averages over the numerical ensemble of trajectories {𝐫(t)} (<ref>). More details on the method can be found in <cit.>. The magnetic vector potential, a_∥, is assumed Gaussian, zero-averaged, and homogeneous; thus, it can be constructed numerically in each realization with a Fourier-like (harmonic) decomposition <cit.>: a_∥(𝐫) = bλ_⊥√(2/N_c)∑_j=1^N_csin(𝐤^j𝐫 + α^j) where α^j are independent random initial phases distributed uniformly in the interval [0,2π). The partial waves 𝐤^j≡(𝐤_⊥^j,k_∥^j) are independent random variables generated with the use of the acceptance-rejection algorithm <cit.> for the PDF S(𝐤). It can be shown that the representation (<ref>) is, indeed, zero-averaged with the correct spectrum (<ref>). In the limit of many partial waves N_c→∞, the Gaussianity is also achieved (provided by the central limit theorem). Solving eq. (<ref>) is a standard numerical problem which is tackled here with a relativistic Boris pusher <cit.>. Magnetic field values are evaluated in each realization directly using eqn. (<ref>). Within the present scaling, the gyro-period is 2π and the time step is chosen δ t = 2π/10. 
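To make the preceding recipe concrete, here is a minimal Python sketch of one synthetic field realization and of the relativistic Boris push with E = 0, written in the paper's scaled units (time in 1/ω_c, velocity in c, length in ρ_L^0, B in B_0). The isotropic Gaussian wavevector sampler is a placeholder of this sketch, standing in for the acceptance-rejection sampling of the anisotropic Kolmogorov spectrum S(k); the harmonic representation of a_∥ and the stated time step are taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_field(b_amp, lam_perp, n_c=64):
    """One realization of B = e_z + b with b = e_z x grad_perp(a_par).
    NOTE: the wavevector sampler below is a placeholder (isotropic Gaussian),
    not the anisotropic Kolmogorov spectrum S(k) used in the paper."""
    k = rng.normal(scale=1.0 / lam_perp, size=(n_c, 3))   # placeholder spectrum
    alpha = rng.uniform(0.0, 2.0 * np.pi, n_c)            # random initial phases
    amp = b_amp * lam_perp * np.sqrt(2.0 / n_c)

    def bfield(x):
        phase = k @ x + alpha
        c = amp * np.cos(phase)
        bx = -(k[:, 1] * c).sum()        # -d(a_par)/dy
        by = (k[:, 0] * c).sum()         #  d(a_par)/dx
        return np.array([bx, by, 1.0])   # mean field B_0 e_z along z
    return bfield

def boris_push(x, u, bfield, dt):
    """One relativistic Boris step with E = 0; u = gamma*v in units of c."""
    gamma = np.sqrt(1.0 + u @ u)
    t_vec = bfield(x) * dt / (2.0 * gamma)            # rotation vector
    s_vec = 2.0 * t_vec / (1.0 + t_vec @ t_vec)
    u_new = u + np.cross(u + np.cross(u, t_vec), s_vec)
    x_new = x + u_new / np.sqrt(1.0 + u_new @ u_new) * dt
    return x_new, u_new

# usage: one particle for a few hundred gyro-periods
bf = make_field(b_amp=0.2, lam_perp=5.0)
x, u = np.zeros(3), np.array([0.5, 0.0, 0.5])
dt = 2.0 * np.pi / 10.0                               # time step from the text
for _ in range(1000):
    x, u = boris_push(x, u, bf, dt)
```

The pure magnetic Boris rotation conserves |u| exactly, which is why such a coarse step (one tenth of a gyro-period) can remain stable over long integrations.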
This level of time discretization was found to be sufficient for an accurate prediction of the Larmor rotation and CR dynamics. Practical experience has shown that using N_p = 10^4 and N_c=500 in a simulation is enough to achieve acceptable Gaussianity, Eulerian, and Lagrangian convergence, thus reliable results on transport. §.§ Artificial neural networks In this section, the generic features of an ANN architecture that will be used for predicting the transport of CRs in turbulent magnetic media are described. Its structure is represented graphically in Fig. (<ref>). For more details, see <cit.>. A general ANN is a computing setup consisting of m+2 "layers", each layer i contains n_i "neurons" and each neuron j is described by a numerical value Z_i,j, a "bias" β_i,j, an "activation function" f_i,j and some "weights" W_i,j,k. The latter connects recursively neurons from neighbouring layers, thus, W is an irregular tensor of (m+1)× n_i× n_i-1 dimension. The first layer, i=0, contains the values of the input variable X, Z_0,j = X(j). The last layer, i=m+1 contains the values of the output variable Y, Z_m+1,j = Y(j). The rest are coined "hidden layers". Within an ANN there is a feed-forward recurrent relation between layers: Z_i,j = f_i,j(∑_k=1^n_i-1W_i,j,k.Z_i-1,k+β_i,j) The purpose of an ANN is to model the true relation (which is unknown) between the input variable X and the output Y. This is achieved by finding appropriate weights and biases {W_i,j,k,β_i,j} that minimize an error function E between the output of the network, Z_m+1, and the real output, Y. This must be done over a dataset of N_d pairs {X^p,Y^p}_p=1,N_d which should be reasonably large. The error function E is defined as Minkowski error: E = 1/n∑_p=1^N_d∑_j=1^n_m+1(Z_m+1,j^p-Y^p(j))^2 . The error minimization is reached iteratively, over multiple "epochs", using ADAM <cit.>, a modified version of gradient descent algorithm <cit.>. In this approach, during each iteration (epoch), the weights and biases {W_i,j,k, β_i,j} (denoted generically θ), are updated using a "learning"/"training" procedure: v_θ = α_1 v_θ + (1-α_1)(∂ E/∂θ)^2 m_θ =α_2 m_θ/1-α_1^epoch+1-α_2/1-α_1^epoch(∂ E/∂θ) θ = θ - γm_θ/ε+√(v_θ) where γ, α_1,α_2, ε are parameters controlling the learning rate and the convergence of the ANN. The "moments" v_θ,m_θ are initialized to zero, while the weights and biases θ≡{Ŵ_i,j,k, β_i,j} are normal random variables with zero average and unit variance. The ADAM algorithm (<ref>)-(<ref>) should find, asymptotically, the global minimum of the error function E. This means that the ANN should predict output values Z_m+1^p as suitable approximations for the real Y^p, ∀ p=1,N_d and beyond. Based on numerical experience, the optimal values used for the present ANN are m = 2 hidden layers and n_0=3, n_1=10, n_2=5, n_3=1 neurons in each layer. The activation functions are f_i,j(z) = tanh(z), ∀ i=1,m, j=1,n_i and f_m+1,j(z) = z^2, ∀ j=1,n_m+1 for the hidden, respectively, output layers. The parameters for ADAM are γ = 0.001, α_1=0.9,α_2=0.95,ε=10^-7. §.§ Defining the database At this point, we must recall how the question of CR turbulent transport fits into the formal description of an ANN. The eqns. of the model (<ref>)-(<ref>) have a set of free parameters, namely (T,μ, b, λ_⊥,λ_∥). In our simulations, we chose μ=0.5 and λ_∥ = 10λ_⊥, values relevant to observations from solar winds. The rest, X = (T, b, λ_⊥) represents the input variable from ANN's perspective. 
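Before turning to the database itself, the architecture and training rule described above can be summarized in a NumPy sketch: a 3-10-5-1 feed-forward network with tanh hidden units and a quadratic output unit, trained with Adam using the parameter values quoted in the text (γ = 0.001, α_1 = 0.9, α_2 = 0.95, ε = 10^-7). The random (X, Y) pairs at the bottom merely stand in for the simulation database; the bias correction is written in the textbook Adam form, which is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# 3-10-5-1 network; weights and biases initialized as standard normals, as in the text
sizes = [3, 10, 5, 1]
W = [rng.normal(size=(n_out, n_in)) for n_in, n_out in zip(sizes[:-1], sizes[1:])]
b = [rng.normal(size=n_out) for n_out in sizes[1:]]
mW = [np.zeros_like(w) for w in W]; vW = [np.zeros_like(w) for w in W]
mB = [np.zeros_like(x) for x in b]; vB = [np.zeros_like(x) for x in b]

def forward(x):
    z1 = np.tanh(W[0] @ x + b[0])
    z2 = np.tanh(W[1] @ z1 + b[1])
    pre = W[2] @ z2 + b[2]
    return x, z1, z2, pre, pre ** 2          # output activation f(z) = z^2

def gradients(x, target):
    """Backpropagation for the squared (Minkowski, exponent 2) error of one sample."""
    x, z1, z2, pre, y = forward(x)
    g3 = 2.0 * (y - target) * 2.0 * pre      # dE/d(pre3): chain rule through f(z)=z^2
    g2 = (W[2].T @ g3) * (1.0 - z2 ** 2)
    g1 = (W[1].T @ g2) * (1.0 - z1 ** 2)
    return [np.outer(g1, x), np.outer(g2, z1), np.outer(g3, z2)], [g1, g2, g3]

def adam_step(grads, params, m, v, t, lr=1e-3, b1=0.9, b2=0.95, eps=1e-7):
    """Standard Adam update; hyperparameters follow the values quoted in the text."""
    for g, p, mi, vi in zip(grads, params, m, v):
        mi[:] = b1 * mi + (1 - b1) * g
        vi[:] = b2 * vi + (1 - b2) * g * g
        p -= lr * (mi / (1 - b1 ** t)) / (np.sqrt(vi / (1 - b2 ** t)) + eps)

# toy training loop on random pairs standing in for the simulation database
X = rng.uniform(size=(100, 3)); Y = rng.uniform(size=(100, 1))
for epoch in range(1, 201):
    dWs = [np.zeros_like(w) for w in W]; dbs = [np.zeros_like(x) for x in b]
    for xi, yi in zip(X, Y):
        dW, db_ = gradients(xi, yi)
        for acc, g in zip(dWs, dW): acc += g
        for acc, g in zip(dbs, db_): acc += g
    adam_step(dWs, W, mW, vW, epoch)
    adam_step(dbs, b, mB, vB, epoch)
```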
For each set of parametric values X, associated perpendicular diffusion coefficients are obtained as output Y=D_⊥. In order to train and test the ANN, we need to construct a database of N_d pairs (X^p,Y^p)_p=1,N_d. The latter is obtained through the test-particle method described in Section (<ref>). An appropriate number of N_d = 5× 10^3 simulations has been performed. Note that this task is far more demanding than constructing and training the ANN, since it requires a much larger volume of CPU computing time. The n_0-dimensional space of X parameters must be truncated to a finite domain. This is done by choosing a cube of limiting values: T∈(0, 10) which for protons corresponds to a kinetic energy (0eV, 10GeV), b∈(0, 1) corresponding to (0,B_0) amplitude of fluctuations, λ_⊥∈(0.2,20). Note that for a proton, T=1 corresponds to ≈ 1GeV energy and a scaled Larmor radius ρ_L≈ 1.7, thus, comparable with most values of correlation lengths, λ_⊥. In order to avoid spurious biases, the values {X^p} are generated randomly uniform inside the parametric hypercube. It is important to observe that the database can be completed with analytical results. For example, at b=0→ D_⊥=D_∥ = 0. Approximately 5-10% of the database can be filled with such exact results. If this fraction is too large, the hypercube becomes non-uniformly filled, and the ANN learns predominately the analytical values, which are of no interest. From the total database, 90% of the (X,Y) pairs are used for training, while the remaining 10% will serve as testing grounds for the assessment of ANN's predictive abilities. § RESULTS §.§ Turbulent transport The transport model has been used in the framework of the test-particle method to evaluate CR trajectories and, consequently, perpendicular diffusion coefficients Y=D_⊥ for different random combinations of free input parameters X≡(T,b,λ_⊥) values. The equations of motion (<ref>) are fully relativistic and do not rely on the guiding-center approximation <cit.>. Consequently, particle trajectories describe the full Larmor rotation induced by the average magnetic field B_0 as well as the "scattering" in the fluctuating field 𝐛 which is 2D. This level of description is needed whenever the characteristic length scales of fluctuations, in our case λ_⊥, are comparable with the Larmor radius ρ_L. This is precisely the case for the database since for most energies ρ_L∼ 1 (in the scaling described in Section (<ref>)) while λ_⊥∈(0.2,20). In Figs. (<ref>)-(<ref>) are shown typical CR trajectories in the perpendicular plane (Fig. (<ref>)) and in full 3D space (Fig. (<ref>)). Due to the fact that the pitch angle has been set to μ=0.5 in all simulations, the particles propagate along the parallel direction with larger velocities than the perpendicular guinding-center drifts. The Larmor rotation and the scattering due to fluctuations are obvious. The running diffusion coefficient D_⊥(t) related to the mean-square-displacement of particles, ⟨ x^2(t)⟩, is computed accordingly with eq. (<ref>). Another, more frequently used, formula for diffusion is d(t)=∂_t⟨ x^2(t) ⟩/2. The reason for which D_⊥(t) is used here instead of d_⊥(t) is due to the fact that we capture Larmor rotations in our model. This component of motion exhibits large fluctuations at small times in d_⊥(t). D_⊥(t), on the other hand, due to its functional form, suppresses the effect of rotations very fast as well as the statistical fluctuations at long times. Nonetheless, both forms converge asymptotically to the same value. 
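A minimal estimator of the running diffusion coefficient from an ensemble of trajectories might look as follows; the array layout (particles by time samples) is an assumption of this sketch. The asymptotic D_⊥ is then read off by averaging the tail of D_xx(t) once it has saturated.

```python
import numpy as np

def running_diffusion(trajs, dt):
    """trajs: array (N_p, N_t) of one perpendicular coordinate x(t) for the
    ensemble of test particles.  Returns times, D_xx(t) = <(x(t)-x(0))^2>/(2t),
    and the alternative d(t) = d<x^2>/dt / 2 for comparison."""
    disp2 = (trajs - trajs[:, :1]) ** 2        # squared displacement per particle
    msd = disp2.mean(axis=0)                   # ensemble-averaged MSD
    t = dt * np.arange(1, trajs.shape[1])
    D_run = msd[1:] / (2.0 * t)                # skip t = 0 to avoid division by zero
    d_alt = np.gradient(msd, dt) / 2.0
    return t, D_run, d_alt[1:]

# D_perp ~ mean of the tail of D_run after saturation, e.g. D_run[-len(D_run)//10:].mean()
```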
All these features can be seen in Fig. (<ref>) where two typical time dependent diffusion profiles are shown for the same set of free parameters. It must be emphasized that our database describes only pure diffusive regimes (in the perpendicular plane). In general, anomalous behaviour is possible as it was described, many times, in literature <cit.>. In fact, the finite parallel correlation length λ_∥ = 10λ_⊥ is responsible for the decay of the Lagrangian correlation and, consequently, the saturation of the running diffusion to a constant value, thus, diffusive transport. The mechanism is simple: the parallel motion of CRs in fields with parallel dependence induces an effective decorrelation time τ_c ≈λ_∥/v_∥. Describing subdiffusive or superdiffusive transport remains an important task for future developments of ANNs. §.§ Convergence properties of the ANN Once the database has been build and the ANN constructed (programmed), we are all set to start the training phase. A random configuration of initial weights and biases {W_i,j,k,β_i} is chosen and the minimization procedure (via ADAM algorithm) begins. The random nature of initial {W_i,j,k,β_i} is needed to avoid setting the ANN in a configuration point from which won't be able to converge to the global minimum. Due to the same reason, different initializations lead to distinct paths of convergence. There is also a question of weather the true global minimum is achieved <cit.> or the algorithm gets stuck a neighbouring local minimum. Regardless, the practical experience has shown that the asymptotic states are appropriate approximations of the real minimum. In Fig. (<ref>) it is shown a typical evolution of the error (<ref>) relative to the norm of output values. The learning stage is fast, approximately 5 s on a typical personal CPU. One must note that the error function is almost always decreasing, but the evolution can manifest periods of quasi-plateau, as the one between 400-900 epochs in Fig. (<ref>). For this reason, it is important to allow for long periods of minimization, in order to avoid confusing a local minima (the plateau) with the global one. The number of particles propagated within a single simulation in this work is set to N_p =10^4. Looking at Fig. (<ref>), it might seem that this value is unnecesary since the diffusion coefficient D_⊥(t) saturates to a nice, constant, plateau without temporal fluctuations. While this is true, this does not mean that the ensemble is sufficiently well represented and, consequently, that the asymptotic value is statistically robust (without fluctuations). In other words, if N_p is too small, for two identical simulations we might obtain relatively different diffusions. This would have a detrimental impact on the database, introducing a spurious numerical fluctuation and making the job of convergence much more difficult. When the ADAM algorithm reaches the final plateau and an acceptable small error, one is ready to enter the testing phase. Now, the ANN, that is to say the values of {W_i,j,k,β_i} obtained after minimization, can be used to evaluate diffusion coefficients for the remaining 10% input, X^test, of the database that was not used in the training stage. The predicted values, Z_m+1^test, are compared with exact output diffusions Y^test from the database. Fig. (<ref>) shows the histogram of relative errors between ANN's predictions and real output, Err = 100(Z_m+1^test/Y^test-1). 
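For reference, the train/test split and the relative-error statistic just defined can be computed along the following lines (a schematic sketch; the function names are illustrative):

```python
import numpy as np

def split_database(X, Y, frac=0.9, seed=0):
    """90/10 split of the simulation database into training and testing parts."""
    idx = np.random.default_rng(seed).permutation(len(X))
    n = int(frac * len(X))
    return (X[idx[:n]], Y[idx[:n]]), (X[idx[n:]], Y[idx[n:]])

def relative_errors(y_pred, y_true):
    """Err = 100*(prediction/exact - 1); its standard deviation gives the
    overall accuracy figure (~5%) quoted in the text."""
    err = 100.0 * (np.asarray(y_pred) / np.asarray(y_true) - 1.0)
    return err, err.std()
```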
While there are points with relatively large errors (usually those of very small diffusion Y^test≪ 1), the overall accuracy of the ANN can be estimated with the second moment of the histogram and it is close to 5%. When comparing computing times between test-particle simulations, t_sims, and ANN, t_ANN, the latter shows its true power: for a single diffusion coefficient on a two-processor CPU t_sim≈ 2s while t_ANN≈ 10^-7s. Thus, ANNs are 10^7 times faster than numerical simulations. §.§ Making predictions with the ANN The histogram of errors obtained in the testing phase (<ref>) suggests a global error of ∼ 5%. But this is not enough to ensure an acceptable accuracy of the ANN. In fact, there are points which have departed more than 10% from the exact values. Moreover, these errors might be a serious liability if they are clustered in some special way inside the parametric space of input variables. In order to test whether this is the case, we must look for reduced spaces inside the database and evaluate if the ANN's predictions are a good fit for the real data. We do that on two distinct levels: first we look at two-dimensional dependencies of D_⊥ varying two parameters at a time, than, we look at simple dependencies between D_⊥ and only one parameter. In Figs. (<ref>)-(<ref>) we compare ANN output (red dots) with exact values (blue surface, obtained with test-particle simulation). Fig. (<ref>) uses as variables the magnetic turbulence amplitude b and the perpendicular length λ_⊥ while the energy is set to T=1. Fig. (<ref>) sets b=0.2 and varies λ_⊥ and T, while in Fig. (<ref>) λ_⊥ = 5. As one can see, all predicted values (red) lie close to the surface of exact diffusions (blue) in all cases. Thus, there is no special parametric domain where errors tend to accumulate and the ANN exhibits similar accuracy across the entire database. In Figs. (<ref>)-(<ref>) we set two parameters to constant values and vary only the third, as it follows: Fig. (<ref>) uses b=0.2, λ_⊥=5, Fig. (<ref>) T=1, λ_⊥=5 and Fig. (<ref>) T=1, b=0.2. The overlap between predictions (red) and exact data (blue) are, again, consistently accurate across all simulations. Furthermore, at the level of such single parameter dependencies, we can understand how using an ANN tool could enhance our ability to investigate and understand the physical processes at play. For example, having easy access to the red curve in Fig. (<ref>) allows one to observe the maxima in the diffusion profile and infer about the existence of two competing mechanisms in the influence of particle energy. One is related to the monotonic increase of parallel velocity v_∥ with T which increases the transport, while the other is connected to the finite Larmor radius effects that tend to decrease diffusion <cit.>. § CONCLUSIONS The present work described a methodology for building artificial neural networks designed for predictions of cosmic ray transport in turbulent magnetic fields. The ANN developed here has a standard architecture with two hidden layers and tanh/quadratic activation functions. It uses the ADAM algorithm for optimizations in the learning phase. The input data in the training/testing database consists of the values of free parameters for CRs and turbulence, chosen inside a (hyper)cube of convenience. The output is represented by associated diffusion coefficients. The values of the latter are evaluated using a transport model which is numerically tackled with the aid of test-particle simulations. 
The learning stage showed fast and reliable convergence. In the testing phase, a good fit between the exact and the ANN-predicted data was found, with overall errors of ≈ 5%. The predictive power of the ANN is further demonstrated by the dependencies of the transport coefficients on individual parameters. The most striking feature of the network is its speed: it computes diffusion coefficients in bulk approximately 10^7 times faster than test-particle simulations. One might argue that the database constructed here is too limited: the model of turbulent fields (2D, with a specific spectrum and linked correlation lengths λ_∥ = 10λ_⊥) is too simple to represent most astrophysical regimes, and the numerical range of parameters (energy, correlation lengths, field strengths) is quite narrow. This is all true, but the purpose of this work was not to exhaust wide physical regimes. It was the author's intention to present a proof of concept for the use of machine learning techniques in astrophysics, applied to a particular problem. Hopefully, the community will explore this tool in future work, using more sophisticated transport models and wider databases and thus developing more potent ANNs. Given the numerical effort required to build large databases, collaborative efforts are likely to be needed. A word of caution: developing ANNs to evaluate quantities of interest does not by itself constitute real knowledge, since it provides no insight into the physical processes or quantitative mechanisms at play. Nonetheless, for practical applications where predictions matter, ANNs might be the tool of choice. Finally, even for more academically oriented questions, having a fast tool at hand to explore different regimes could be invaluable in the quest to disentangle the physical mechanisms at work. § STATEMENT The database and the ANN developed in this study are available upon reasonable request from the author. § ACKNOWLEDGEMENTS This research was partially supported by the Romanian Ministry of Research, Innovation and Digitalization under the Romanian National Core Program LAPLAS VII (contract no. 30N/2023).
http://arxiv.org/abs/2307.03972v1
20230708131059
Evaluating the Capability of Large-scale Language Models on Chinese Grammatical Error Correction Task
[ "Fanyi Qu", "Yunfang Wu" ]
cs.CL
[ "cs.CL" ]
Evaluating the Capability of Large-scale Language Models on Chinese Grammatical Error Correction Task Fanyi Qu, Yunfang Wu August 12, 2023 ============================================== Large-scale language models (LLMs) have shown remarkable capability on a variety of Natural Language Processing (NLP) tasks and have attracted much attention recently. However, some studies indicate that large language models fail to improve over state-of-the-art models on English grammatical error correction (GEC) tasks. In this report, we aim to explore how large language models perform on Chinese grammatical error correction tasks and to provide guidance for future work. We conduct experiments with 3 LLMs of different model scales on 4 Chinese GEC datasets. Our experimental results indicate that the performance of LLMs on automatic evaluation metrics (e.g., the F_0.5 score) falls short of previous sota models because of the problem of over-correction. Furthermore, we also discover notable variations in the performance of LLMs when evaluated on different data distributions. Our findings demonstrate that further investigation is required for the application of LLMs to the Chinese GEC task. § INTRODUCTION Building on InstructGPT <cit.>, ChatGPT has demonstrated a powerful ability to understand complex instructions and generate reasonable responses on various NLP tasks. Following the technical trajectory of ChatGPT, a significant number of high-quality LLMs have recently emerged in both academia and industry, such as LLaMA <cit.>, ChatGLM <cit.> and PaLM <cit.>. Previous studies have found that these LLMs achieve strong performance on a wide range of NLP tasks, including machine translation <cit.>, named entity recognition <cit.> and text summarization <cit.>. Several studies have undertaken comprehensive investigations into the performance of LLMs in the domain of English grammatical error correction, yielding some interesting findings <cit.>: LLMs are not able to outperform sota models in terms of automatic evaluation metrics. This is primarily because LLMs tend to make unnecessary modifications to make the input sentences more fluent, which may result in an over-correction problem and, in some cases, even alter the original semantics of the input sentences. In this report, we explore the performance of LLMs on the Chinese GEC task. We conduct experiments on several LLMs to investigate the influence of model size on GEC results. Additionally, we adopt test datasets from various data sources to explore the impact of data distribution on the outcomes. § EXPERIMENTAL SETUP §.§ Dataset We conduct experiments on four Chinese GEC datasets to provide a comprehensive demonstration of LLMs' capability. The detailed statistics of these datasets are shown in Table <ref>. §.§.§ GEC data from Chinese learners We apply the test set of NLPCC-2018 <cit.> and the validation set of MuCGEC <cit.> for evaluation. These two datasets collect the grammatical errors made by foreigners in the process of learning Chinese. §.§.§ GEC data from Chinese native speaker examinations We apply the validation set of FCGEC <cit.> and the validation set of NaCGEC <cit.> for evaluation. These two datasets are collected from Chinese native speakers' language examinations. §.§ Model We conduct experiments on 3 LLMs with different model scales: * ChatGPT[https://platform.openai.com/docs/api-reference]: We evaluate the performance of ChatGPT with OpenAI's official API.
We choose gpt-3.5-turbo as the evaluated model, as it is the most advanced variant and is specifically optimized for chat. * ChatGLM-6B <cit.>: ChatGLM is an open bilingual language model based on the GLM framework, optimized for Chinese QA and dialogue, and exhibits a robust capacity for Chinese understanding. * LLaMA-7B <cit.>: LLaMA is a collection of foundation LLMs ranging from 7B to 65B parameters proposed by Meta AI. We applied the 7B model for evaluation. §.§ Evaluation Metric We evaluate model performance with Precision, Recall and F_0.5 at the word level and the character level, respectively. We adopt the official implementation of the MaxMatch (M^2) <cit.> scorer to calculate the word-level F_0.5 score and choose PKUNLP as our word segmentation tool. We apply ChERRANT [https://github.com/HillZhang1999/MuCGEC/tree/main/scorers/ChERRANT] for char-level metric calculation. §.§ Prompt Considering the differences in performance among large language models, we designed different prompts for each of them. These prompts are roughly the same in semantics but differ in some details. The prompts are shown in Figure <ref>. §.§ Setting details We set the temperature to 0.6 when querying ChatGPT to obtain reliable generations. For ChatGLM-6B and LLaMA-7B, we conduct experiments on 4 NVIDIA GeForce 3080 Ti GPUs. § EXPERIMENT RESULTS The experiment results are shown in Table <ref>. Several results are worth discussing. First, different data sources lead to distinct evaluation results. LLMs exhibit significantly better performance when evaluated on Chinese learner data (NLPCC and MuCGEC) than on Chinese native speaker examination data (FCGEC and NaCGEC). According to our observations, the grammatical errors made by Chinese learners primarily involve the misuse of similar words or phrases rather than incorrect sentence structures. In contrast, GEC data from Chinese native speaker examinations maintains a higher level of regularity and consists of more complex structural errors. It is noteworthy that there are gaps between GEC data from Chinese examinations and Chinese native speakers' daily language use. Second, different model scales also lead to distinct performance. The consistent trend is that ChatGPT performs similarly to the two smaller models on Precision while achieving a significant improvement in Recall. This implies that the evaluated LLMs have similar error correction capability, while their error detection ability differs considerably. Third, there still exist large gaps between state-of-the-art models and LLMs on automatic evaluation metrics. Previous work <cit.> has identified the problem of over-correction for LLMs, which we also observe in our experiments. Moreover, it is hard to explain why the char-level evaluation metrics are significantly lower than the word-level metrics, an effect not noted in previous work. § CONCLUSION In this report, we explore the performance of various LLMs on the Chinese grammatical error correction task. Experimental results indicate that there still remains a gap between the performance of LLMs and that of current sota models. Furthermore, LLM performance is greatly impacted by the distribution of the test data. Future work can focus on addressing the over-correction problem of LLMs and exploring the untapped potential of LLMs in the field of grammatical error correction.
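For concreteness, a minimal sketch of the querying step described in the experimental setup above is given below, using the pre-1.0 openai Python client. The prompt string is a placeholder, not the exact wording used in this report (the actual prompts are those shown in Figure <ref>), and the API key handling is schematic.

```python
import openai

openai.api_key = "YOUR_API_KEY"          # placeholder

# Illustrative instruction only -- the real prompts are the ones in Figure <ref>.
PROMPT = "请纠正下面句子中的语法错误，直接输出修改后的句子：{sentence}"

def correct(sentence: str) -> str:
    """Query gpt-3.5-turbo for a corrected sentence (pre-1.0 openai client)."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.6,                 # value used in this report
        messages=[{"role": "user", "content": PROMPT.format(sentence=sentence)}],
    )
    return resp["choices"][0]["message"]["content"].strip()
```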
http://arxiv.org/abs/2307.05065v1
20230711071230
Metatickles and Death in Damascus
[ "Saira Khan" ]
cs.MA
[ "cs.MA", "cs.LO" ]
Metatickles and Death in Damascus Saira Khan August 12, 2023 ============================================================================================================== The prescriptions of our two most prominent strands of decision theory, evidential and causal, differ in a general class of problems known as Newcomb problems. In these, evidential decision theory prescribes choosing a dominated act. Attempts have been made at reconciling the two theories by relying on additional requirements such as ratification (<cit.>) or “tickles” (<cit.>). It has been argued that such attempts have failed (<cit.>; <cit.>). More recently, Huttegger (<cit.>) has developed a version of deliberative decision theory that reconciles the prescriptions of the evidentialist and causalist. In this paper, I extend this framework to problems characterised by decision instability, and show that it cannot deliver a resolute answer under a plausible specification of the tickle. I prove that there exists a robust method of determining whether the specification of the tickle matters for all two-state, two-act problems whose payoff tables exhibit some basic mathematical relationships. One upshot is that we have a principled way of knowing ex ante whether a reconciliation of evidential and causal decision theory is plausible for a wide range of decision problems under this framework. Another upshot is that the tickle approach needs further work to achieve full reconciliation. § INTRODUCTION Decision theory offers a normative framework for determining rational choice. Its primary components are a set of beliefs (probabilities) over states of the world and a set of valuations (utilities) over the different outcomes of acts in these states of the world. Two prominent forms of decision theory are the causalist and the evidentialist approaches. Causal decision theory determines rational action by evaluating what an agent can expect to bring about by her action. Evidential decision theory determines rational action by evaluating what evidence an agent's action provides her with. The theories prescribe different acts as rational in a class of problems known as Newcomb problems. It is frequently held that the causalist prescription is the correct one (<cit.>; <cit.>; <cit.>; <cit.>).[Though some, such as <cit.>, <cit.> and <cit.>, support the evidentialist conclusion.] The characteristic feature of Newcomb problems is that there is a correlation between state and act such that choosing the act is understood to be good evidence for a state of the world. The result is that evidentialism prescribes choosing an act which is strictly worse in both states of the world. The evidentialist recognises that the agent cannot causally bring about a different state of the world, but denies that causality is important for practical rationality (<cit.>). Rather, the rational act should be based on its “news value”. That is, an agent ought to prefer one proposition to another just in case she would rather learn the former than the latter. In light of criticism of this position, attempts have been made, notably by Jeffrey (<cit.>) and Eells (<cit.>), to amend evidential decision theory to better accord with causalist prescriptions.
In this paper, I focus on a version of reconciliation developed by Huttegger (<cit.>) and show that it cannot reconcile evidential and causal decision theory without further, questionable assumptions. Huttegger uses an idea due to Eells called the “tickle” defence: that the evidentialist becomes increasingly confident that the state of the world is not causally dependent on her act as a result of knowledge of her beliefs and desires. However, Huttegger employs the deliberative apparatus developed by Skyrms (<cit.>) and thus overcomes some objections to the original Eellsian approach.[In particular, the assumption that the agent access to a proposition which fully describes her beliefs and desires. Under Huttegger's approach, this is not assumed but rather reached through a process of deliberation.] Section 2 of this paper expounds the technical differences between causal and evidential decision theory and briefly outlines two decision problems: the Newcomb problem and Death in Damascus. Section 3 discusses Eells' approach to resolving the difference between the evidentialist and causalist prescriptions and details Huttegger's proposed amendment using deliberative dynamics. Huttegger's approach delivers the (commonly considered) correct answer for the evidentialist in the Newcomb problem. Section 4 considers the same framework applied to a class of problems characterised by decision instability. These are where, as soon as the agent leans toward performing one action, the other looks preferable. In more technical terms: there is no dominant act (no act which is preferred regardless of the state of the world) and every act is in principle causally unratifiable (after we have chosen the act we would prefer to have chosen otherwise). In particular, I consider a decision problem known as Death in Damascus (<cit.>). When the payoff table is symmetric, the received view is that both naïve evidentialism and naïve causalism (without any deliberative dynamics) remain silent on which is the correct act to perform. When it is asymmetric, the evidentialist is decisive whereas the causalist is trapped in a state of indecision. A more sophisticated (deliberative) causalist may settle upon choosing an act with probability slightly less than 0.5. In this paper, we see that Huttegger's framework, when applied to this problem, cannot straightforwardly reconcile the evidentialist prescription with the prescription of the causalist (both sophisticated and naïve). In Section 5, I offer an original analysis of the deliberative framework to explicate why it is irresolute in the Death in Damascus problem, and prove some general facts about its irresoluteness given a plausible version of the dynamical process, which I call the shortest-path independence dynamics. I identify the existence of what I call the plane of indifference in all two-act, two-state decision problems which exhibit the basic mathematical structure of either Newcomb or Death in Damascus problems. The key insight is that the specification of the tickle matters only depending on the positioning of this plane of indifference. In particular, regardless of the precise operation of the tickle during deliberation – shortest-path or not – the positioning of the plane in the Newcomb problem renders it the case that deliberation will always lead us to the same conclusion. This is not so in Death in Damascus and reconciliation of evidential and causal decision theory here requires more questionable assumptions. 
Section 6 discusses the status of reconciliation and the importance of the proof of the indifference plane for future work in deliberative decision theory. Section 7 concludes an offers a view on the status of the Eellsian project. § THE DECISION PROBLEMS The canonical form of evidential decision theory is attributable to Jeffrey (<cit.>). Under his framework, states of the world, acts and outcomes are all propositions of the same kind, forming a Boolean algebra. Probabilities and desirabilities may be applied to any of these propositions. Call the Boolean closure of the set of acts, states and outcomes, the decision-relevant propositions. The agent's conditional expected utility of an act is calculated from her probabilities and desirabilities for maximally specific decision-relevant propositions. Formally, the evidential decision theorist prescribes performing the act, A, that maximises the following conditional expected utility formula, where D denotes desirability, P denotes probability, and S, the state of the world. EU_evid(A) = ∑_i D(S_i & A)P(S_i|A) There are multiple versions of causal decision theory.[Most notably, the subjunctive accounts of Stalnaker (<cit.>) and Gibbard and Harper (<cit.>), as well as the non-subjunctive accounts of Skyrms (<cit.>) and Lewis (<cit.>).] For simplicity, I present Lewis' (<cit.>) account. Like the traditional decision-theoretic framework of Savage (<cit.>), states, acts and outcomes are not propositions of the same Boolean algebra but are separate entities. Probabilities attach to states of the world, and desirabilities or utilities, to outcomes. Lewis builds on the Savage framework but introduces dependency hypotheses which determine the appropriate partition of the state space. A dependency hypothesis is defined as the maximally specific proposition about how outcomes do, and do not, causally depend on the agent's present acts. Formally, the causal decision theorist prescribes performing the act, A, that maximises the following expected utility formula relative to the partition given by the dependency hypothesis.[The merit of the evidential approach is that it is partition invariant and it is much less sensitive to the formal specification of the decision problem. Indeed, it is a more general framework that can be reduced to Savage's decision theory under correct specification of the state space. In comparison, in many causal decision theories, the decision problem must be specified in such a way that each state-act pair is guaranteed to lead to a unique outcome; there is state-act independence; and the desirabilities of the outcomes are not influenced by the state-act pair which eventuated them. None of these restrictions are required in the evidential framework. See Eells (<cit.>) for discussion.] EU_caus(A) = ∑_i D(S_i & A)P(S_i) I now present two decision problems. One which has caused particular worry for the evidentialist, and one which has caused worry for both theories, though it is more frequently levied against the causalist (<cit.>). The first, Newcomb's problem, can be described as follows (<cit.>). Tomas is in a room with two boxes, one of which is opaque and one of which is transparent. Under the transparent box lies $1,000. Under the opaque box, there is either nothing or $1,000,000 and Tomas does not know which. He is offered the option to take either only the opaque box, or both the transparent one and the opaque one. 
The catch is that there is a predictor who, if she predicts Tomas chooses only the opaque box puts $1,000,000 under it and, if she predicts he chooses both boxes, puts nothing under it. Tomas believes the predictor is reliable. The payoff table is illustrated Table 1. In this decision problem, the causalist recommends taking both boxes, as it can be seen that this act strictly dominates taking only the opaque box. That is, it has higher expected utility under both states of the world. The naïve evidentialist, however, recommends taking only the opaque box, as choosing only the opaque box is good evidence that the predictor put $1,000,000 there. In this decision problem, the evidentialist seems to prescribe the wrong answer and Tomas loses out on a guaranteed $1,000. The Death in Damascus problem is as follows (<cit.>). Death works from an appointment book which specifies a time and a place. If and only if Tereza happens to be in the time and place when Death is there, she dies. Suppose Tereza is in Damascus and she accidentally bumps into Death. He tells her that he is coming for her tomorrow. Her options are either to stay where she is or to flee to Aleppo. The catch is that Death is a reliable predictor of where she will be, so as soon as Tereza believes it is better for her to flee, this constitutes good evidence that Death's appointment for her is in Aleppo and it seems as though she should stay. Analogously, however, if she decides to stay, this constitutes good evidence that Death knows that she stays and so she would be better off fleeing. The problem is therefore one of decision instability. The moment Tereza becomes confident in one option, the other appears more attractive. Here, I consider an asymmetric problem where the cost of fleeing is 1 util. The payoff table is given in Table 2, where we assign 10 utils to Tereza's survival.[While only the asymmetric case is presented in this paper, for completeness, the symmetric case was also analysed. This exhibits multiple lines of equilibria on the faces of the dynamical cube and therefore constitutes greater instability on the boundary than the asymmetric case. However, some would deny that indecision in such a circumstance constitutes a flaw in the theory. See, for example, <cit.>.] In this decision problem, the naïve evidentialist believes that, as Tereza's act is good evidence of the state of the world no matter what she chooses, she ought to stay in Damascus, since she should not pay the extra 1 util to flee to Aleppo. The causalist, however, believes that staying is irrational as it will put the agent in a position from which fleeing looks superior. She is therefore in a state of decision instability. Gibbard and Harper (<cit.>) argue that this is the correct answer as neither choice is ratifiable. Other forms of causal decision theory, for example, the deliberative framework of Joyce (<cit.>) or Arntzenius (<cit.>), prescribe the mixed act of fleeing with probability 0.474.[This is derived using Joyce's (<cit.>) framework for Murder Lesion applied to Death in Damascus assuming conditional probabilities P(S2|A2) = P(S1|A1) = 0.99. Under this framework, one's unconditional probabilities are revised in light of the expected utility calculation of an act in conjunction with the probabilistic correlation between state and act. 
More precisely, let α be a real number, P_t+1(S2) = P_t(S2|EU_t(A2) = α) ≠ P_t(S2) when α≠ 0, so the probability of a state of the world is updated based on its probability conditional upon the expected utility of an act. Further, let x and y be real numbers, if P_t(A2) < 1 and x > y, then P_t(A2|EU_t(A2) = x & EU_t(∼ A2) = y) > P_t(A2), so the choice probability of an act is updated based on its expected utility. The iterative process of updating one's choice probability will continue in this fashion until P_t(A2) = P_t+1(A2) = P_t(A2|EU_t(A2)), so information about its expected utility does not change its choice probability. As in Skyrms' (<cit.>) deliberational framework, this occurs when the expected utility of the two acts are equal.] In Skyrms' and Huttegger's deliberative dynamics, the agent only has access to pure acts and is therefore in a state of indecision when deliberation assigns an act probability of less than 1. In Joyce's framework, the mixed act is a choice for the agent should she have access to a random chance device she may use to pick her final, pure act. That is, a chance device which will determine that she flees with probability 0.474. One might ask whether the evidentialist should be reconciled with the naïve causalist or deliberative causalist. If we sought similar instability as the naïve causalist, it will be clear from the analysis which follows that this will not be achieved: in many cases the deliberative evidentialist is decided. So I ask whether reconciliation with the Joycean causalist is possible on Huttegger's model – whether evidential decision theory can prescribe the mixed act of fleeing with probability 0.474. First, we must explicate the framework. § A BRIEF HISTORY OF THE METATICKLE APPROACH AND HUTTEGGER'S DYNAMICS A prominent evidentialist attempt to prescribe the causalist action in the Newcomb problems is attributable to Eells (<cit.>; <cit.>). This has been referred to as the “tickle” or “metatickle” defence (<cit.>; <cit.>).[It is so named for the following thought experiment. Suppose the agent feels a tickle in his left pinkie just in case the predictor has put $1,000,000 in the opaque box. Then, even though the presence of money depends probabilistically on the agent's act, the tickle is sufficient to screen off the relevance of that act to the state of the world – the tickle tells the agent all he needs to know. A tickle may not always be available but, according to Eells, a “metatickle” is. This is a proposition which describes the agent's beliefs and desires.] Eells argues that the mistake being made by the naïve evidentialist in the Newcomb problem is the inference from some underlying common cause of both state and act, to a dependence of the state on the act. Eells argues that the only way in which the underlying cause could affect an agent's act is through the agent's beliefs and desires since, under our decision theories, these are the entities that determine action.[Eells suggests the common cause could not affect an agent’s act by changing his decision rule. In particular “the agent believes that the causal influence of the common cause is sufficiently insignificant as to be irrelevant to the eventual determination of which act is correct in light of his beliefs and desires... This is because he believes that the causal influence of whatever is causally responsible for his rationality – his training, genetic make-up, and so on – will be overwhelming” (<cit.>).] 
This implies that if the agent had full knowledge of his beliefs and desires, knowledge of the presence or absence of the common cause would be irrelevant to his act. The intuition is clear with a simple example. Consider a decision problem with the same structure as the Newcomb problem but is instead a decision about whether or not to smoke cigarettes. Suppose that there is a genetic cause, C, that results in both lung cancer and a proclivity to enjoy cigarettes but smoking does not itself result in lung cancer. It is correlated with lung cancer but there is no causal state-act dependence. Causal decision theory recognises this independence and thus prescribes smoking insofar as it is enjoyable to the agent. The naïve evidentialist prescribes abstaining as smoking is good evidence for the presence of the gene which determines lung cancer. The Eellsian evidential decision theorist, however, believes that the only way the common cause can affect the agent's acts is through his beliefs and desires. Let the proposition which describes his beliefs and desires be denoted T for metatickle. We have: P(A | T & C) = P(A | T &∼ C) If an agent has full knowledge of her beliefs and desires, P(T) = 1. So in the presence of the metatickle, P(A | C) = P(A| ∼ C) By symmetry of probabilistic independence, P(C | A) = P(C | ∼ A) Since the cause is not probabilistically dependent on the act in the presence of the metatickle, neither is the state of the world. This means P(S | A & T) = P(S | T) Eells believed that the proposition T was a proposition available to an agent (<cit.>; <cit.>). Conditional upon T, state and act are independent, and if this is the case, evidential decision theory will make the correct prescription: to smoke. Knowledge of the beliefs and desires of the kind caused by the common cause screens off what was previously thought to be evidence about the state of the world: the act. Analogous reasoning will lead the Eellsian evidential decision theorist to two-box in Newcomb's problem; the act of two-boxing is irrelevant to the $1,000,000 being there or not, and one should therefore choose the strictly dominant act.[See also Reichenbach's principle of screening off (<cit.>).] The reasoning behind the metatickle approach is diagrammed in Figure <ref>. For both Eells and Jeffrey (<cit.>), it is the agent's ability to anticipate her own choices that screens off the evidential import of her acts for states of the world.[Indeed, Skyrms (<cit.>) refers to Jeffrey's idea as a “hypothetical version of the metatickle defense”.] However, unlike Eells, Jeffrey does not make reference to common causes. For Jeffrey, deliberation is what allows the sophisticated evidentialist to screen off the correlation between act and state which caused her to disagree with the causalist. He states “it is my credences and desirabilities at the end of deliberation that correspond to the preferences in the light of which I act, i.e., it is my final credence and desirability functions [...] not the initial ones [...] that underlie my choice” (<cit.>). The idea is that the agent should not choose to maximise news value as she now sees it, but as she now expects herself to estimate it after having made the decision. This is known as “ratificationism”. 
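The screening-off argument can be checked numerically on a toy joint distribution for the smoking-gene example: the common cause C influences the act A only through the metatickle T, and the state S only directly. The conditional probability tables below are illustrative numbers, not anything from the text; the point is only that P(S|A) differs from P(S) while P(S|A,T) equals P(S|T).

```python
import itertools

# Illustrative joint model: C -> T -> A and C -> S, with no direct C -> A arrow.
pC   = {1: 0.3, 0: 0.7}
pT_C = {1: {1: 0.9, 0: 0.1}, 0: {1: 0.2, 0: 0.8}}       # P(T | C)
pA_T = {1: {1: 0.8, 0: 0.2}, 0: {1: 0.3, 0: 0.7}}       # P(A | T)
pS_C = {1: {1: 0.95, 0: 0.05}, 0: {1: 0.01, 0: 0.99}}   # P(S | C)

joint = {(c, t, a, s): pC[c] * pT_C[c][t] * pA_T[t][a] * pS_C[c][s]
         for c, t, a, s in itertools.product([0, 1], repeat=4)}

def prob(event):
    return sum(p for k, p in joint.items() if event(*k))

def cond(event, given):
    return prob(lambda *k: event(*k) and given(*k)) / prob(given)

# The act is evidence for the state ...
print(cond(lambda c, t, a, s: s == 1, lambda c, t, a, s: a == 1), "vs",
      prob(lambda c, t, a, s: s == 1))                                # P(S|A) != P(S)
# ... but conditional on the metatickle it is screened off:
print(cond(lambda c, t, a, s: s == 1, lambda c, t, a, s: a == 1 and t == 1), "==",
      cond(lambda c, t, a, s: s == 1, lambda c, t, a, s: t == 1))     # P(S|A,T) = P(S|T)
```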
However, Huttegger believes that both Eells and Jeffrey did not adequately specify how the agent comes to fully know her beliefs and desires and achieve this screening off (<cit.>).[See also <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.> for criticisms of the metatickle approach.] To fill this lacuna, he first turns to the deliberational dynamics of Skyrms (<cit.>). Skyrms models a deliberational process where, as an agent deliberates about which act to choose, this is incorporated into her up-to-date probabilities and desirabilities. The agent has some information at the start of deliberation upon which she can assess expected utility but the deliberation process itself generates information that causes her to recalculate her expected utility. Suppose we assign probabilities to acts that represent the agent’s belief that she will choose that particular act at the end of deliberation. Since states and acts are correlated, act probabilities provide evidence about states of the world which the agent can use to update her expected utility. Deliberation then pushes the agent in the direction of the act that has the higher expected utility in his current assessment. In particular, the direction of his choice probability of choosing both boxes, denoted P(A2), is proportional to the difference in expected utility so that we have: dP(A2)/dt∝ EU(A2) - EU(A1) And dP(A2)/dt = positive if EU(A2) > EU(A1) negative if EU(A2) < EU(A1) 0 if EU(A2) = EU(A1) We will refer to this as the “adaptive dynamics”.[Skyrms also refers to this informally as a dynamical rule which “seeks the good” (<cit.>). He describes such rules as “qualitatively Bayesian” in the sense that the dynamical rule should reflect the agent's knowledge that she is an expected utility maximiser and the status of her present expected utilities as an expectation of her final utilities. Informally, such rules state that act probabilities should increase if the act has utility greater than the status quo, and that the probability of all acts with utilities greater than the status quo should increase. Frequently used dynamical rules that meet these conditions are the replicator dynamics or Nash dynamics, and the dynamics of Brown and von Neumann (<cit.>). Formally, dP(A)/dt = cov(A) - P(A)∑_j cov(A)_j/k + ∑_j cov(A)_j and dP(A)/dt = cov(A)^2 - P(A)∑_j cov(A)_j^2 respectively, where the constant k represents how quickly the agent adjusts her act probabilities.]. It is assumed, in both Skyrms' and Huttegger's frameworks, that the adaptive dynamics operates continuously, though others, such as Eells <cit.> have developed discontinuous approaches. Since this paper is engaging with Huttegger's reconciliation project, I will assume a continuous adaptive dynamics. For Skyrms, the updating of one's choice probability continues until such a time as the agent reaches probability 1 of performing a certain act or the agent reaches a mixed equilibrium where there is no change in her choice probabilities (dP(A2)/dt = 0). The basic intuition capturing the metatickle is that, if Tomas leans toward only taking the one box, the probability of the $1,000,000 being there increases, and so he begins to believe that choosing both boxes is better. Let S2 denote the presence of the $1,000,000. Formally, as P(A2) approaches 0 or 1, the conditional probabilities P(S2|A1) and P(S2|A2) approach 1 and 0, respectively. 
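A toy implementation of the adaptive dynamics for Newcomb's problem is sketched below. As a simplification of this sketch, the state-act correlation is held at fixed values P(S2|A1) = 0.99 and P(S2|A2) = 0.01 (in Skyrms' model the conditionals themselves respond to the choice probability), and the p(1-p) factor is one way of realizing a rule that slows near the boundary of the probability simplex.

```python
# Adaptive ("seek the good") deliberation for Newcomb's problem.
# A1 = take only the opaque box, A2 = take both boxes, S2 = the $1,000,000 is present.
U = {("A1", "S2"): 1_000_000, ("A1", "S1"): 0,
     ("A2", "S2"): 1_001_000, ("A2", "S1"): 1_000}

def eu(act, p_s2):
    """Evidential expected utility of an act given P(S2 | act) = p_s2."""
    return p_s2 * U[(act, "S2")] + (1.0 - p_s2) * U[(act, "S1")]

def seek_the_good(p_a2, eu_a2, eu_a1, k=1e-7):
    """One Euler step: dP(A2)/dt is proportional to EU(A2) - EU(A1);
    the p*(1-p) factor keeps the choice probability inside [0, 1]."""
    return p_a2 + k * (eu_a2 - eu_a1) * p_a2 * (1.0 - p_a2)

q_a1, q_a2 = 0.99, 0.01   # fixed P(S2|A1), P(S2|A2): a strong, static correlation
p = 0.5                   # Tomas starts on the fence
for _ in range(2000):
    p = seek_the_good(p, eu("A2", q_a2), eu("A1", q_a1))
print(p)   # pushed toward 0: the naive evidential deliberator leans to one-boxing
```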
The value of P(A2) where the expected utility of A2 and the expected utility of A1 are equal is where deliberation stops, and this is Tomas' final probability of two-boxing. On Skyrms' model this does not in fact end in a reconciliation of evidential and causal decision theory. Supposing Tomas is an evidentialist and begins on the fence, he ends deliberation most probably one-boxing, but also attaches some positive probability to two-boxing. In response, Eells (<cit.>) introduces a model called “continual conditional expected utility maximization” which embraces Skyrms' idea that deliberation generates information upon which we should update our expected utilities, but also introduces the notion that agents may face an urgency to act. Thus, depending on whether one wants to reach a decision quickly, one might eschew the states of indecision in which Skyrms claims the evidentialist is stuck. Eells believes this reconciles the prescriptions of evidential and causal decision theory on Newcomb's problem, resulting in two-boxing. However, as Huttegger (<cit.>) rightly points out, this is a large deviation from traditional evidential decision theory. Whether an agent rushes to a decision or procrastinates is a feature of the agent not well captured by her preferences. Therefore, the proposed solution arguably fails. Huttegger takes a different approach to reconciliation in light of Skyrms' findings. His amendment to Skyrms' model is a relaxation of the assumption that as P(A2) approaches 0 or 1, the conditional probabilities P(S2|A1) and P(S2|A2) approach 1 and 0. That is, conditional probabilities of the states given acts are not functions of our choice probabilities. Indeed, in the original Eellsian account, there is nothing over and above one's informed beliefs and desires upon which the agent's decision is based; convergence towards one or the other act is not required for the appropriate screening off. Instead, conditional probabilities change by a separate “independence dynamics” as a function of time, or stages, in the deliberational process, moving closer to one another over the course of deliberation.[One may argue against deliberation generating such information for the agent. However, for the purpose of my current analysis, I leave aside these issues. See <cit.> for a discussion.] The independence dynamics is formally defined as follows.[If P(A1) = 0, then P(S2| A1) is not well defined. Huttegger states this obstacle can be overcome by requiring that the dynamics of P(S2|A1) be continuous with the dynamics for arbitrarily close states that have P(A1) > 0.] dP(S2|A1)/dt is negative if P(S2|A1) > P(S2|A2), and positive if P(S2|A1) < P(S2|A2). Likewise, dP(S2|A2)/dt is negative if P(S2|A2) > P(S2|A1), and positive if P(S2|A2) < P(S2|A1). There are also no reappearances of correlations, so d[P(S2|A2) - P(S2|A1)]/dt = 0 if P(S2|A2) = P(S2|A1) Under this dynamical process, evidential deliberation converges to two-boxing since the choice probability of two-boxing is governed by the adaptive dynamics when state and act are independent. It is precisely the introduction of the independence dynamics that brings us to this reconciliation. If the evidentialist does not believe her act is evidence for a state of the world, she in effect uses the same probabilities the causalist uses. 
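A minimal discrete-time sketch (my own toy discretisation, not Huttegger's formal statement) makes the direction of the independence dynamics explicit: at each stage the two conditional probabilities take a small step toward one another, and once they coincide no correlation reappears.

```python
def independence_step(p_s2_a1, p_s2_a2, rate=0.1):
    """One toy step of the independence dynamics: the gap between the
    conditional probabilities shrinks and never grows (the rate is assumed)."""
    gap = p_s2_a2 - p_s2_a1
    return p_s2_a1 + rate * gap, p_s2_a2 - rate * gap

q1, q2 = 0.9, 0.1   # initial values of P(S2|A1) and P(S2|A2)
for _ in range(50):
    q1, q2 = independence_step(q1, q2)
print(round(q1, 4), round(q2, 4))   # both approach 0.5, a point on the Eells-Jeffrey manifold
```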
Furthermore, while in Skyrms' work the end point of deliberation is where the choice probability of an act is 1 or dP(A2)/dt = 0, this is not the case under Huttegger's framework.[In Skyrms (<cit.>), the adaptive dynamics continues until the probability of an act equals 1, and cannot exceed 1, because this is the point at which deliberation ends. This is not the case under Huttegger's framework; deliberation does not end when the probability of an act reaches 1. Therefore, as stated here, it is possible that P(A2) exceeds 1, since the rule that the change in choice probability is proportional to the difference in expected utility does not ensure that P(A2) remains within the probability simplex. As such, we stipulate that the adaptive dynamic rules which are permissible under this general formulation are those which effectively slow as they reach the boundary, therefore remaining within the probability simplex over the course of deliberation.] Rather, deliberation, in most cases, will continue until dP(A2)/dt = 0 and the agent reaches state-act independence. I say “in most cases” since Huttegger does not assume deliberation always leads to full state-act independence. This is because deliberation can sometimes fail to provide all the information we need, for example, if the agent believes that the predictor in Newcomb’s problem knows more about how he makes decisions than he knows about himself. If this is so, there are hidden factors influencing his choice which he cannot access via deliberation. Nonetheless, Huttegger states that situations where agents' acts are determined solely on the basis of their desires, beliefs and decision rule are the “most natural setting for decision theory” (<cit.>). As such, I will be considering those cases in which the agent's deliberative process is sufficient to screen off state-act correlations. In Huttegger's framework, the reason that the independence dynamics can continue after the adaptive dynamics concludes is that the operation of the independence dynamics is independent of the adaptive dynamics: it is not a function of the agent's choice probabilities. It is important to note that, on this interpretation, the relative strength of the independence and adaptive dynamics becomes relevant to where the agent ends deliberation. Huttegger's work finds that the exact specification of the operation of the independence dynamics relative to the adaptive dynamics does not matter for Eells' reconciliation project on Newcomb's problem. In this paper, I show that it does matter for other decision problems on which evidential and causal decision theory diverge. I will not reconstruct Huttegger's work on Newcomb's problem here but rather apply his same framework to Death in Damascus. I begin by determining the dynamics on the boundaries and discuss the more complicated interior dynamics in Sections 5 and 6. § DEATH IN DAMASCUS FOR THE DELIBERATIVE EVIDENTIALIST In the language of metatickles, both Tereza's act of staying or fleeing and Death's appointment in Damascus or Aleppo are effects of a common cause; that is, the cognitive architecture of the agent upon which Death bases his appointment, sometimes referred to as the agent's “type” (<cit.>). Thus, conditional on the metatickle, T, which fully captures Tereza's beliefs and desires, states and acts are independent, and knowledge of the beliefs and desires of the kind caused by the common cause screens off the evidence that her choice provided for Death's location. 
Without making reference to common causes, but noting that deliberation can screen off state-act correlations, Huttegger introduces the independence dynamics, which, along with the adaptive dynamics, describes the changes in an agent's choice probability over the course of her deliberation. Under Huttegger's framework, P(S2|A2) and P(S2|A1) may vary independently, so the deliberational space is represented in three dimensions: one being P(S2|A1); another P(S2|A2); and the final being Tereza's probability of fleeing, P(A2), all of which change during the deliberative process. The deliberational space is depicted in Figure <ref>. Note that the cube does not represent a phase diagram as the magnitude of the movement in any particular direction has not been specified. It should rather be thought of as a qualitative tool by which we may analyse where deliberation leads us. Recall the conditional expected utility formulae of evidential decision theory. That is, EU_evid(A1) = D(S1 & A1)P(S1|A1) + D(S2 & A1)P(S2|A1) and EU_evid(A2) = D(S1 & A2)P(S1|A2) + D(S2 & A2)P(S2|A2) Given these formulae and the logical fact that P(S1|A1) + P(S2|A1) = 1 and P(S1|A2) + P(S2|A2) = 1 (one or other state of the world must obtain given our act), we may discern the movement of P(A2) on the faces of the cube by calculating the expected utility of both acts. First, let us address the front face, indicated in green, where P(S2|A1) = 1. The top edge is where P(S2|A2) = 1. Here we have EU(A1) = 10 and EU(A2) = -1. Since EU(A2) < EU(A1), by the adaptive dynamics, P(A2) decreases. Similarly, on the bottom edge of the front face, where P(S2|A2) = 0, EU(A2) < EU(A1). It can be verified that all points in between the edges also lead to a final choice probability of P(A2) = 0 on the front face of the cube. This is intuitive as, if P(S2|A1) = 1, Tereza can outsmart Death. That is, if the probability of Death being in Aleppo given that Tereza stays in Damascus is 1, she should surely stay in Damascus and not pay the extra 1 util to flee. Now consider the back face, indicated in yellow, where P(S2|A1) = 0. The top edge is where P(S2|A2) = 1. Here we have EU(A1) = 0 and EU(A2) = -1. Again P(A2) decreases. However, on the bottom edge of the back face, the dynamics look different. Here, P(S2|A2) = 0, so EU(A1) = 0 and EU(A2) = 9. Since EU(A2) > EU(A1), P(A2) increases. The exact point at which Tereza prefers fleeing over staying will be explored in the next section using what I call the plane of indifference. However, we have not yet considered the operation of the independence dynamics on the left and right faces, indicated in pink. This leads us to what Huttegger calls the Eells-Jeffrey manifold, represented by the grey diagonal face in the cube, which consists of all points where P(S2|A2) = P(S2) = P(S2|A1), in other words, where there is state-act independence. Movement toward the Eells-Jeffrey manifold is given by the evolving metatickle which screens off states from acts during an agent's deliberation. If our metatickle is sufficient to reach full state-act independence, we must determine the movement on the manifold itself. All areas above the bold blue line move to P(A2) = 0 and all areas below it move to P(A2) = 1 by the adaptive dynamics. The bold blue line is where P(S2|A2) = P(S2|A1) = 0.45. Here, EU(A1) = EU(A2) = 4.5, so there is no movement in P(A2) as per our specification of the adaptive dynamics. 
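These expected-utility calculations are easy to check numerically. The short sketch below uses the payoffs implicit in the figures above (0 and 10 for staying, 9 and -1 for fleeing, with S2 read as Death being in Aleppo) and recovers both the face-by-face comparisons and the indifference value P(S2) = 0.45 on the manifold; it is purely an arithmetic check, not part of Huttegger's formal apparatus.

```python
# Death in Damascus, as used in this section: A1 = stay, A2 = flee; S2 = Death in Aleppo.
def eu_stay(p_s2_given_a1):
    return 0 * (1 - p_s2_given_a1) + 10 * p_s2_given_a1

def eu_flee(p_s2_given_a2):
    return 9 * (1 - p_s2_given_a2) - 1 * p_s2_given_a2

# Front face, P(S2|A1) = 1: staying beats fleeing for every value of P(S2|A2).
assert all(eu_stay(1.0) > eu_flee(q / 100) for q in range(101))

# Back face, P(S2|A1) = 0: fleeing wins on the bottom edge, staying on the top edge.
print(eu_stay(0.0), eu_flee(0.0))    # 0 vs 9  -> P(A2) increases
print(eu_stay(0.0), eu_flee(1.0))    # 0 vs -1 -> P(A2) decreases

# On the Eells-Jeffrey manifold, P(S2|A1) = P(S2|A2) = p: indifference at p = 0.45.
print(eu_stay(0.45), eu_flee(0.45))  # 4.5 vs 4.5
```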
I have not yet discussed the dynamical movement in much of the interior of the cube, which is the subject of the next section, but first it is worth noting the following facts. Here, we have multiple equilibria represented by the bold blue line. All of these choice probabilities of P(A2) render the expected utility of staying equal to that of fleeing, despite the fact that the unconditional probability of Death being in Aleppo, P(S2), is 0.45.[It should be noted that such lines of equilibria in general exhibit structural instability. That is, they are sensitive to changes in the dynamical rule (<cit.>).] However, this is also the case for the deliberative causalist. Though the mixed act of fleeing with probability 0.474 is the end point of deliberation, at this point, all other acts have equal expected utility, so all are equally permissible (<cit.>). Here, one might inquire what then renders the mixed act the correct answer. The reason is that this is the uniquely ratifiable act (should one have the option to execute it using a chance device that represents this probability distribution). That is, it is the only act where, upon knowledge that one has chosen it, one would not prefer otherwise.[This is also supported by consideration of the fact that the mixed act would constitute the Nash equilibrium of a normal form game with Death and Tereza as players. For discussion of the connection between ratifiability in deliberative decision theory and Nash equilibria in game theory, see <cit.>; <cit.>; <cit.>; and <cit.>.] In Sections 5 and 6, I show that the prescription of the mixed act under Huttegger's framework hinges upon two further conditions: (i) the independence dynamics must not take the “shortest path” to state-act independence, and (ii) the relative strength of the adaptive and independence dynamics must be such that they reach the Eells-Jeffrey manifold exactly where P(A2) = 0.474. Since these conditions imply that deliberation must proceed via a very specific route to the precise choice probability, it will not deliver reconciliation under many plausible specifications of the deliberative process. First, I consider what happens under one plausible specification of the independence dynamics. § SHORTEST-PATH INDEPENDENCE AND THE PLANE OF INDIFFERENCE In this section, I offer an original analysis of Huttegger's deliberative framework given a plausible version of the dynamical process, which I call the shortest-path independence dynamics. I prove the existence of what I call the plane of indifference, which determines why the framework is irresolute in the case of Death in Damascus and not in Newcomb's problem. I then show that, under Huttegger's framework, this plane of indifference exists in all two-act, two-state decision problems which exhibit the basic mathematical structure of either Newcomb or decision instability problems. The upshot is that whether the precise specification of the independence dynamics matters for reconciliation depends on the positioning of this plane of indifference. This provides a principled way of knowing ex-ante whether a reconciliation of evidential and causal decision theory is plausible for a wide range of decision problems under this framework. Informally, the independence dynamics drives the agent's conditional probabilities toward one another over time, though the exact way in which this occurs is left open in Huttegger's work. One way the independence dynamics could operate is by adjusting one starting conditional probability to match the other. 
For example, if Tereza's initial value of P(S2|A2) is 0.99 and her initial value of P(S2|A1) is 0.01, she adjusts up the value of P(S2|A1) until it also equals 0.99. However, this does not seem particularly rational. Given the description of the decision problem, both of her initial conditional probabilities reflect Death's reliability in predicting her action, so there appears to be no reason to count one rather than the other as more viable for informing her unconditional credence in the state of the world. A more plausible version of the independence dynamics would be one that concludes at the average across her two initial conditional probabilities. Since a movement in the direction of the manifold for one conditional probability then implies an equal movement in the direction of the manifold for the other, the independence dynamics decrees – absent its interaction with the adaptive dynamics – that Tereza's conditional probabilities move in the straight line that captures the shortest path to the manifold. This is illustrated in Figure <ref>, which represents a slice through the dynamical cube in which the diagonal line represents the manifold.[Since the analysis is qualitative, this may extend to sufficiently similar independence dynamics, though this has not yet been considered.] To see what this means for our deliberative process, first we must return to an important feature of the dynamical cube previously overlooked. In our earlier illustration, the line of equilibria on the manifold represented a situation where there was no movement prescribed by the adaptive dynamics; any choice probability of P(A2) was acceptable since all mixtures of acts had equal expected utility. Moving off the Eells-Jeffrey manifold, we see that this is not only a feature existing at state-act independence but, as I will show, there exists a whole plane on which the adaptive dynamics prescribes no change in P(A2). This occurs where the two conditional probabilities of state given act, P(S2|A1) and P(S2|A2), sum to 0.9. The fact that this is a plane of the cube follows from the fact that two axes of the 3-dimensional space represent these conditional probabilities. The fact that the adaptive dynamics decrees no change in choice probability on this plane can be seen from the following. Let P(S2|A1) + P(S2|A2) = 0.9 and note it is true by definition that P(S1|A1) = 1 - P(S2|A1) and P(S1|A2) = 1 - P(S2|A2). Then EU(A1) = 0P(S1|A1) + 10P(S2|A1) = 10P(S2|A1), and EU(A2) = 9P(S1|A2) - 1P(S2|A2) = 9 - 10P(S2|A2) = 10P(S2|A1), using the fact that P(S2|A2) = 0.9 - P(S2|A1) on this plane. Since the expected utilities of both acts are equal when expressed in terms of P(S2|A1), the adaptive dynamics prescribes no movement on the plane given by P(S2|A1) + P(S2|A2) = 0.9. Figure <ref> illustrates what I call the plane of indifference. The key feature of this plane is that if one begins deliberation on the plane, since P(A2) does not change, one simply moves by the independence dynamics toward the line of equilibria and ends deliberation with the same choice probability as one began with. Of utmost interest is what happens when we begin deliberation either below or above the plane of indifference. It turns out that if Tereza begins at any point below the plane, where P(S2|A1) + P(S2|A2) < 0.9, her deliberation concludes that she should flee to Aleppo with probability 1. If she begins above the plane, where P(S2|A1) + P(S2|A2) > 0.9, Tereza concludes she must stay in Damascus, and flee to Aleppo with probability 0. For example, consider P(S2|A1) + P(S2|A2) = 1. 
Here, we have a 2-dimensional plane which sits above the plane of indifference. All initial choice probabilities will lead Tereza to staying. To see this, note that since we have imposed the constraint P(S2|A2) + P(S2|A1) = 1, and by logical fact, P(S1|A1) + P(S2|A1) = 1 and P(S1|A2) + P(S2|A2) = 1, our constraint implies P(S1|A2) + P(S1|A1) = 1. Given these formulae, we may calculate our expected utilities. First, consider the top edge of the plane, where P(S2|A2) = 1. We see that EU(A1) = 0 and EU(A2) = -1. Since EU(A2) < EU(A1), by the adaptive dynamics, P(A2) must reduce. Similarly, on the bottom edge of the plane where P(S2|A2) = 0, EU(A2) < EU(A1). Since P(S2|A2) + P(S2|A1) = 1, shortest-path independence dynamics drives both conditional probabilities, and hence her unconditional probability P(S2), to 0.5. In the middle of the plane on its intersection with the Eells-Jeffrey manifold, EU(A1) = 5 and EU(A2) = 4 so, again, EU(A2) < EU(A1). As a result, deliberation moves Tereza toward staying in Damascus until we reach a stable equilibrium point where P(A2) = 0 and P(S2|A2) = P(S2) = P(S2|A1) = 0.5. Analogous reasoning applies when we begin on the other side of the plane and P(S2|A1) + P(S2|A2) < 0.9. In what follows, I will prove that the adaptive dynamics is governed by whether we are below or above the plane of indifference for a general payoff table representing a wide range of decision instability problems. Let a denote the utility assigned to survival and b the utility assigned to death. Since we consider an asymmetric payoff table, let c denote the cost of fleeing. Our payoff table represents a general version of a wide range of asymmetric decision instability problems where a > b and c ≤ a - b. Other problems with a similar structure are the Murder Lesion problem and the Psychopath Button (<cit.>; <cit.>; <cit.>). The plane of indifference can be defined in terms of the utilities in the payoff table. Recall that the adaptive dynamics prescribes no movement in P(A2) when EU(A1) = EU(A2). This is when bP(S1|A1) + aP(S2|A1) = (a-c)P(S1|A2) + (b-c)P(S2|A2). By substitution and rearranging, we get P(S2|A1) + P(S2|A2) = (a-b-c)/(a-b). We must prove that the sum is defined and that it is greater than or equal to 0 and less than or equal to 2 in order for it to appropriately represent an agent's conditional probabilities. First, by definition of the payoff table a > b, so the denominator is positive and the expression is defined. Second, (a-b-c)/(a-b) ≥ 0 requires that the numerator also be non-negative. Note that since a > b, this will be satisfied as long as c ≤ a-b. Of course, this is true from the definition of the asymmetric decision instability problem. If the cost of fleeing were greater than the difference between survival and death, we would not be in a case of asymmetric Death in Damascus as it would never be preferable to flee. Finally, (a-b-c)/(a-b) ≤ 2 if and only if a-b-c ≤ 2(a-b), which holds if and only if -c ≤ a - b. This is satisfied by definition of the payoff table again, as c is positive and a > b so the left hand side is negative whilst the right is positive. From this equation for the plane of indifference, we can see that as the cost of fleeing increases, the right hand side of the equation reduces, meaning the plane of indifference will move downwards in the diagonal space of the dynamical cube. This decreases the area of the cube where Tereza's deliberation leads her to flee. 
In other words, the greater the cost of fleeing, the more sure Tereza must be that Death is in Damascus than that he is in Aleppo for rationality to decree that she purchase the ticket to flee.[If c = 0, we are in a symmetric decision instability problem where the plane of indifference intersects the Eells-Jeffrey manifold at P(S2) = 0.5.] Now that we have proved the existence of an indifference plane, we can demonstrate how the adaptive dynamics will operate either side of it in a general setting. Since a-b is positive (the utility of living exceeds that of dying), we can easily replace our equalities in the above existence proof with inequalities. The direction of the inequality does not change throughout the proof. It follows that P(S2|A1) + P(S2|A2) > (a-b-c)/(a-b) if and only if EU(A1) > EU(A2), and P(S2|A1) + P(S2|A2) < (a-b-c)/(a-b) if and only if EU(A1) < EU(A2). This means that if the agent begins deliberation above the plane, she will end deliberation with P(A2) = 0, and if she begins below it, she will end deliberation with P(A2) = 1. Here, one might ask whether her dynamical deliberation could cross over the plane. In principle, it could. However, this would be to violate the plausible stipulation we have made that the ideal deliberator approaches the Eells-Jeffrey manifold via the shortest-path independence dynamics. By definition of how I have specified the shortest-path dynamics, the path toward the manifold is perpendicular to the manifold. This can be seen in Figure <ref>. We can also prove that the indifference plane is perpendicular to the manifold by showing that the dot product of the normal vectors of both planes is 0. Since the normal vector of a plane is perpendicular to it, it is sufficient to show that the normal vectors are perpendicular to each other in order to show that the planes are perpendicular. The plane of indifference is given by P(S2|A2) + P(S2|A1) = (a-b-c)/(a-b), and the Eells-Jeffrey manifold is given by P(S2|A2) - P(S2|A1) = 0. The normal vectors are therefore A = ⟨ 1, 1 ⟩ and B = ⟨ 1, -1 ⟩. The dot product is thus A · B = 0. The planes are therefore perpendicular, and this will hold for any value of (a-b-c)/(a-b). It is clear, therefore, that the shortest-path dynamics decrees dynamical adjustments of conditional probabilities that run parallel to the plane of indifference and do not cross it. Given this feature, one's initial starting point entirely determines the ending point of deliberation. This is true of more general cases than the one considered here, as long as the payoff table has the same mathematical structure as the one presented above, where a > b and c ≤ a - b, and this raises important questions for the reconciliation of causal and evidential decision theory for problems of decision instability in Huttegger's deliberative framework. Now let us consider why this problem does not arise in Newcomb's problem. In short, the reason is that the structure of the payoff table renders the plane of indifference parallel to the Eells-Jeffrey manifold. This means that, if the agent begins on the one-boxing side of the plane, shortest-path independence dynamics will pass through it on the way to the Eells-Jeffrey manifold, where the adaptive dynamics dictates that Tomas takes both boxes. Consider the following generalised payoff table where a > b and c ≤ a-b. Other problems with a similar structure are the Cholesterol problem, Smoking problem, and Solomon's problem (<cit.>; <cit.>; <cit.>). As above, the plane of indifference is found where EU(A1) = EU(A2). 
This is when bP(S1|A1) + aP(S2|A1) = (b+c)P(S1|A2) + (a+c)P(S2|A2). By substitution and rearranging, we get P(S2|A1) - P(S2|A2) = c/(a-b). We must prove that the difference is defined and that it lies between -1 and 1 inclusive in order for it to appropriately represent an agent's conditional probabilities. First, by definition of the payoff table a > b, so the denominator is positive and the expression is defined. Second, -1 ≤ c/(a-b) if and only if b - a ≤ c. This is satisfied by definition of the Newcomb payoff table, since if c were strictly less than b-a, c would be negative, and there would be no benefit to two-boxing. Finally, c/(a-b) ≤ 1 if and only if c ≤ a-b. This is again satisfied by the definition of Newcomb payoffs, since if c were strictly greater than a-b, this would mean c + b > a and it would therefore always be better to two-box. Notice here that the relationship that defines the plane is not a sum but a difference. This means that the plane is parallel to the Eells-Jeffrey manifold. This is easily proved by noting that their normal vectors are proportional: both may be written as ⟨ 1, -1 ⟩. This will hold for any value of c/(a-b). It will be illuminating to rewrite the above condition as P(S2|A2) = P(S2|A1) - c/(a-b), so we see that the indifference plane sits below the manifold. This is illustrated in Figure <ref>. The movement decreed by the adaptive dynamics on either side of the plane in the Newcomb problem is given by examining the following biconditional statements. As before, the proof proceeds straightforwardly from the existence proof replacing the equalities with inequalities without any change in direction, as the term a-b is positive: P(S2|A1) > P(S2|A2) + c/(a-b) if and only if EU(A1) > EU(A2), and P(S2|A1) < P(S2|A2) + c/(a-b) if and only if EU(A1) < EU(A2). We can see from Figure <ref> that when P(S2|A1) > P(S2|A2), we are below the Eells-Jeffrey manifold. So when P(S2|A1) > P(S2|A2) + c/(a-b), we are below the plane of indifference. Here, the biconditional statements above reveal that the rational act according to our adaptive dynamics is to one-box. By analogous reasoning, all points above the indifference plane end deliberation in two-boxing. As the independence dynamics moves the agent towards the Eells-Jeffrey manifold, and the Eells-Jeffrey manifold lies above the indifference plane, the adaptive dynamics decrees that the agent ought to two-box in Newcomb's problem, corroborating Huttegger's conclusion. As the value of c, the monetary sum under the transparent box, increases, the plane of indifference shifts downward in diagonal space away from the Eells-Jeffrey manifold. As a result, the region of the cube where Tomas should rationally one-box reduces. This is intuitive as, by description of the problem, the agent only receives the value c when he two-boxes, so the greater the value of c, the greater the incentive to two-box. The denominator a-b captures the difference between the contents of the opaque box in the two states of the world. If this difference is large, the plane shifts upwards, expanding the region of points which decree one-boxing as rational. This again is intuitive, as the greater the incentive to one-box, the less sure the agent need be that the predictor placed a in the opaque box in order for him rationally to choose one-boxing. Note that when c = 0 the plane of indifference is exactly equivalent to the Eells-Jeffrey manifold. It might be tempting to think that if there is nothing under the transparent box, the agent should one-box, but this is not the correct answer. 
Recall that when we have reached state-act independence, Tomas does not see his act as evidence about the state of the world, so he is rationally indifferent between one-boxing and two-boxing. The causalist answer is the same, as the two acts yield the same payoff in each state of the world. The preceding discussion has shown that it is the plane of indifference which determines rational action in both decision problems. The crucial difference, however, is that regardless of the exact specification of the independence dynamics, the agent's trajectory of deliberation in Newcomb's problem may pass through the indifference plane to the Eells-Jeffrey manifold, since the two are parallel. This means that where one begins deliberation does not determine where one ends in the same way that it does in the Death in Damascus problem. In the Death in Damascus problem, by contrast, if we accept the plausibility of the shortest-path independence dynamics, movement toward the manifold never crosses the indifference plane, since the independence path and plane of indifference are parallel to one another. This analysis shows that the relatively straightforward reconciliation of causal and evidential deliberation for Newcomb's problem under Huttegger's deliberative framework is not so straightforwardly achieved in problems of decision instability. Much more would have to be said on the nature of the independence dynamics in order to determine whether we may cross the plane of indifference and end deliberation with a resolute answer. In the next section, I turn to these further requirements. § ON THE POSSIBILITY OF RECONCILIATION Recall that, under Huttegger's framework, deliberation ends when the adaptive dynamics prescribes no further movement and when we reach state-act independence. In this section, I show that this will only lead to a reconciliation under two very specific conditions: (i) the independence dynamics must be specified such that it does not take the shortest path to the manifold, and (ii) the adaptive dynamics and independence dynamics must have a relative speed such that they reach the Eells-Jeffrey manifold at precisely the point of reconciliation. As we saw from the previous section, if we take the shortest-path independence dynamics to be true, whether Tereza begins above or below the plane of indifference determines where she will end deliberation. The only way, therefore, that she could end deliberation with P(A2) = 0.474 is if she begins deliberation with her choice probability at P(A2) = 0.474 and her conditional probabilities precisely on the plane of indifference (where they sum to 0.9). In this case, shortest-path independence will move her directly to the line of equilibria without any change in her choice probability. This is a case where there appears to be no deliberation at all driving her conclusion, and is therefore implausible as a reconciliation of evidential and causal decision theory via deliberation. Of course, there may be viable independence dynamics other than shortest-path independence, so let us relax this assumption. However, even if we allow violation of shortest-path independence, it must be the case that the relative speed of the adaptive and independence dynamics is such that the agent reaches the Eells-Jeffrey manifold precisely at the point where it intersects the plane of indifference at P(A2) = 0.474. If Tereza reaches the manifold on the equilibrium line at any point to the left or right of this, P(A2) ≠ 0.474 and dP(A2)/dt = 0, so we do not achieve reconciliation. 
If Tereza reaches the manifold at any other point above or below the equilibrium line, the adaptive dynamics leads her to P(A2) = 0 or 1, depending on whether this point is above or below the plane of indifference. It is only if the two conditions I have specified obtain that we may witness trajectories such as those depicted in Figure <ref>, but the reconciliation here appears forced. Again, it is important to recognise that this was not an issue in the case of Newcomb's problem. Here, regardless of the specification of the independence dynamics, since the Eells-Jeffrey manifold lies on the side of the indifference plane where two-boxing is rational, as long as deliberation leads us to state-act independence, the framework will always prescribe the correct answer. The relative strength of the independence and adaptive dynamics may lead Tomas to different points on the line of equilibria where the Eells-Jeffrey manifold intersects the right face of the cube, but this does not change Tomas' ultimate action, as P(A2) = 1. Where he concludes deliberation only determines his beliefs about his winnings. That is, he believes himself to be more fortunate if he ends deliberation where the probability of the $1,000,000 being there, P(S2), is high, and less fortunate if he ends deliberation where it is low. The analysis I have offered in this section therefore represents a principled way to delineate when the specification of the independence dynamics matters for the reconciliation of evidential and causal decision theory under Huttegger's framework. In particular, it depends on whether the plane of indifference intersects the Eells-Jeffrey manifold or not. If it does not, implying it lies entirely to one side of the Eells-Jeffrey manifold, the specification of the independence dynamics does not matter. Any independence dynamics that moves the agent in the direction of state-act independence over time will lead to the same answer. As is shown by the generalised proofs, for any problem representing the mathematical structure of the generalised Newcomb's problem, the plane of indifference will not intersect the Eells-Jeffrey manifold (provided c > 0). For any problem representing the mathematical structure of the generalised Death in Damascus problem, the plane of indifference will be perpendicular to the Eells-Jeffrey manifold, and the specification of the independence dynamics, as well as its strength relative to that of the adaptive dynamics, matters for where the agent concludes deliberation. We therefore have a robust way of determining ex-ante whether reconciliation of evidential and causal decision theory is plausible for a wide range of two-state, two-act decision problems under this framework. Note that what is important is not whether the plane of indifference is perpendicular or parallel to the Eells-Jeffrey manifold, but whether it intersects the manifold, meaning that the analysis here could in principle be extended to other decision problems, where the angle of the plane of indifference relative to the manifold differs, in order to determine whether the specification of the independence dynamics matters in these problems. Furthermore, we would expect the key result – that the relative strength of the adaptive and independence dynamics matters for reconciliation – to hold in larger (n × n) decision problems, though this has not as yet been investigated. 
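To illustrate the dependence on starting points that drives this result, the following sketch couples one permissible adaptive rule (a rule that slows near the boundary, as stipulated earlier) with the shortest-path independence dynamics, using the Death in Damascus payoffs from Section 4. The Euler discretisation and the step sizes are my own assumptions; the point is the qualitative behaviour, not the exact trajectory: a start above the plane of indifference (P(S2|A1) + P(S2|A2) > 0.9) ends with P(A2) near 0, and a start below it ends with P(A2) near 1.

```python
def eu_stay(q1):    # q1 = P(S2|A1); payoffs 0 (death) and 10 (survival)
    return 10 * q1

def eu_flee(q2):    # q2 = P(S2|A2); payoffs 9 (survival minus cost) and -1 (death minus cost)
    return 9 * (1 - q2) - q2

def deliberate(p_flee, q1, q2, k_adapt=0.05, k_indep=0.05, steps=3000):
    """Toy joint dynamics: an adaptive rule on P(A2) plus shortest-path independence on (q1, q2)."""
    for _ in range(steps):
        # Adaptive rule: seek the better act, slowing near the boundary of the simplex.
        p_flee += k_adapt * (eu_flee(q2) - eu_stay(q1)) * p_flee * (1 - p_flee)
        # Shortest-path independence: q1 and q2 drift toward their common average,
        # so q1 + q2 is invariant and the agent never crosses the plane of indifference.
        mid = 0.5 * (q1 + q2)
        q1 += k_indep * (mid - q1)
        q2 += k_indep * (mid - q2)
    return round(p_flee, 3), round(q1, 3), round(q2, 3)

print(deliberate(0.5, 0.60, 0.50))  # sum 1.1 > 0.9: ends near P(A2) = 0 (stay)
print(deliberate(0.5, 0.40, 0.30))  # sum 0.7 < 0.9: ends near P(A2) = 1 (flee)
```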
§ CONCLUSION The prescriptions of evidential and causal decision theory come apart in two general classes of problems known as Newcomb problems and decision instability problems. Huttegger (<cit.>) has developed a framework for evidential deliberation building on Eells' (<cit.>) metatickle approach and Skyrms' (<cit.>) deliberation dynamics which reconciles the prescriptions of the evidentialist and causalist in Newcomb's problem. Since deliberation results in increasing awareness of our beliefs and desires (and these are the mechanisms by which our action is determined), our acts no longer provide information about the state of the world. That is, deliberation screens off the state-act correlation which previously caused the evidentialist to choose the dominated act in Newcomb's problem. Huttegger's more sophisticated, deliberative evidentialist agent agrees with the causalist in preferring two-boxing. In this paper, I have extended Huttegger's framework to consider an asymmetric case of decision instability: the Death in Damascus problem. I have shown that, in this context, Skyrms' adaptive dynamics and Huttegger's independence dynamics are insufficient to recommend a decisive answer. In Section 5, I consider a plausible version of the independence dynamics, shortest-path independence, and explore the particular features of the deliberative process that this independence dynamics decrees in Death in Damascus. We find that the dynamics decrees different answers for different initial starting points of deliberation. I prove the statements made here are applicable to a more general class of problems of decision instability, as long as the payoff table accords with some simple mathematical relationships. In particular, I show that there exists what I call a plane of indifference where either act is equally acceptable, and this plane of indifference entails that where one concludes deliberation depends entirely on where one begins deliberation. This, however, is not true of the Newcomb case. There are three upshots to this work. First, whilst application of the Eellsian metatickle to deliberation could straightforwardly lead to the correct answer in Newcomb's problem, this notion is not so easily extended to problems of decision instability, and the reconciliation requires assumptions that appear forced. Second, the proof of the plane of indifference for all two-state, two-act problems whose payoff tables exhibit the basic mathematical relationships in Section 5 provides us with a principled way of delineating those cases where the specification of the independence dynamics matters for a reconciliation of evidential and causal decision theory within this framework. Specifically, if the plane of indifference never intersects the Eells-Jeffrey manifold, the specification of the independence dynamics does not matter for reconciliation. If it does, reconciliation requires additional, and potentially questionable, assumptions about the exact specification of the adaptive and independence dynamics. Finally, this work shows that the metatickle approach has so far failed to reconcile evidential and causal decision theory. Eells' and Jeffrey's original ideas were widely criticised for not providing details of how an agent arrives at knowledge of their own beliefs and desires, involving implicit assumptions, or idealisations that limit the metatickle approach (<cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>). 
Attempts to resolve this using the theory of deliberation have shown that it does not result in a reconciliation; rather, the evidentialist is left in a state of indecision in Newcomb's problem (<cit.>). Eells' amendment (<cit.>) to his original idea then introduced spurious assumptions about other features of the agent, such as her felt urgency to act, which are marked deviations from traditional evidential decision theory (<cit.>). In this paper, I have shown that the most recent attempt to salvage Eells' idea, due to Huttegger (<cit.>), also fails to deliver a reconciliation of evidential and causal decision theory in problems of decision instability. Future work on reconciliation would need to pay heed to the fact that our results will depend heavily on the interaction of the adaptive and independence dynamics, and any attempt at reconciliation would need to specify their relative strength such that evidential decision theory agrees with causal decision theory in both Newcomb and decision instability problems.
http://arxiv.org/abs/2307.04011v1
20230708164347
Robust Learning-Based Incipient Slip Detection using the PapillArray Optical Tactile Sensor for Improved Robotic Gripping
[ "Qiang Wang", "Pablo Martinez Ulloa", "Robert Burke", "David Cordova Bulens", "Stephen J. Redmond" ]
cs.RO
[ "cs.RO", "cs.LG" ]
The ability to detect slip, particularly incipient slip, enables robotic systems to take corrective measures to prevent a grasped object from being dropped. Therefore, slip detection can enhance the overall security of robotic gripping. However, accurately detecting incipient slip remains a significant challenge. In this paper, we propose a novel learning-based approach to detect incipient slip using the PapillArray (Contactile, Australia) tactile sensor. The resulting model is highly effective in identifying patterns associated with incipient slip, achieving a detection success rate of 95.6% when tested with an offline dataset. Furthermore, we introduce several data augmentation methods to enhance the robustness of our model. When transferring the trained model to a robotic gripping environment distinct from where the training data was collected, our model maintained robust performance, with a success rate of 96.8%, providing timely feedback for stabilizing several practical gripping tasks. Our project website: <https://sites.google.com/view/incipient-slip-detection>. § INTRODUCTION §.§ Background Autonomous robots have yet to achieve human-like dexterity when performing gripping tasks, mainly due to a lack of satisfactory tactile perception and processing abilities. Studies have shown that even humans struggle with simple gripping tasks in the absence of tactile sensation <cit.>. The palm of the human hand contains ∼17,000 mechanoreceptors, i.e., specialized nerve endings that respond to mechanical stimuli such as deformation, pressure, and displacement <cit.>. These receptors play a crucial role in sensing and relaying tactile information to the nervous system <cit.>, allowing humans to adjust their grip in real-time to account for slipperiness and other factors. Building on these insights, researchers have designed tactile sensors replicating part of human hand sensing capabilities and explored slip detection techniques using these sensors to enhance robotic manipulation performance <cit.>. §.§.§ Types of slip The two main types of slip are gross slip and incipient slip. Gross slip refers to the occurrence of slip across the entire contact surface, where the relative motion between the gripper or tactile sensor and the gripped object is typically observable at a macro level <cit.>. On the other hand, incipient slip refers to the initial stage of slip, when parts of the contact surface slip while others remain stuck <cit.>. For example, when an object is held by elastic fingertips, and an external force is applied to the object in a direction tangential to the contact surface, some parts of the fingertips will stretch while others will compress, causing incipient slip at the periphery of the contact surface while the central part remains stuck. As the applied force increases, the slip will finally spread across the entire contact surface, leading to gross slip. Throughout the incipient slip phase, there may not be any observable relative motion between the object and the finger. §.§.§ Slip detection and challenges Previous studies have proposed techniques to detect gross slip and apply corrective measures when the slip is detected to prevent objects from dropping out of the grasp <cit.>. Detecting gross slip may not always be a wise strategy, as it occurs when the entire contact has already started slipping. 
On the other hand, detecting incipient slip can provide an early warning of an impending and more dangerous gross slip, allowing corrective measures to be applied earlier, and increasing the likelihood of maintaining a safe grip. However, detecting incipient slip is not trivial because it requires the contact interface of the sensor to possess adequate elasticity, enabling one part to undergo sufficient and detectable deformation, resulting in slip, while the other part remains stuck. Furthermore, validating incipient slip can be challenging since it is not generally associated with macro-level relative movement between the sensor/finger and the object. To verify the occurrence of incipient slip, researchers commonly utilize a camera to monitor the contact surface; by examining the camera images, they can visually confirm the presence of incipient slip events <cit.>. However, this method of relying on cameras may not be feasible in real-world situations, such as when gripping everyday objects. §.§ Our contribution Our study presents a new technique for detecting incipient slip using the PapillArray (Contactile, Australia) tactile sensor. This sensor features a square array of nine elastic silicone pillars with varying unloaded heights, promoting different normal forces on the pillars when pressed against a surface. This design enhances the likelihood of inducing incipient slip on shorter pillars when a tangential force is applied. We utilized deep neural networks (NN) to develop our incipient slip detection algorithm, where we made novel use of the data gathered in a previous study <cit.> to construct the dataset for training and evaluating the NN. The primary objective of the NN was to classify inputs into two distinct categories: incipient slip and other, functioning as a binary classifier; other refers to all other states that are not incipient slip, such as gross slip or being stationary. Furthermore, the tactile data at hand is presented in the form of a uniformly-sampled time series. Therefore, to effectively capture the serial nature of the data, we utilize a recurrent neural network (RNN) <cit.>. The inclusion of historical data in an NN model has the potential to enhance its performance in real-time prediction tasks, as it enables the capture of temporal patterns and dependencies, leading to more robust and accurate forecasts <cit.>. We also propose several data augmentation methods designed to enhance the performance and robustness of our trained model, making it resilient to environmental confounders. § RELATED WORK Similar to the approach we will take in this paper, the approach proposed in <cit.> treats slip detection as a classification task; the authors employed a support vector machine <cit.> to detect slip using the velocity of embedded pins on the inner surface of a TacTip camera-based tactile sensor <cit.>. Labels of the training data are assigned manually based on the alignment of pin velocities. In a more recent study <cit.>, the authors modified the TacTip sensor used in <cit.> by introducing raised fingerprint-like ridges, decreasing skin thickness, and increasing pin spacing to reduce mechanical coupling between ridges and to create the traction differential, facilitating the shear displacement required for the occurrence of incipient slip. This is similar to the behavior seen on the human finger pad when sheared against an object, thus allowing the sensor to experience incipient slip. 
They used an external camera to monitor the contact in real-time for data labeling, and then employed a convolutional neural network <cit.> as a binary classifier to detect incipient slip. The GelSight technology is another camera-based tactile sensing system that uses an elastic body to establish a contact with an object, with the built-in camera recording the resulting deformation to obtain tactile data <cit.>. An approach was introduced in <cit.> for detecting incipient slip using the GelSight sensor. This method determines the degree of incipient slip by analyzing the inhomogeneity of the displacement field, which is quantified in terms of entropy. More recently, a more advanced version of the GelSight technology, called GelSlim, was proposed in <cit.>; it employed the deviation of the deformation field from a 2D planar rigid displacement field to determine slip. Compared to camera-based tactile sensors, the distributed optical sensor used in our work, the PapillArray, is less complex in terms of instrumentation<cit.>. It offers several advantages over other sensor designs, including size, temporal resolution, and compliance. A heuristic algorithm that employs the PapillArray tactile sensor to detect incipient slip is proposed in <cit.>. The approach is based on the observation that incipient slip happens when some sensor pillars stop deflecting at the same rate as the contacted object is moving in the sensor's frame of reference. Precisely, this approach detects slip by evaluating the tangential velocity drop with respect to a reference pillar, which is the pillar under the highest normal force (usually the center). In the case of rotational movements, with the center of rotation at the center pillar, the algorithm cannot detect any slip since no movement can be detected in the center pillar. This heuristic approach is further improved in <cit.> to account for rotational slips, detecting the deceleration of each pillar by comparing it to its own recent maximum velocity, and then it checks if other pillars are still in motion to confirm that the deceleration indicates an incipient slip. However, these methods may not be applicable when dealing with deformable or non-planar surfaces, or when only a subset of the pillars makes contact with the object. In such cases, establishing a dependable reference pillar to represent the object's movement in <cit.> becomes challenging; in <cit.>, it is difficult to determine whether the deceleration of pillars is caused by slip or by the shape of the object's surface. In our work, we are motivated to take a learning-based approach in developing a dedicated incipient slip detection algorithm, where we propose domain adaptation techniques to enhance the robustness of our trained model, enabling it to effectively detect incipient slip for more realistic objects and contacts, overcoming the challenges outlined above. § MATERIALS AND METHODS §.§ Hardware §.§.§ Contactile sensor Our study employed the commercial PapillArray sensor from Contactile[<https://contactile.com/>], depicted in Fig. <ref>, which is based on the concept described in <cit.>. The sensor outputs the real-time x-y-z force data experienced by each pillar at a high sampling rate of 1,000 Hz. Our training data was collected using the Dev Kit v1, while for the online evaluation of our trained model, we used the Dev Kit v2. Dev Kit v2 and Dev Kit v1 differ in size and the pillar Shore hardness. §.§.§ Robotic gripping rig Fig. <ref> displays the rig used in our study for the gripping task. 
The rig features a specialized two-finger gripper (RG2, OnRobot, Germany) with a blue adapter fixed to one of its fingers. This adapter serves to couple the Contactile PapillArray Dev Kit v2 sensor to the gripper finger. A white 3D-printed cuboid is used to extend another finger, matching the length of the finger equipped with the sensor. Moreover, a couple of ArUco markers are attached to this extended cuboid to track the gripper's pose. We replaced the original motor of the RG2 gripper with a stepper motor (MX-28, Dynamixel, US) to achieve high-frequency interruptible control of the gripper. The modified gripper was mounted on a six-axis robot arm (UR5e, Universal Robots, Denmark). §.§ Data preparation §.§.§ Collect slip data and annotate slip events for individual pillars Our training dataset is sourced from <cit.>. In brief, the training data was acquired using a six-degree-of-freedom hexapod robot (H-820, Physik Instrumente, Germany) with the Contactile PapillArray Dev Kit v1 sensor mounted on the top. A transparent acrylic plate is fixed above the sensor on a T-slot frame, and a video camera (Logitech Streamcam, Logitech, Switzerland) is positioned above the acrylic plate to capture videos of the contact between the sensor and the plate. During the data collection, the hexapod pushes the sensor vertically against the acrylic plate and then moves it laterally to induce a slip. The horizontal movement could be a translation, a rotation, or a combination of both. A total of 200 data sequences were collected, covering a range of compression levels, hexapod movement velocities, and movement directions. The recorded videos are processed using the Matlab Computer Vision Toolbox (MathWorks, USA) to track the pillar tip position. The tangential pillar tip velocity is then used to label the slip state (gross slip or not gross slip) of individual pillars. §.§.§ Collect control data When the sensor is compressed against a flat surface and moved laterally, the tangential velocity measured by each pillar will increase at first, as the sensor starts deforming, before reaching a peak velocity and subsequently decreasing its speed when the pillar stops deforming (Fig. <ref>). If a pillar stops deforming because it is undergoing incipient slip, at least one other pillar will still be deforming laterally; this is observed as an asynchronous decrease of the tangential velocity of the nine pillars (Fig. <ref> - Slip). However, if the object stops moving before any slip occurs, the tangential velocity magnitude of the nine pillars decreases almost simultaneously (Fig. <ref> - Stop). Since stop events display similar temporal features to slip events, we collected an additional dataset specifically focusing on stop events, consisting of a total of 28 data sequences. We label the data points in these sequences as other. By incorporating this dataset, the NN is less likely to confuse incipient slip and other events, thereby improving the accuracy and reliability of the NN. 
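The per-pillar labelling step described above can be sketched as follows; the velocity criterion and the threshold value are assumptions made for illustration, since the original labels were derived from the camera-tracked pillar-tip motion.

```python
import numpy as np

def label_gross_slip_per_pillar(tip_xy, fs=1000.0, speed_tol=0.5):
    """Illustrative per-pillar slip labels from tracked tip positions.

    tip_xy: array of shape (n_samples, 2), pillar-tip position in the camera/plate frame.
    A sample is labelled as slipping when the tangential tip speed exceeds `speed_tol`
    (units per second); both the criterion and the threshold are assumed, not the
    original labelling rule.
    """
    velocity = np.gradient(tip_xy, 1.0 / fs, axis=0)   # finite-difference tip velocity
    speed = np.linalg.norm(velocity, axis=1)
    return speed > speed_tol
```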
§.§.§ Annotate the incipient slip Based on the definition of incipient slip provided in Section <ref>, we annotate the incipient slip in the dataset as follows: we consider that incipient slip has occurred when at least one pillar slips with respect to the contact surface, while at least one other pillar remains stationary with respect to the contact surface. In other words, we start annotating incipient slip from the moment the first slip occurs on any pillar, and this interval continues until the time when all nine pillars have slipped. The slip label of each pillar is obtained as described in Section <ref>. It should be noted that when annotating incipient slip in the rotational data, we only consider the outer eight pillars. This is because the rotational movement is centered around the central pillar, which, for our data set, never slips by our definition (it remains in the same location on the contact area). §.§.§ Refine data sequence The sensor output exhibits variance due to noise and sporadically produces glitches that deviate significantly from the mean value, displaying sudden extreme highs or lows. To address these issues, we apply a median filter with a window size of 21 samples on the raw sensor signal, which is sampled at 1,000 Hz. We divided the raw data sequence into non-overlapping windows, with each window containing 40 samples. This division reduced the data rate to 25 Hz. This was done owing to practical limitations of the hardware and software of our system. More precisely, the maximum refresh rate of our gripper servo is ∼62 Hz, and the computation rate of our classifier is ∼40 Hz. Moreover, it is worth noting that reliable gripping does not necessarily require a high sampling frequency. Indeed, humans have a reaction time of approximately 80-120 ms (equivalent to 8.3-12.5 Hz) <cit.>, enabling us to perform most everyday gripping tasks effectively. Finally, we only consider the x-y forces on the pillars as input in NN training, while excluding the z force. During the data collection process, when the hexapod moves tangentially to induce slip, it remains stationary in the z direction. As a result, we assume that the z force does not play a significant role in detecting incipient slip in our case. It should be acknowledged that in real-world scenarios, the normal force can provide valuable information for humans to detect slip, and it is likely to vary appreciably for different gripping objectives. Therefore, another reason for excluding the z force is to prevent the NN from incorrectly learning that the z force remains relatively stable during slip events, as occurs in our data set. §.§ Training data augmentation §.§.§ Data augmentation by rotational symmetry During the data collection process, the sensor is placed at the origin of the world coordinate frame. Its horizontal surface is parallel to the x-y plane of the world frame of reference, and the side edges align with the x-y axis directions. Hence, we use a rotation transformation to augment the data; intuitively, it can be understood as rotating the initial position of the sensor around the z axis by a random angle. For each data point in a sequence, we perform the following mathematical calculations: [ F_x'; F_y' ] = [ cos(θ) -sin(θ); sin(θ) cos(θ) ]·[ F_x; F_y ], θ∈[0,2π), where F_x and F_y represent the force values along the original x-y axes, and F_x' and F_y' are the augmented force values after virtual rotation of the sensor by a randomly sampled angle, θ, from a uniform distribution of [0, 2π). 
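A minimal sketch of this augmentation is given below; the array layout (samples × pillars × 2 force components) is my assumption about how the data might be organised, while the transformation itself is exactly the random planar rotation defined above.

```python
import numpy as np

def augment_by_rotation(seq_xy, rng=np.random.default_rng()):
    """Apply one random planar rotation to the tangential force channels of a sequence.

    seq_xy: array of shape (n_samples, n_pillars, 2) holding the x and y forces.
    The same angle is used for every sample and pillar, mimicking a sensor mounted
    at a random orientation about the z axis.
    """
    theta = rng.uniform(0.0, 2.0 * np.pi)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return seq_xy @ rot.T   # [Fx', Fy']^T = R(theta) [Fx, Fy]^T for every sample and pillar
```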
§.§.§ Advanced data augmentation for domain adaptation The data used in our study was collected under idealized conditions, where a hexapod robot was used to compress the sensor against a flat surface and move laterally in a controlled manner. In this setup, the force was nearly perpendicular or parallel to the contact surface and the movement speed was nearly constant. However, in real-world robotic gripping, the conditions are expected to be quite different from this idealized setup, and the performance of the model trained on such data is expected to be poor. We identify several issues that may arise when transferring the model trained on idealized data to real-world gripping scenarios, and we propose a range of advanced data augmentation methods to address these issues in the following paragraphs. These methods are designed to generate synthetic data that mimics the real-world variability of gripping: * Issue: The slipping velocity in real-world robotic gripping is not constant, as it is influenced by various factors such as gravity, friction, and the shape of the object being gripped. However, during the data collection process, the hexapod induces slip at a constant velocity. Remedy: We employ random sampling to sample a percentage of data points from the raw data sequence, thereby generating a new data sequence. We maintain the frequency of the new sequence at the same rate as the raw sequence (1,000 Hz). This approach can simulate velocity variations to mimic real-world gripping scenarios, as it changes the magnitude differences of some temporally adjacent data points while keeping the time interval unchanged. * Issue: In some gripping scenarios, a portion of the sensor pillars may not be in contact with the object. For instance, this can occur when employing sensors to grip an object with a rounded surface or when gripping an object smaller than the sensor's contact area. Remedy: To simulate an unloaded pillar, we substitute a number of pillar data sequences with zero sequences. Noise is then added to make the generated sequence resemble a realistic sensor signal. The noise is derived from a normal distribution with a mean of 0.0 N and a standard deviation of 0.001 N. * Issue: Unlike with the hexapod, the force generated by the gripper may not be perfectly perpendicular to the x-y plane of the sensor frame of reference, and the force leading to slip may not be perfectly in this plane. This can occur when the gripped object is not flat or the mechanical linkage of the gripper flexes when applying force to the object. Remedy: First, we sampled nine individual pillar sequences from raw sensor sequences with different sensor compression levels and hexapod movement types, and then combined them to form a new sensor sequence. Second, we scaled (scale factor ranging from 0.2 to 2.0) the magnitude of values for a number of pillar sequences. Finally, we randomly permuted the position (by pillar index) of a nine-pillar sequence. Employing these techniques can encourage the NN to capture a broader and more comprehensive pattern of incipient slip (see Section <ref>), rather than only learning the limited pattern introduced by the hexapod; a schematic implementation of these augmentations is sketched below. §.§ Neural networks The key decision-making component of our incipient slip detection approach is a binary classifier. Initially, we trained an NN capable of estimating the probability of incipient slip for each time point in a sequence. Next, we set a threshold to convert the continuous probability into a binary output. 
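Returning to the augmentation remedies listed above, the following sketch shows how they might be realised as simple array operations. The array layout and sampling details are my assumptions; the scale range (0.2-2.0) and the noise parameters (mean 0.0 N, standard deviation 0.001 N) are taken from the text.

```python
import numpy as np

def simulate_speed_variation(seq, keep_fraction=0.7, rng=np.random.default_rng()):
    """Randomly subsample time steps (preserving order) to mimic non-constant slip velocity."""
    n = seq.shape[0]
    idx = np.sort(rng.choice(n, size=int(keep_fraction * n), replace=False))
    return seq[idx]

def zero_out_pillars(seq, n_unloaded=3, noise_std=0.001, rng=np.random.default_rng()):
    """Replace a few pillar channels with near-zero noise to mimic pillars not in contact."""
    seq = seq.copy()
    pillars = rng.choice(seq.shape[1], size=n_unloaded, replace=False)
    seq[:, pillars, :] = rng.normal(0.0, noise_std, size=seq[:, pillars, :].shape)
    return seq

def rescale_and_permute(seq, rng=np.random.default_rng()):
    """Scale individual pillar magnitudes (factors in 0.2-2.0) and shuffle pillar indices."""
    scales = rng.uniform(0.2, 2.0, size=(1, seq.shape[1], 1))
    return (seq * scales)[:, rng.permutation(seq.shape[1]), :]

# `seq` is assumed to be an array of shape (n_samples, n_pillars, 2) of x-y pillar forces.
```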
To enhance the accuracy of the classifier, we used an ensemble technique that trains multiple independent classifiers concurrently and aggregates their output probabilities to produce the final decision (shown in Fig. <ref>). §.§.§ Architecture Fig. <ref> illustrates the process of inputting a data sequence into the NN and obtaining the corresponding slip classification. The modified data sequence, as explained in Section <ref>, is input into an encoder. Subsequently, the encoder output is passed to a specific type of RNN called a gated recurrent unit (GRU) <cit.>. In our approach, we utilize a single layer of GRU for each propagation step, and we refer to it as a GRU cell. The hidden output from the GRU cell is generated as a combination of the current input and historical information. Moreover, an estimator is included that takes the hidden layer output from the GRU cell and converts it into a probability estimate. The ground truth label of each window is determined by the label of the last sample in the window. §.§.§ Training The ensemble model consists of Z (Z=5 in our case) independently trained classifier models. During each training iteration of each classifier model, a subset comprising a proportion λ of the sequences (λ=40% in our case) is randomly sampled with replacement from the entire training set and used for NN training. The final layer of the estimator utilizes a two-class softmax activation function, with its outputs interpreted as probabilities for the occurrence of incipient slip and other. Our chosen loss function is binary cross-entropy. §.§.§ Decision making We aggregate the output probabilities from the classifier models in the ensemble to convert the continuous probability into a binary prediction: f:=1[∑_z=1^ZM_z(x=[F_(n-1)T+1,···,F_nT])/Z >P_th], where 1[·] is an indicator function, M_z denotes the z^th classifier model in the ensemble, x denotes the input vector, P_th denotes the probability threshold, which is 50% in our work, and Z denotes the number of classifiers in the ensemble model. § EXPERIMENTS AND RESULTS We first demonstrate our method's high success rate in detecting incipient slip in both offline and online scenarios. Then, we illustrate the practical benefits of our approach by showcasing its ability to stabilize an insecure robotic grasp in a number of practical gripping tasks. §.§ Offline evaluation The entire dataset is randomly split into two subsets: a training set (∼80% of the entire dataset, comprising 160 slip-event data sequences and 23 stop-event data sequences) for model training, and a test set (∼20% of the entire dataset, consisting of 40 slip-event data sequences and 5 stop-event data sequences) for model evaluation. Both subsets are expanded through the symmetry-based augmentation method described in Section <ref>, resulting in a five-fold increase in the size of the training set and test set. Fig. <ref> displays two examples comparing the incipient slip detection results over slip and stop events. As observed, the algorithm's confidence in labeling incipient slip increases rapidly as incipient slip starts and decreases as it progresses toward gross slip. In comparison, the probability in the stop case fluctuates slightly but remains well below the threshold. We define an incipient slip detection as successful if it occurs within a 0.3 second window preceding the true labeled time point of incipient slip (to accommodate the error of the ground truth) and prior to the occurrence of gross slip.
For the stop event, a successful estimation is defined as a classification of the entire sequence as other. Fig. <ref> presents the confusion matrix, displaying the final classification results over the entire test set; our algorithm achieves an overall success rate of ∼95.6%. The results also demonstrate its effectiveness in differentiating between the slip and stop events; this indicates that our algorithm is not simply detecting the changes in the force and yank of the pillars, as mentioned earlier in Section <ref>. Our algorithm can effectively detect incipient slip in its early stages. In Fig. <ref>, we present the latency between the moment incipient slip is detected by the algorithm and the ground truth onset of incipient slip. It is evident that, on average, incipient slip can be detected within 10 ms of its initiation. §.§ Online evaluation In the online evaluation stage, we utilized the full data set for training the final deployed model. Again, to increase the amount of training data, we applied both the symmetry-based (see Section <ref>) and advanced data augmentation (see Section <ref>) techniques, resulting in a five-fold increase in the amount of data (1,140 data sequences). The online evaluation was performed on six everyday objects, depicted in Fig. <ref>. We include objects of varying surface materials, curvatures, and hardness to ensure a broad range of conditions are represented in our results. §.§.§ Validating incipient slip detections We cannot easily validate incipient slip occurrences for everyday objects as we cannot independently monitor individual pillar contacts. Hence, we choose to perform the online evaluation based on the following well-founded assumptions. The incipient slip detection is considered successful if it can be detected at any time-point between the time when the robot's movement begins (T_m) and the time when gross slip occurs (T_g); the criterion for determining the occurrence of gross slip has been arbitrarily defined as the occurrence of relative translational movement greater than 2 mm or relative rotational movement exceeding 2^∘ between the object and the robot's frame of reference. To induce a slip, the gripper first grips the object with a constant force. Then the robot moves the gripper downwards towards a rigid and stationary table surface, eliciting slip between the sensor attached to the gripper tip and the object. In each trial, the gripping force is selected from a range of 8 N to 30 N. The robot movement can be either translational, rotational or a combination of translational and rotational. The velocity (v) and acceleration (a) of the robot movement have three different levels: low (v = 4 mm.s^-1, a = 10 mm.s^-2), medium (v = 10 mm.s^-1, a = 50 mm.s^-2), and high (v = 40 mm.s^-1, a = 100 mm.s^-2). All robot movements were performed using the built-in movel function of the UR script. The tool center position and orientation are obtained using the built-in getl function of the UR robot. This function employs forward kinematics calculations based on the read joint angles. In accordance with the offline evaluation, control trials are also conducted here for each v and a combination and movement type. The purpose is to validate that the identified behavior is indeed incipient slip, rather than an event with a similar pattern, such as the stop event mentioned above. The control data involves lifting the robot arm while maintaining a secure grip using a pre-determined grip force that is sufficient to prevent any slippage.
As a result, when lifting an object, the pillars in contact undergo downward deformation due to the force of gravity; subsequently, once the object is securely held by the gripper and remains relatively motionless, these pillars will remain stationary. Here, for the sake of convenient explanation, we will also refer to this event as stop, and we label the sequence as other. To ensure a fair experiment, we add extra weight to lightweight objects to enhance their downward motion when being lifted, aiming to make the pattern of the output data sequence more like a slip event. In total, our experiment consisted of 216 trials, including 162 slip-event sequences (6 objects × 3 movements × 3 forces × 3 velocity/acceleration combinations) and 54 stop-event sequences (6 objects × 3 movements × 1 force × 3 velocity/acceleration combinations). Fig. <ref> illustrates the final validation results. Fig. <ref> shows a confusion matrix, highlighting the high success rate (∼96.8%) of our method in detecting incipient slip and its ability to differentiate between slip and stop events. Fig. <ref> demonstrates that our algorithm can detect incipient slip almost immediately upon the initiation of the movement that induces slip, with incipient slip detected within a normalized displacement range of D_norm = 0.2-0.4 (refer to the caption for the definition of D_norm). These results provide comprehensive validation of the effectiveness of our approach in detecting incipient slip in real-world gripping tasks. §.§.§ Ablation study This study aims to showcase the effectiveness of our advanced augmentation method in bridging the domain gap between the idealized data collected with the hexapod and the more realistic data encountered with the robotic gripper. To accomplish this, we employed the model training approach described in Section <ref>. However, instead of splitting the data into separate train and test sets, we trained the model using the entire dataset here, given the different objective. Subsequently, we conducted online gripping experiments, as described in Section <ref>, using this trained model. Our findings, as illustrated in Fig. <ref>, indicate that the model trained without our advanced augmentation method exhibits a notably high false positive rate in the subsequent online gripping task when compared to the results shown in Fig. <ref>, where the model was trained using our advanced augmentation method. In other words, the model trained without our advanced augmentation is unable to effectively distinguish patterns between slip and stop events. As a result, it incorrectly detects incipient slip in many stop events. §.§ Grasp stabilization after incipient slip detection This experiment aims to show the benefit of using our incipient slip detection method in practical gripping tasks. It involves lifting the robot arm while gripping the object with a pre-determined small force to ensure that slip occurs. We applied our incipient slip detection method and adjusted the grip when incipient slip was first detected to prevent the object from slipping further. In this experiment, we simulate two common scenarios that can trigger slips. The first involves gripping an object at its center of gravity with insufficient force and lifting it, causing a translational slip between the gripper and the object. The second involves gripping an object away from its center of gravity and lifting it, where rotational slip is likely to occur.
We implemented a simple grip force adaptation that responds to incipient slip detection as follows: if incipient slip is detected, the robot immediately stops, and the gripper applies a pre-determined secure force to the object. The objects used in the experiment are the same as those shown in Fig. <ref>. The experiment was conducted 36 times (6 objects × 2 scenarios (translation or rotation) × 3 repetitions). We fix ArUco markers on the objects and the gripper and use Python OpenCV to track the positions and orientations of each. We report the results in Table <ref>, which demonstrates the quick and effective detection of incipient slip by our algorithm. On average, our algorithm detects incipient slip and arrests the slippage by the time the relative translation between the object and the gripper reaches 2.5 mm and the relative rotation reaches 1.9 ^∘. Our algorithm showcases its ability to facilitate timely corrective action, preventing object falls; a demonstration video can be seen at our project website given in the abstract. § DISCUSSION Our developed algorithm enables the NN to effectively learn the incipient slip pattern from offline data and demonstrates high accuracy in both offline and online test sets. Furthermore, our algorithm enhances the security of robotic gripping. Compared to previous related works <cit.>, our algorithm offers several advantages. Firstly, our incipient slip detection algorithm incorporates a data-driven learning-based approach, minimizing the need for extensive human involvement in investigating the complex patterns of incipient slip. Secondly, the improved robustness of our algorithm enables the NN to effectively adapt to diverse domains with various types of PapillArray sensors and robotic gripping systems, despite being trained solely on data lacking heterogeneity. Therefore, our algorithm is more practical and possesses greater potential for maximizing the utilization of valuable tactile data in real-world scenarios. Thirdly, our algorithm has the ability to distinguish between incipient slip and a closely related tactile pattern that we refer to as a stop event. Notably, previous related work <cit.> has not adequately considered or addressed the stop event; however, our investigation has revealed the importance of including stop events when developing incipient slip detection algorithms due to their similar patterns but entirely different consequences. There are limitations to our work that need consideration. Firstly, the incipient slip detection could be improved by transitioning from a binary signal to a continuous warning signal. For instance, if incipient slip is detected in a small portion of the contact surface, the remaining area may still possess sufficient friction to prevent significant slippage. In such cases, the warning level of incipient slip is low and corrective actions may not be necessary. Conversely, if a significant portion of the contact surface exhibits incipient slip, the warning level should escalate and it becomes important to take appropriate corrective actions. Moreover, our current choice of force adaptation method for reacting to incipient slip falls short when compared to state-of-the-art gripping control work <cit.>. However, it is important to note that force adjustment is not the primary focus of our research in this paper, which is focused on improving incipient slip detection.
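For concreteness, the stop-and-squeeze reaction used in the grasp-stabilization experiment can be sketched as below. The classifier, robot, and gripper interfaces are passed in as placeholder callables (they are not a real driver API), and the secure-force value is an illustrative assumption rather than the value used in the experiments.

```python
import time

def stabilize_grasp(slip_probability, stop_robot, set_grip_force,
                    secure_force=25.0, threshold=0.5, poll_hz=25):
    """Halt the arm and tighten the grip as soon as incipient slip is reported.

    `slip_probability` returns the ensemble-averaged incipient-slip probability,
    while `stop_robot` and `set_grip_force` wrap the robot and gripper drivers.
    """
    while True:
        if slip_probability() > threshold:   # ensemble decision rule described earlier
            stop_robot()                     # the robot immediately stops
            set_grip_force(secure_force)     # then the gripper applies the secure force
            return
        time.sleep(1.0 / poll_hz)            # poll at roughly the 25 Hz data rate
```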
In future work, we will develop a more sophisticated force adaptation technique that incorporates our incipient slip detection method. § CONCLUSION In conclusion, this paper presents an incipient slip detection method that employs deep learning and several data augmentation techniques to improve the robustness of the trained NN. Our method is highly effective, achieves state-of-the-art performance, and enables a single pre-trained NN model to be applied across various domains and tasks. In addition, our method has the potential to be extended to other approaches that use compliant tactile sensors. Implementation details are as follows. To train the NN parameters, we use stochastic gradient descent with a momentum of 0.95 and a learning rate of 10^-3, with a batch size of 512. We also incorporate a weight decay of 10^-3 using L_2 regularization during training. The encoder NN consists of one hidden layer with 1024 units, and the output dimension is 128. The GRU cell has a hidden layer dimension of 128. The predictor network comprises two hidden layers with 256 and 128 units, respectively. To all hidden layers, we apply a rectified non-linearity <cit.> and batch normalization <cit.>. We implement our NN using PyTorch (Version 1.12.1, Meta, USA). All our experiments are conducted on a PC with an Intel i7-10875H CPU and an NVIDIA 2060 GPU. During the online evaluation stage, we utilise ROS <cit.> to facilitate communication between various components in our system. ieeetr
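As a minimal PyTorch sketch of the implementation details listed above (encoder 1024→128, GRU cell with 128 hidden units, 256/128 predictor with a two-class softmax, and SGD with momentum 0.95, learning rate 10^-3 and weight decay 10^-3), one ensemble member and the 50% decision rule could be written as follows. The flattening of each 40-sample window into a 720-dimensional vector is our assumption about how the windows are vectorized.

```python
import torch
import torch.nn as nn

class SlipClassifier(nn.Module):
    """Encoder -> GRU cell -> estimator, sized as reported in the text."""
    def __init__(self, in_dim=40 * 9 * 2, feat=128, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 1024), nn.BatchNorm1d(1024),
                                     nn.ReLU(), nn.Linear(1024, feat))
        self.gru = nn.GRUCell(feat, hidden)
        self.estimator = nn.Sequential(nn.Linear(hidden, 256), nn.BatchNorm1d(256), nn.ReLU(),
                                       nn.Linear(256, 128), nn.BatchNorm1d(128), nn.ReLU(),
                                       nn.Linear(128, 2))

    def forward(self, windows):                     # windows: (batch, n_windows, in_dim)
        h = torch.zeros(windows.shape[0], self.gru.hidden_size, device=windows.device)
        probs = []
        for t in range(windows.shape[1]):           # one GRU propagation step per window
            h = self.gru(self.encoder(windows[:, t]), h)
            probs.append(torch.softmax(self.estimator(h), dim=-1)[:, 1])
        return torch.stack(probs, dim=1)            # incipient-slip probability per window

members = [SlipClassifier() for _ in range(5)]      # Z = 5 ensemble members
optims = [torch.optim.SGD(m.parameters(), lr=1e-3, momentum=0.95, weight_decay=1e-3)
          for m in members]

def ensemble_predict(x, p_th=0.5):
    """Average the member probabilities and threshold at 50%."""
    with torch.no_grad():
        return torch.stack([m(x) for m in members]).mean(dim=0) > p_th
```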
http://arxiv.org/abs/2307.07298v1
20230714122111
3D Shape-Based Myocardial Infarction Prediction Using Point Cloud Classification Networks
[ "Marcel Beetz", "Yilong Yang", "Abhirup Banerjee", "Lei Li", "Vicente Grau" ]
cs.CV
[ "cs.CV", "cs.LG", "eess.IV" ]
Myocardial infarction (MI) is one of the most prevalent cardiovascular diseases with associated clinical decision-making typically based on single-valued imaging biomarkers. However, such metrics only approximate the complex 3D structure and physiology of the heart and hence hinder a better understanding and prediction of MI outcomes. In this work, we investigate the utility of complete 3D cardiac shapes in the form of point clouds for an improved detection of MI events. To this end, we propose a fully automatic multi-step pipeline consisting of a 3D cardiac surface reconstruction step followed by a point cloud classification network. Our method utilizes recent advances in geometric deep learning on point clouds to enable direct and efficient multi-scale learning on high-resolution surface models of the cardiac anatomy. We evaluate our approach on 1068 UK Biobank subjects for the tasks of prevalent MI detection and incident MI prediction and find improvements of ∼13% and ∼5% respectively over clinical benchmarks. Furthermore, we analyze the role of each ventricle and cardiac phase for 3D shape-based MI detection and conduct a visual analysis of the morphological and physiological patterns typically associated with MI outcomes. Clinical relevance— The presented approach enables the fast and fully automatic pathology-specific analysis of full 3D cardiac shapes. It can thus be employed as a real-time diagnostic tool in clinical practice to discover and visualize more intricate biomarkers than currently used single-valued metrics and improve predictive accuracy of myocardial infarction. Myocardial Infarction, Point Cloud Networks, Cine MRI, 3D Cardiac Shape Analysis, Ejection Fraction, Geometric Deep Learning. § INTRODUCTION Myocardial infarction (MI) is a common manifestation of coronary artery disease, the deadliest pathology in the world <cit.>. In current clinical practice, its diagnosis and treatment typically involve the acquisition of cardiac cine magnetic resonance (MR) images as the gold standard imaging modality for cardiac anatomy and function assessments <cit.>. However, current clinical decision-making is often guided by single-valued biomarkers, such as ejection fraction, which are directly calculated from 2D MR imaging (MRI) slices to evaluate cardiac anatomy and physiology on a purely global level <cit.>. Consequently, considerable research efforts have focused on developing methods that can take more comprehensive image information into account <cit.>. However, all these approaches still only approximate the true 3D structure of the heart based on 2D images or image-derived features and therefore neglect more complex and localized changes in 3D cardiac shapes, which play a crucial role in improving the understanding, prediction, and management of MI outcomes <cit.>. In this work, we investigate the utility of full 3D cardiac shape representations in the form of point clouds for the detection and prediction of MI events.
To this end, we propose a novel fully-automatic MI detection pipeline, which first reconstructs 3D cardiac anatomy point clouds from raw cine MR images and then employs targeted point cloud networks for the MI classification task. The network architectures of its individual components are based on recent advances in point cloud-based deep learning to enable efficient multi-scale feature learning directly on anatomical surface data. Deep learning approaches for point cloud data have recently been increasingly used in the field of cardiac image analysis for a variety of applications, such as 3D surface reconstruction <cit.>, image segmentation <cit.>, pathology classification <cit.>, or 3D anatomy and function modeling <cit.>. In this work, we specifically study 3D anatomical representations of the left and right ventricles at both ends of the cardiac cycle and their effect on prior and future MI. To the best of our knowledge, this is the first MRI-based point cloud deep learning approach to focus on MI prediction directly from 3D cardiac shapes. § METHODS §.§ Dataset Our dataset consists of 1068 subjects of the UK Biobank study for which cine MR images were acquired using a balanced steady-state free precession (b-SSFP) protocol <cit.>. An MI event after the imaging date (incident MI) was recorded for 235 subjects, while 294 subjects suffered an MI event prior to imaging (prevalent MI). The remaining 539 subjects were selected to be free of any diseases associated with the cardiovascular system and are used as normal control cases for our analysis. We follow the disease definition as proposed in previous work <cit.> and use the UK Biobank field ID 42,000 to identify both incident and prevalent MI subjects. §.§ Infarction Detection Pipeline Our proposed 3D anatomy-based MI detection pipeline consists of multiple fully automatic steps as illustrated in Fig. <ref>. It receives the cine MRI acquisitions at both the end-diastolic (ED) and end-systolic (ES) phases of the cardiac cycle as inputs. Based on these inputs, we reconstruct the corresponding 3D biventricular anatomy models at both phases using a multi-step cardiac surface reconstruction approach <cit.> (Fig. <ref>-A). It first segments the left ventricular (LV) endocardium, LV epicardium, and right ventricular (RV) endocardium in the short-axis and four-chamber long-axis (LAX) slices of the MRI acquisition with separate pre-trained fully convolutional neural networks <cit.> and in the two-chamber LAX images using a U-Net with adversarial training. The resulting segmentation contours of all image slices are then placed into 3D space as sparse point clouds <cit.> before a point cloud completion network is employed to correct the motion-induced slice misalignment and output dense point cloud representations of the 3D cardiac anatomy. These 3D cardiac anatomies are then used as inputs to point cloud classification networks for the tasks of prevalent MI classification (Fig. <ref>-B) and incident MI prediction (Fig. <ref>-C). For each task, we study both ES only and combined ED and ES anatomy inputs as implicit and explicit representations of 3D shape-based cardiac function. In the latter case, we concatenate the ED and ES point clouds before feeding them into the point cloud classification network, giving it direct access to all available anatomical information at both phases. 
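As an illustration of how the different input configurations can be assembled, the snippet below concatenates the reconstructed point clouds; the dictionary layout, point counts, and function name are our assumptions and not the authors' code.

```python
import torch

def assemble_input(ed_pc, es_pc, use_ed=True, use_rv=True):
    """Build a single point cloud from the reconstructed anatomies.

    `ed_pc`/`es_pc` are assumed to be dicts holding (N, 3) point sets for the
    'lv' and 'rv' surfaces; flags select ES-only vs ED+ES and LV vs LV+RV inputs.
    """
    parts = []
    for phase in ([ed_pc, es_pc] if use_ed else [es_pc]):
        parts.append(phase["lv"])
        if use_rv:
            parts.append(phase["rv"])
    return torch.cat(parts, dim=0)              # (sum of N_i, 3)

# Example with random stand-in point clouds.
ed = {"lv": torch.randn(800, 3), "rv": torch.randn(800, 3)}
es = {"lv": torch.randn(800, 3), "rv": torch.randn(800, 3)}
x = assemble_input(ed, es)                      # ED+ES biventricular input, shape (3200, 3)
```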
In addition, we investigate the utility of the RV as part of a biventricular representation of 3D cardiac shape for MI, by using first only LV anatomies, and then combined LV and RV anatomies as network inputs. We analyze the effect of these two different shape inputs for both MI classification tasks and for each of the two temporal input types, resulting in a total of 8 different experiments. §.§ Point Cloud Classification Network We choose PointNet <cit.> as the architectural basis of our point cloud classification network and adjust it for the task of binary MI classification of 3D point cloud representations of cardiac anatomy and function. To this end, we first use a sigmoid activation layer at the end of PointNet's classification branch to obtain binary prediction probabilities as the network's output. We then tune the drop-out probabilities in the last multi-layer perceptron part of the network based on a grid search procedure. We train our network using the binary cross entropy loss and the Adam optimizer with a mini-batch size of 20 and a learning rate of 1E-6 for 200 epochs on an RTX 2060S GPU with 8 GB memory. § EXPERIMENTS AND RESULTS §.§ Prevalent Infarction Detection In our first experiment, we assess whether the high-resolution 3D point cloud representations of the cardiac anatomy contain more information about prevalent MI events than corresponding global clinical benchmarks and whether a point cloud-based deep learning network is able to successfully extract them without any manual intervention. Furthermore, we analyze the importance of different cardiac substructures and cardiac phases for this task. To this end, we train four separate point cloud classification networks using the ES LV anatomy, the combined ED and ES LV anatomies, the ES biventricular anatomy, and the combined ED and ES biventricular anatomies as inputs to the respective networks. We then select widely used clinical metrics (ES volume, ejection fraction) for the LV and RV as our comparative benchmarks and input them as independent variables in four separate logistic regression models, each trained on the same dataset and task as the corresponding point cloud networks. We conduct a four-fold cross validation experiment in each case and report the results in terms of area under the receiver operating characteristic (AUROC) scores in Table <ref>. We find that 3D shape-based point cloud classification networks outperform the respective clinical benchmarks for all cardiac phases and anatomical substructures with an average relative difference of ∼10% in terms of AUROC. As expected, the combined biventricular input at ED and ES achieves the highest score and a ∼13% outperformance of its respective clinical benchmark. Following this quantitative evaluation, we further investigate which 3D anatomical shape features are typically associated with prevalent MI cases by the network. To this end, we select two representative cases corresponding to good and poor network predictions on the test dataset for both MI and normal cases and visualize them in Fig. <ref>. We observe that good network predictions for prevalent MI subjects are more likely to occur in cases of reduced myocardial thickening and smaller overall volume changes between the ED and ES phases, and vice versa for normal cases. Bad predictions more commonly happen when these associations are weakened or reversed. 
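A compact stand-in for the network and training setup described above is sketched below. The mini-batch size of 20, learning rate of 1E-6, and binary cross entropy loss follow the text, whereas the layer widths, dropout rates, and the omission of PointNet's input/feature transform nets are simplifying assumptions of ours.

```python
import torch
import torch.nn as nn

class PointNetBinary(nn.Module):
    """Simplified PointNet-style classifier with a sigmoid output for binary MI prediction."""
    def __init__(self, p_drop=0.3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(512, 256), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(256, 1), nn.Sigmoid())

    def forward(self, points):                       # points: (batch, N, 3)
        x = self.features(points.transpose(1, 2))    # per-point shared MLP
        x = torch.max(x, dim=2).values               # global max pooling
        return self.head(x).squeeze(-1)              # MI probability per subject

model = PointNetBinary()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-6)
criterion = nn.BCELoss()

# One illustrative training step on a random mini-batch of 20 subjects.
points, labels = torch.randn(20, 3200, 3), torch.randint(0, 2, (20,)).float()
loss = criterion(model(points), labels)
loss.backward(); optimizer.step(); optimizer.zero_grad()
```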
§.§ Incident Infarction Prediction In addition to detecting prevalent MI events, we investigate whether 3D anatomy-based patterns learned by point cloud networks are also beneficial for the prediction of incident MI events. We follow a similar experimental setup as for prevalent MI classification in Sec. <ref> and train four separate networks and their corresponding clinical regression benchmarks for the binary prediction of incident MI events using four-fold cross validation and the AUROC evaluation metric (Table <ref>). We again use the full 3D shapes (LV ES, LV ED+ES, LV+RV ES, LV+RV ED+ES) as neural network inputs and the respective clinical metrics (LV ES volume, LV ejection fraction, LV+RV ES volume, LV+RV ejection fraction) as independent regression variables. We find that the 3D shape-based point cloud network is able to outperform the respective clinical benchmark for both cardiac phases and ventricles by ∼4% on average. The best score is achieved by the combined ventricular anatomy at ES with a ∼5% improvement. When visually examining the results, we observe similar patterns as in our prevalent MI detection experiments (Sec. <ref>) with a generally higher probability of accurate MI prediction for smaller changes in myocardial thickness between ED and ES phases. § DISCUSSION AND CONCLUSION We have presented a novel end-to-end point cloud-based deep learning pipeline for the detection of both prior and future MI events based on 3D cardiac shapes. In our experiments, the method has been able to outperform corresponding clinical benchmarks for both classification tasks using a variety of different inputs. On the one hand, this indicates that full 3D cardiac shapes contain more infarction-related information than current single-valued clinical biomarkers, which is in line with previous works <cit.> and promises to improve both patient risk stratification and the implementation of preventive measures. On the other hand, it shows that the architectural design of our pipeline is adequately chosen to successfully extract relevant biomarkers directly from the 3D anatomical shapes. Hereby, the selected point cloud representation of cardiac surface data considerably reduces the memory requirements compared to previous voxelgrid-based approaches. Combined with the fully automatic pipeline design, this allows for faster execution speeds, wider applicability, and easy scaling to both higher resolutions and large numbers of patients in real time. In our experiments, we also find better predictive performance for prevalent compared to incident MI cases. We hypothesize that this is primarily caused by the more easily visible morphological changes of post-MI cardiac remodeling, which the network is able to capture. While the addition of RV information achieved mixed results, the inclusion of anatomies at both ED and ES phases generally improved predictive accuracy, which corroborates previous findings on the importance of 3D LV contraction information for MI detection <cit.>. While we focused on the role of 3D shapes in this study, we believe that the pipeline can be easily extended to include other patient-specific information with a potential to further improve the understanding of MI events. -12cm § ACKNOWLEDGMENT This research has been conducted using the UK Biobank Resource under Application Number ‘40161’. The authors express no conflict of interest. The work of M. Beetz was supported by the Stiftung der Deutschen Wirtschaft (Foundation of German Business). A. 
Banerjee is a Royal Society University Research Fellow and is supported by the Royal Society Grant No. URF\R1\221314. The work of A. Banerjee was partially supported by the British Heart Foundation (BHF) Project under Grant PG/20/21/35082. The works of V. Grau and L. Li were supported by the CompBioMed 2 Centre of Excellence in Computational Biomedicine (European Commission Horizon 2020 research and innovation programme, grant agreement No. 823712). L. Li was partially supported by the SJTU 2021 Outstanding Doctoral Graduate Development Scholarship. IEEEtran
http://arxiv.org/abs/2307.04903v1
20230710210822
Negative electrohydrostatic pressure between superconducting bodies
[ "Thomas J. Maldonado", "Dung N. Pham", "Alessio Amaolo", "Alejandro W. Rodriguez", "Hakan E. Türeci" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "quant-ph" ]
APS/123-QED [email protected] Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA Department of Chemistry, Princeton University, Princeton, NJ 08544, USA Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA Despite being largely limited to bulk phenomena, well-known theoretical models of superconductivity like the Bardeen–Cooper–Schrieffer and Ginzburg–Landau theories have played a key role in the development of superconducting quantum devices. In this letter, we present a hydrodynamic non-relativistic scalar electrodynamic theory capable of describing systems comprising superconducting materials of arbitrary shape and apply it to predict the existence of a negative (attractive) pressure between planar superconducting bodies. For conventional superconductors with London penetration depth λ_L≈ 100 nm, the pressure reaches tens of N/mm^2 at angstrom separations. Negative electrohydrostatic pressure between superconducting bodies Hakan E. Türeci July 10, 2023 =================================================================== In conventional superconductors, steady-state bulk phenomena are accurately described by both the Bardeen-Cooper-Schrieffer (BCS) <cit.> and Ginzburg-Landau (GL) <cit.> theories. The former provides a microscopic origin for superconductivity via the phonon-mediated pairing of electrons into bosonic quasiparticles known as Cooper pairs, while the latter provides a phenomenological description of the resulting Bose-Einstein condensate <cit.> with a macroscopic order parameter representing its mean-field wave function. The two theories were shown to be equivalent near the superconducting critical temperature <cit.>, and both reproduce the London theory <cit.>. Though the BCS theory is sufficiently general to predict time-dependent bulk phenomena, an effective macroscopic theory is desirable when such effects are triggered by electromagnetic sources in spatially inhomogeneous domains. To this end, generalized GL equations have been proposed to capture boundary and wave effects present in complex geometries <cit.>, but a consensus has not been reached on their validity far below the critical temperature, a regime all too familiar to the burgeoning area of superconducting quantum devices <cit.>. In this letter, we present and explore predictions offered by a hydrodynamic representation of non-relativistic scalar electrodynamics applied to the superconducting order parameter at zero temperature. Few attempts have been made to solve this model's equations of motion (EOM) exactly <cit.>, but simplified versions have been considered via relaxations of minimal coupling <cit.> and can be credited as the underpinning of Josephson phenomena and circuit quantum electrodynamics <cit.>. Such approximate descriptions of light-matter interactions have enabled coveted numerical analyses of superconducting circuits embedded in electromagnetic resonant structures <cit.>, but they rely on London-like boundary conditions between superconducting and non-superconducting domains that seem to harbor serious inconsistencies <cit.>. 
Our goal is not to provide a rigorous derivation of the theory (the literature contains some attempts <cit.>), but rather to demonstrate that its un-approximated form circumvents spatial partitioning and implies a pressure between planar superconducting bodies that can be measured to determine its validity. While our model shares similarities with the GL theory in that it describes the superconducting condensate with an order parameter, it differs in at least four important ways. First, in contrast to the diffusive time-dependent GL equations, our model entails wave-like dynamics implied by Schrödinger's equation. Second, we employ minimal coupling to all electromagnetic degrees of freedom, including the electric field via Gauss's law and Maxwell's correction to Ampere's law. Third, we incorporate arbitrary arrangements of both external drives and ionic backgrounds via normal (non-superconducting) source distributions. We take the latter to be static in nature, akin to the Jellium model of a metallic conductor <cit.>, but generalizable to include dynamical fluctuations for effective descriptions of phononic excitations. Fourth, in considering regimes far below the critical temperature, we omit the self-interaction term that governs the GL phase transition. In our model, nonlinear phenomena arise instead from our more general treatment of light-matter interactions, and the Higgs mechanism that yields the condensate's equilibrium number density via spontaneous symmetry breaking of the U(1) gauge group is replaced by the requirement from the EOM that the bulk superconducting charge density cancels the prescribed ionic background. Below, we present the Lagrangian and corresponding EOM at the heart of our model, along with an electrohydrodynamic representation of the Hamiltonian. Limiting our focus to electrostatic systems, we derive an electrohydrostatic condition arising from a self-consistent statement of Gauss's law and solve it numerically in the context of two planar superconducting bodies separated by vacuum. By considering variations in the system's electrohydrostatic energy with respect to the separation length, we find a negative (attractive) pressure between the two bodies that peaks at an emergent healing length. We conclude with a discussion of the length's significance. Throughout the text, we employ the covariant formulation of electromagnetism with the Minkowski metric η^μν = diag(+,-,-,-)^μν, and we write the components of a four-vector as, e.g., A^μ = (A^0, A⃗). Though the model describes non-relativistic charged superfluids, we find that a relativistic notation provides useful physical insight. We assume the effective Lagrangian governing the evolution of the order parameter ψ≡√(n)e^iθ and the electromagnetic four-potential A^μ is given by the non-relativistic theory of scalar electrodynamics under minimal light-matter coupling, ℒ = ψ^*(iħ∂/∂ t - qcA_0 - 1/2m(ħ/i∇ - qA⃗)^2)ψ - 1/4μ_0F^μνF_μν - A^μ j_μ, where F^μν≡∂^μ A^ν - ∂^ν A^μ is the electromagnetic tensor, j^ν is the four-current generated by normal charges, and q and m are the charge and mass of the superconducting charge carriers, respectively. The resulting set of EOM for the light-matter field arising from this Lagrangian couples Maxwell's equations for the four-potential and Schrödinger's equation for the order parameter, ∂_μ F^μν = μ_0 (𝒥^ν + j^ν), iħψ̇ = (1/2m(ħ/i∇ - qA⃗)^2 + qcA_0)ψ, where 𝒥^ν is the four-current generated by superconducting charges with number density n and fluid velocity v⃗.
As derived in the Supplemental Material (SM) <cit.>, the system's Hamiltonian can be expressed in an electrohydrodynamic form as ℋ = ϵ_0/2E⃗^2 + 1/2μ_0B⃗^2 + n(1/2mv^2) + ħ^2/8m n|∇ln n|^2, with E⃗ = -c∇A_0 - Ȧ⃗̇ the electric field, B⃗ = ∇×A⃗ the magnetic field, n the superconducting number density, and v ≡ |v⃗| the fluid speed. Eq. (<ref>) represents a decomposition of the total energy density into electric, magnetic, kinetic, and elastic components, respectively <cit.>. We now limit our focus to electrostatic systems, which are recovered by enforcing that all currents vanish. We first introduce the bulk superconducting number density n_s and two important length scales: the London penetration depth λ_L = √(m/(μ_0 q^2 n_s)) and the Compton wavelength λ_C = h/(mc). In terms of the normalized number densities n̅ and n̅_src, Eqs. (<ref>) reduce to the electrohydrostatic condition, n̅ + 2ξ^4∇^2(∇^2√(n̅)/√(n̅)) = n̅_src, revealing the healing length ξ given by ξ≡√(λ_Lλ_C/4π). As shown in the SM <cit.>, Eq. (<ref>) is a self-consistent statement of Gauss's law that expresses the balance between electric and elastic forces in the electrostatic distribution of the fluid: qE⃗ = ∇Q, with Q ≡ -ħ^2/2m ∇^2√(n)/√(n) the well-known quantum potential <cit.>. Because of the nonlinear term, proving the existence or uniqueness of solutions n̅ is nontrivial and remains an open problem. Moreover, for general source distributions n̅_src, solutions are most attainable by numerical methods, which can exhibit instabilities stemming from the potential divergence of the nonlinear term as the density approaches zero. We may nonetheless make some qualitative observations regarding solutions to Eq. (<ref>). First, we anticipate the asymptotic behavior n̅→n̅_src = 1 in the bulk. Second, spatial derivatives in the nonlinear term ensure C^4 continuity of n̅ over all spatial coordinates. To avoid introducing additional length scales, we focus here on piecewise-constant sources n̅_src that take values zero outside and one inside the superconducting material. To obtain the electrohydrostatic pressure between two planar superconducting bodies, we first solve the electrohydrostatic condition sourced by two finitely separated ionic backgrounds. For each separation length L ∈ [0,20ξ], we then integrate the resulting electrohydrostatic energy density, ℋ = ϵ_0/2|ħ^2/2mq ∇(∇^2√(n)/√(n))|^2_u_electric + ħ^2/8m n|∇ln n|^2_u_elastic, over all space V and compute the pressure P = -∇_L ∫_V ℋ dx. Details of the calculation are provided in Fig. <ref>, with the main conclusion being the existence of a negative (attractive) pressure between plates that vanishes in the limit of zero or infinite separation and reaches a peak for L ≈ξ. Since C^4 continuity of the number density is guaranteed by the nonlocal quantum potential <cit.>, all contributions to the electrohydrostatic energy density are finite. Consequently, unlike other quantum forces such as the Casimir pressure <cit.>, the electrohydrostatic pressure does not exhibit a divergence for infinitesimal separations. Though the total pressure is strictly negative, the electric pressure exhibits a zero-crossing which can be understood perturbatively as a screening effect.
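The following 1D sketch illustrates how the energy decomposition and the pressure P = -∇_L ∫ ℋ can be evaluated numerically; it is not the paper's calculation. The smoothed two-slab density profile is an ad-hoc placeholder (the actual profile should solve the electrohydrostatic condition), the energy expressions follow the forms quoted above (as reconstructed), and gradients are taken along a single coordinate.

```python
import numpy as np

hbar, eps0 = 1.054571817e-34, 8.8541878128e-12
m, q = 2 * 9.1093837015e-31, 2 * 1.602176634e-19     # Cooper-pair mass and charge (assumed)

def energy_terms(n, dx):
    """1D electric and elastic energy densities of a density profile n(x)."""
    s = np.sqrt(n)
    quantum = np.gradient(np.gradient(s, dx), dx) / s             # (1/sqrt n) d^2 sqrt n / dx^2
    u_electric = 0.5 * eps0 * ((hbar**2 / (2 * m * q)) * np.gradient(quantum, dx))**2
    u_elastic = (hbar**2 / (8 * m)) * n * np.gradient(np.log(n), dx)**2
    return u_electric, u_elastic

xi, ns = 1e-10, 1e28                      # healing length ~1 Angstrom, assumed bulk density
x = np.linspace(-50 * xi, 50 * xi, 4001)
dx = x[1] - x[0]

def profile(L):
    """Two bulk regions separated by a gap of width L, smoothed over xi (placeholder shape)."""
    slabs = 0.5 * (1 - np.tanh((x + L / 2) / xi)) + 0.5 * (1 + np.tanh((x - L / 2) / xi))
    return ns * slabs + 1e-6 * ns         # small offset avoids log(0)

L_values = np.linspace(2 * xi, 20 * xi, 10)
energies = np.array([np.sum(np.add(*energy_terms(profile(L), dx))) * dx for L in L_values])
pressure = -np.gradient(energies, L_values)          # energy per unit area -> N/m^2
```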
As derived in the SM <cit.>, for source distributions representing small perturbations from a uniform background, n̅_src = 1 + δn̅_src with δn̅_src≪ 1, the electrohydrostatic condition reduces to a self-sourced version of the inhomogeneous biharmonic equation arising in linear elasticity theory <cit.>, δn̅ + ξ^4∇^4δn̅≈δn̅_src, with δn̅≡n̅ - 1 the first order perturbation in the number density and ∇^4 ≡ (∇^2)^2 the biharmonic operator. In contrast to the Yukawa potential arising from Thomas-Fermi screening <cit.>, the Green's function for Eq. (<ref>), G(x⃗,x⃗') = 1/2π(ξ√(2))^3 sinc(|x⃗-x⃗'|/ξ√(2)) e^-|x⃗-x⃗'|/ξ√(2), exhibits both decaying and oscillatory behavior on the length scale of the healing length. The oscillatory component of the bulk response to a point source necessarily gives rise to interference effects during the screening of more general defect distributions. We can thus attribute increases (decreases) in electric energy density to constructive (destructive) interference of screening charges. In the electrostatic limit, the healing length represents the scale on which the superconducting number density varies in response to changes in the background. While this interpretation might suggest analogies with the well-known GL coherence length, as seen from Eq. (<ref>), the healing length and the London penetration depth are not independent parameters. As shown in the SM <cit.>, the few known sources tabulating GL parameters from independent experiments indicate that our healing length and the GL coherence length are in poor agreement for most type-I superconductors but only differ by about one order of magnitude for many type-II superconductors <cit.>. This trend lends further support to the notion that the hydrodynamic model is likely most valid at temperatures far below the critical temperature, making type-II superconductors ideal candidates for experimental validation of the theory. Furthermore, since the healing length sets the scale underlying pressure variations, materials with large London penetration depths are desirable. For a conventional (m = 2m_e, q = 2e) superconductor with λ_L≈ 100 nm, the pressure achieves a maximum value of ≈ 40 N/mm^2 for separations on the order of 1Å. The healing length can also be understood as the matter-like counterpart to the London penetration depth. As shown by way of perturbation theory in the SM <cit.>, a uniform medium's first order response to a low-power drive supports the propagation of both longitudinal k⃗_∥ and transverse k⃗_⊥ plane waves in the fluid velocity field v⃗∼k⃗_∥,⊥ e^i(k⃗·x⃗-ω_∥,⊥t), with frequencies ω_∥,⊥(k⃗) characterized by two different dispersions, ω_∥ = ω_p√(1 + (kξ)^4), ω_⊥ = ω_p√(1 + (kλ_L)^2), where ω_p≡ c/λ_L is the plasma frequency and k ≡ |k⃗| the wavenumber. The high-frequency limits of these relations manifest the longitudinal plane waves as matter-like polaritons ω_∥≈ħ k^2/(2m) and the transverse plane waves as light-like polaritons ω_⊥≈ ck. With this insight, we can thus identify the quasistatic ω_∥,⊥≪ω_p decay length of matter-like excitations with the healing length ξ≈ 1/Im[k(ω_∥)] and light-like excitations with the London penetration depth λ_L≈ 1/Im[k(ω_⊥)].
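As a quick numerical check of the scales quoted in the text, the snippet below evaluates the healing length for a conventional superconductor with λ_L ≈ 100 nm, the two dispersion branches, and the screening Green's function in the (reconstructed) sinc-times-exponential form; the parameter values are those stated above.

```python
import numpy as np

h, c, m_e = 6.62607015e-34, 2.99792458e8, 9.1093837015e-31
m = 2 * m_e                                  # conventional superconductor: m = 2 m_e
lambda_L = 100e-9                            # London penetration depth (assumed value)
lambda_C = h / (m * c)                       # Compton wavelength of the pair
xi = np.sqrt(lambda_L * lambda_C / (4 * np.pi))
omega_p = c / lambda_L

print(f"healing length xi = {xi * 1e10:.2f} Angstrom")   # ~1 Angstrom, matching the text

k = np.logspace(5, 11, 200)                  # wavenumbers (1/m)
omega_par = omega_p * np.sqrt(1 + (k * xi)**4)         # matter-like branch
omega_perp = omega_p * np.sqrt(1 + (k * lambda_L)**2)  # light-like branch

def G(r):
    """Screening Green's function of the linearized condition (decaying oscillation)."""
    u = r / (xi * np.sqrt(2))
    return np.sinc(u / np.pi) * np.exp(-u) / (2 * np.pi * (xi * np.sqrt(2))**3)
```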
Moreover, we have identified an emergent healing length at which this pressure becomes relevant and shown that it is similar to the GL coherence length but represents the matter-like counterpart to the London penetration depth. This work naturally motivates an experimental demonstration of the pressure to determine the theory's validity, but the viability of such an observation requires understanding the magnitude of other forces present at this scale (e.g., Casimir and van der Waals interactions <cit.>), which is left to future work. Our formulation may also be applied to the analysis of magnetostatic systems, such as vortices, and dynamical systems, such as excited Josephson junctions. Finally, the theory may be further developed via second quantization and expanded to incorporate quasiparticle dynamics. The authors thank Wentao Fan, Zoe Zager, and Terry Orlando for insightful discussions. This work was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under Award No. DESC0016011, the National Science Foundation under the Emerging Frontiers in Research and Innovation (EFRI) program, Award No. EFMA164098, the Defense Advanced Research Projects Agency (DARPA) under Agreements No. HR00111820046, No. HR00112090011, and No. HR0011047197, and a Princeton SEAS Innovation Grant.
http://arxiv.org/abs/2307.05233v1
20230711130716
Measurements of dense fuel hydrodynamics in the NIF burning plasma experiments using backscattered neutron spectroscopy
[ "A. J. Crilly", "D. J. Schlossberg", "B. D. Appelbe", "A. S. Moore", "J. Jeet", "S. M. Kerr", "M. S. Rubery", "B. Lahmann", "S. O'Neill", "C. J. Forrest", "O. M. Mannion", "J. P. Chittenden" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
[email protected] Centre for Inertial Fusion Studies, The Blackett Laboratory, Imperial College, London SW7 2AZ, United Kingdom I-X Centre for AI In Science, Imperial College London, White City Campus, 84 Wood Lane, London W12 0BZ, United Kingdom Lawrence Livermore National Laboratory, Livermore, California 94550, USA Centre for Inertial Fusion Studies, The Blackett Laboratory, Imperial College, London SW7 2AZ, United Kingdom Lawrence Livermore National Laboratory, Livermore, California 94550, USA Lawrence Livermore National Laboratory, Livermore, California 94550, USA Lawrence Livermore National Laboratory, Livermore, California 94550, USA Lawrence Livermore National Laboratory, Livermore, California 94550, USA Lawrence Livermore National Laboratory, Livermore, California 94550, USA Centre for Inertial Fusion Studies, The Blackett Laboratory, Imperial College, London SW7 2AZ, United Kingdom Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623, USA Sandia National Laboratories, Albuquerque, New Mexico 87185, USA Centre for Inertial Fusion Studies, The Blackett Laboratory, Imperial College, London SW7 2AZ, United Kingdom The hydrodynamics of the dense confining fuel shell is of great importance in defining the behaviour of the burning plasma and burn propagation regimes of inertial confinement fusion experiments. However, it is difficult to probe due to its low emissivity in comparison to the central fusion core. In this work, we utilise the backscattered neutron spectroscopy technique to directly measure the hydrodynamic conditions of the dense fuel during fusion burn. Experimental data is fit to obtain dense fuel velocities and apparent ion temperatures. Trends of these inferred parameters with yield and velocity of the burning plasma are used to investigate their dependence on alpha heating and low mode drive asymmetry. It is shown that the dense fuel layer has an increased outward radial velocity as yield increases showing burn has continued into re-expansion, a key signature of hotspot ignition. Comparison with analytic and simulation models show that the observed dense fuel parameters are displaying signatures of burn propagation into the dense fuel layer, including a rapid increase in dense fuel apparent ion temperature with neutron yield. Measurements of dense fuel hydrodynamics in the NIF burning plasma experiments using backscattered neutron spectroscopy J. P. Chittenden August 12, 2023 ======================================================================================================================= Recent inertial confinement fusion (ICF) experiments at the National Ignition Facility (NIF) have entered the `burning plasma'<cit.> and `ignition'<cit.> regimes. In these experiments, deuterium-tritium (DT) fuel is compressed to form a central fusing region, `hotspot', surrounded by a dense fuel layer, or shell. Within a burning plasma hotspot, alpha particle heating from DT fusion reactions dominates the input heating power from compression. Ignition marks the onset of a thermal instability started when alpha heating dominates all energy loss mechanisms. With ignition now achieved in the laboratory, understanding the hydrodynamic behaviour of the confining fuel layer and how the fusion burn propagates into the dense fuel shell is key in realising high energy gain ICF experiments. The properties of the hotspot and dense fuel layer are coupled by hydrodynamics and energy exchange through thermal conduction, radiation transport and mass ablation. 
Within a burning hotspot, the temperature and fusion reaction rate continues to increase after maximal compression and into the re-expansion phase. The inertia of the dense fuel acts to confine the hotspot as it explodes radially and decompresses. The thermal gradient between hotspot and shell drives heat conduction which results in heating and mass ablation of the shell. It follows that we expect the following characteristics of the dense fuel layer during neutron production in burning plasma and ignition regimes: lower fuel areal density, positive radial velocity (re-expansion) and elevated temperatures – these signatures are unique to burning plasma experiments and will not be present in lower performing implosions. The areal density has been measured using the down-scattered-ratio (DSR) measurements<cit.> but the latter two are difficult to directly measure. Recent theoretical<cit.> and experimental work<cit.> has shown that backscattered neutron spectroscopy can be used to measure the hydrodynamic conditions in the dense fuel layer during fusion burn. In this Letter we show, for the first time, inference of dense fuel velocity and apparent temperature in burning plasma ICF experiments which is critical to understanding the hydrodynamics and energy transport during burn propagation. When 14 MeV primary DT neutrons elastically scatter through 180^o from ions they lose the largest fraction of their energy possible for a single scattering event. This produces an observable kinematic edge in the neutron spectrum. In ICF, ions have kinetic energies of order a keV which has a detectable effect on the scattering kinematics. Crilly et al.<cit.> showed that the shape of the backscatter edges encodes information about the scattering ion velocity distribution. This information can be summarised as the scatter-averaged hydrodynamic quantities (fluid velocity, temperature, fluid velocity variance), analogous to the burn-averaging of the primary neutron spectra<cit.>. Since the scattering rate is proportional to the product of the neutron flux and ion number density, scattering-averaged quantities are strongly weighted towards the dense fuel. Neutron time of flight (ntof) detectors at the OMEGA laser facility routinely measure the backscatter edge from tritium (or nT edge) to infer areal density<cit.>. A recent study<cit.> extended this work to successfully measure scatter-averaged hydrodynamic quantities from the nT edge. With the successful demonstration at OMEGA, the possibility of performing backscatter neutron spectroscopy at the NIF was investigated, specifically on the unique burning plasma experiments. The NIF ntof suite comprises 5 collimated lines of sight<cit.>. Each line of sight includes a `Bibenzyl' scintillator detector which can measure a wide energy range, including the backscatter region. It was determined that the SPEC-SP detector (located at θ = 161.38^o, ϕ = 56.75^o) is gated sufficiently early to return a clean measurement of the nT edge. Backscattered neutron spectroscopy presents a novel challenge on the NIF. Similar to the OMEGA experiments, a forward fit methodology was used based on Mohamed et al.<cit.>. However, the higher areal densities increase the degree of multiple scattering and attenuation requiring a novel model to describe the edge spectral shape. A 6 parameter analytic model was devised for the energy spectrum based on a linear expansion of the cross section about the backscatter energy and a linear background (see supplementary material). 
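One plausible realization of such a 6-parameter edge model is sketched below: a linear expansion of the single-scatter spectrum above the backscatter energy, Gaussian smearing of the edge, and a linear background. This parameterization is our reading of the description (the exact form is given in the supplementary material), and the parameter values are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def nt_edge_model(E, E_edge, sigma, jump, slope, bkg0, bkg1):
    """Illustrative nT backscatter-edge spectrum on an energy grid E (MeV)."""
    dE = E[1] - E[0]
    single = np.where(E >= E_edge, jump + slope * (E - E_edge), 0.0)   # linearized edge
    smeared = gaussian_filter1d(single, sigma / dE)                    # Gaussian edge broadening
    return smeared + bkg0 + bkg1 * E                                   # plus linear background

# Evaluate over the 2.8-4.0 MeV fit region used in the analysis.
E = np.linspace(2.8, 4.0, 600)
spectrum = nt_edge_model(E, E_edge=3.5, sigma=0.05, jump=1.0,
                         slope=0.3, bkg0=0.1, bkg1=0.02)
```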
Combining with the detector sensitivity, Jacobian and instrument response function allowed a forward fit of nToF data: I(τ') ∝[a(τ) s(τ) dN/dEdE/dτ] ⊗ R(τ'-τ,τ) , τ = t_n c/d = c/v_n , where τ is the neutron arrival time normalised by the photon arrival time, τ' is the normalised recorded signal time, I is the measured detector signal, a is the beamline attenuation model, s is the detector sensitivity model and R is the instrument response function. The detector models that were developed<cit.> for the DD peak can be applied in the nT edge fits. The fit parameters of the energy spectrum, dN/dE, measure the scatter-averaged fluid velocity and apparent ion temperature. The measured hotspot and isotropic velocity, and apparent ion temperature projected along SPEC-SP were used to account for the effect of the shifted and broadened DT peak on the backscatter edge<cit.> and their uncertainties were included in edge parameter inference. A fit region between 2.8 and 4.0 MeV was constrained using synthetic neutron spectra from 1D radiation-hydrodynamics simulations for various levels of alpha heating<cit.>. The lower limit of the fit region is set by the exclusion of the DD peak and the upper limit extends beyond the full width of the edge to capture the edge jump height. The nToF data from two shots (210328 and 210808) are shown in <ref>. These shots have a large difference in yield, ∼ 2 × 10^16 (55 kJ) compared to ∼ 4 × 10^17 (1.35 MJ). Differences in the backscatter edge and DD peak spectra are clearly visible. The nT edge from shot 210808 appears at a later time suggesting an expanding dense fuel layer. It is also broader than 210328 suggesting a higher ion temperature or fluid velocity variance in the shell. Best fits to the data are found by minimising a χ^2 loss function. A total of 9 burning plasma experiments were analysed, using the SPEC-SP nToF data for the backscatter analysis. A Markov Chain Monte Carlo (MCMC) algorithm<cit.> was used to find the optimal fitting parameters and random uncertainties, the results of which are shown in <ref> – systematic uncertainties are discussed in the supplementary material. In the following discussion we investigate correlations between the backscatter parameters, neutron yield and hotspot velocity along SPEC-SP and relate these to the physics of burning plasmas. To aid with interpretation of the experimental results, a set of 30 1D radiation hydrodynamics Chimera simulations were used to investigate the trends due to ignition and burn propagation. Chimera is an Eulerian radiation magnetohydrodynamics code with an alpha heating model, details of ICF implosion modelling with Chimera can be found in the literature<cit.>. The simulation set was performed at a fixed hydrodynamic scale (1014μm inner radius) and HDC design but a uniform mix fraction of carbon in the fuel was varied to modify the radiative losses and consequently the yield. Firstly, it is found that there is a positive correlation between neutron yield and the dense fuel fluid velocity as shown in <ref>. As neutron yields increase above ∼ 5 × 10^16, we see the dense fuel start to explode (positive radial velocity). This reflects two changes introduced by significant alpha heating: a shift of peak neutron production (bang time) to later times and increased hotspot pressure. These can be understood in terms of the hotspot power balance. During implosion mechanical work and alpha heating are energy sources while thermal conduction and radiative losses are energy sinks<cit.>. 
As alpha heating is increased, the total hotspot power can remain positive towards stagnation when the mechanical work vanishes. With sufficiently high alpha heating, the total hotspot power can remain positive during the explosion phase when mechanical work is an energy sink. The measured dense fuel fluid velocity will correlate with the mechanical work at bang time and thus reflects the changes in hotspot power balance due to alpha heating. Alpha heating both maintains positive hotspot power to later times and increases the hotspot pressure. This increased pressure drives more rapid expansion and decompression of the shell after stagnation. Secondly, it is found that the dense fuel apparent ion temperature, T_nT, correlates with both neutron yield and the hotspot velocity projected along the SPEC-SP line of sight as shown in <ref>. We hypothesize that these correlations are due to two separate physical phenomena. The first of which is due to the effects of burn propagation. With increased yield, we expect increased hotspot heating to drive heat transport in the form of electrons and alpha particles into the dense fuel. The dense fuel is heated and ablated into the hotspot, reaching thermonuclear temperatures. As a larger fraction of the fuel mass is heated by burn propagation<cit.> we expect a corresponding increase in the scatter-averaged thermal temperature. Alpha heating also increases the hotspot pressure, P, and thus produces large acceleration, a, of the dense fuel radially outwards given a = P/ρ R_shell. Over the duration of burn, the large acceleration will cause large fluid velocity variance in the dense fuel. Since the apparent T_nT is sensitive to both the thermal temperature and fluid velocity variance of the dense fuel, we expect T_nT to increase with increased alpha heating and therefore fusion yield. As discussed in previous work<cit.>, this sensitivity makes the backscatter edge a unique diagnostic of burn propagation into the dense fuel. The backscatter edge is also sensitive to anisotropy in the dense fuel hydrodynamic conditions. As we are observing neutrons which scattered through 180^o, the diagnostic is `imaging' the back side of the implosions with respect to the detector line of sight. Low mode asymmetries have been identified as a common yield degradation mechanism in current ICF experiments<cit.>. In particular, mode L=1 asymmetries induce centre of mass motion of the imploding capsule which can be diagnosed by Doppler shifts in the primary neutron spectra<cit.>. This can be reconstructed into a hotspot velocity vector, showing the magnitude and direction of the asymmetry. Radiation hydrodynamics simulations have shown that mode 1 drive asymmetries produce co-aligned areal density asymmetries<cit.>. Following the abstraction of Hurricane et al.<cit.>, we can consider the mode 1 system as two unequal mass pistons compressing a central hotspot and inducing a centre of mass velocity. We can predict the effect on T_nT by considering the qualitative behaviours of the thermal temperature and fluid velocity of the pistons. Firstly, the more massive side will have a higher heat capacity and therefore remain colder when the hotspot begins to transport heat into it. Secondly, the more massive piston will experience a lower acceleration and thus have a lower fluid velocity variance. The opposite is then true of the less massive side, producing anisotropy in the backscatter measurement. 
Combining these two effects, one expects to measure a lower T_nT on a detector towards which the hotspot is flowing. We can produce a theoretical prediction of the anisotropy in T_nT due to the differential acceleration of the thick and thin sides of the shell (see supplementary material for the derivation): Δ T_nT ≈ -0.8 keV (v_imp/400 km/s)(v_HS/100 km/s) , where v_imp is the implosion velocity and v_HS is the hotspot velocity. Investigating these arguments further, a 2D radiation hydrodynamics simulation of a capsule driven with a mode 1 drive asymmetry was performed with the code Chimera<cit.>. The N210808 capsule parameters and tuned 1D drive were used, with an additional constant mode 1 drive asymmetry included. A summary of the findings is given in <ref>. A simulated burn-averaged hotspot velocity magnitude of 127 km/s was achieved, which is similar to the maximum hotspot velocity observed in the subset of experiments considered in this work. Simplified neutron transport calculations<cit.> were used to compute scattering ion velocity distributions, from which we obtain values for the scatter-averaged quantities parallel and anti-parallel to the hotspot velocity, i.e. for the thin and thick sides of the shell. It was found that T_nT was largest for the thin side of the shell, 5.0 keV compared to 2.8 keV for the thick side, following the intuition of the asymmetric piston model. Additionally, the thin side of the shell was found to be exploding at 220 km/s compared to 90 km/s for the thick side. From theory, simulation and experiment, it is clear that the measured T_nT depends on both the level of burn propagation and the magnitude of the mode 1 asymmetry. If we assume these effects are decoupled, we can construct an empirical model of the measured T_nT using the 1D simulation results, the asymmetric piston model of mode 1 anisotropy and an additional degree of freedom to capture higher order effects. We can then fit the following model to the experimental data: T_nT^meas. = T_nT^1D(Y_n^meas.) + α_LM v_HS^meas. + α_HM+sys. , where the best fit parameters are α_LM = -1.5 ± 0.3 keV/(100 km/s), which can be interpreted as the degree of shell anisotropy induced by low mode asymmetries, and α_HM+sys. = 0.8 ± 0.2 keV, which can be interpreted as the effect of unresolved high mode asymmetries and/or systematic uncertainty in the measurements. Previous backscatter spectroscopy results on OMEGA also identified an unresolved high mode component to the apparent dense fuel temperature, attributed to hydrodynamic instabilities in the imploding shell<cit.>. For the low mode component, the 2D Chimera simulation gives α_LM = -0.8 keV/(100 km/s), which agrees with the asymmetric piston model prediction. Experimentally, we infer an increased dense fuel anisotropy due to low mode asymmetries compared to these predictions. A potential physical explanation is that mode 1 anisotropic acceleration produces anisotropic growth of hydrodynamic instabilities<cit.>. Therefore, there will be a directional dependence to the amplitude of high mode asymmetries which are not resolved in the Chimera simulation or the asymmetric piston model. In conclusion, we present novel neutron backscatter spectroscopy measurements of the dense fuel hydrodynamic conditions at the NIF. These measurements give unique insight into the hydrodynamics of the dense fuel in burning plasma implosions. As neutron yield increases, it is seen that the dense fuel is burning during re-expansion, is exploding faster and has a raised apparent ion temperature.
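To connect the quoted -0.8 keV scaling with the supplementary-material derivation, the short check below evaluates ΔT_nT = -(2/3) m_T v_imp v_HS numerically and also encodes the empirical model fitted above. This sketch is ours: the constants and function names are our own, and T_nT_1D_keV stands in for the 1D simulation curve evaluated at the measured yield.

```python
# Numerical check (ours) of the asymmetric-piston scaling Delta T_nT = -(2/3) m_T v_imp v_HS
# against the quoted -0.8 keV at v_imp = 400 km/s, v_HS = 100 km/s, plus the empirical
# model T_nT^meas = T_nT^1D(Y) + alpha_LM * v_HS + alpha_HM+sys fitted in the text.

C_LIGHT_M_S = 2.998e8       # speed of light [m/s]
M_T_C2_KEV = 2.80892e6      # triton rest-mass energy [keV]

def delta_T_nT_keV(v_imp_kms, v_hs_kms):
    """Piston-model anisotropy of the apparent nT temperature [keV]."""
    v_imp, v_hs = v_imp_kms * 1e3, v_hs_kms * 1e3
    return -(2.0 / 3.0) * M_T_C2_KEV * v_imp * v_hs / C_LIGHT_M_S**2

def T_nT_empirical_keV(T_nT_1D_keV, v_hs_kms, alpha_LM=-1.5, alpha_HM_sys=0.8):
    """Empirical model with the best-fit parameters quoted in the text
    (alpha_LM in keV per 100 km/s of hotspot velocity, alpha_HM_sys in keV)."""
    return T_nT_1D_keV + alpha_LM * (v_hs_kms / 100.0) + alpha_HM_sys

print(delta_T_nT_keV(400.0, 100.0))  # ~ -0.83 keV, consistent with the -0.8 keV scaling above
```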
These measured trends are consistent with simulation predictions of the hotspot ignition and propagating burn regimes. Improving agreement between simulation and experiment can be used to constrain models of burn propagation and identify the dominant hydrodynamic and energy transport phenomena. Daughton et al. showed the enthalpy flux between shell and hotspot is sensitive to both the dense fuel temperature and the relative importance of electron thermal conduction, alpha particle heating and transport, and potentially fusion neutron heating<cit.>. The anisotropy in dense fuel conditions created by mode 1 asymmetries is also seen to affect the backscatter spectra. This was identified through a trend between hotspot velocity and apparent dense fuel temperature. A consistent trend was found with the asymmetric piston model<cit.> and in a 2D radiation hydrodynamic simulation with a mode 1 drive asymmetry. Therefore, a 3D picture of the dense fuel conditions can be achieved when more ntof lines of sight are used to measure the backscatter spectra. Understanding burn propagation into the dense fuel is key in achieving high gain in inertial confinement fusion and the results of this paper show that the backscatter edges encode important information on shell conditions and burn propagation. § ACKNOWLEDGEMENTS Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344. This article (No. LLNL-JRNL-846265-DRAFT) was prepared as an account of work sponsored by an agency of the U.S. government. Neither the U.S. government nor Lawrence Livermore National Security, LLC, nor any of their employees make any warranty, expressed or implied, or assume any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represent that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the U.S. government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the U.S. government or Lawrence Livermore National Security, LLC,and shall not be used for advertising or product endorsement purposes. This project is supported by the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a Schmidt Futures program. § SUPPLEMENTARY MATERIAL §.§ Spectral model At NIF scale areal densities both multiple scattering and differential attenuation are non-negligible. Therefore, devising an ab-initio model of the backscatter spectrum is challenging. We must make a number of simplifying assumptions in order to construct a suitable fitting model for the nT backscatter edge: * The attenuated nT single scatter spectrum can be expanded to linear order in the vicinity of the edge. * The distribution of backscatter energies, due to both primary neutron and scattering ion velocity variations, is Gaussian. This has been justified in previous studies<cit.>. 
* The background from other scattering processes, including multiple scattering, can be sufficiently described by a linear function of energy. Constructing the resulting neutron spectrum using these approximations yields: dN/dE = nT + Background = ∫ (c_1' + c_2' (K_n-K_n^*)) Θ(K_n-K_n^*) exp[-(K_n^*-μ_E)^2/(2σ_E^2)] dK_n^* + c_3' K_n + c_4' , where K_n is the scattered neutron kinetic energy, K_n^* is the nT backscatter energy and the Heaviside function, Θ, enforces the kinematic endpoint of the nT scattering. Performing the integration, we arrive at the final model: dN/dE = c_1 erf((K_n-μ_E)/(√(2)σ_E)) + c_2 [√(2/π) σ_E exp(-(K_n-μ_E)^2/(2σ_E^2)) + (μ_E-K_n) erf((K_n-μ_E)/(√(2)σ_E))] + c_3 K_n + c_4 , where μ_E and σ_E are the edge shape parameters and the c_i are amplitude coefficients, giving a total of 6 free parameters. The fitting parameters μ_E and σ_E can be related to scatter-averaged quantities<cit.> by manipulation of the backscatter kinematics equation (assuming non-relativistic ion velocity): K_n ≈ [(A_i-1)^2/(A_i^2+2A_iγ_n'+1)] K_n' + [2A_i(A_i^2-1)(A_i+γ_n')/(A_i^2+2A_iγ_n'+1)^2] p_n' v_i' , where K, γ, p and v are the kinetic energy, Lorentz factor, momentum and velocity with species denoted by subscript, primed and unprimed denoting pre- and post-collision values respectively, and A_i is the ratio of the ion to neutron mass. From the above, we find<cit.>: ⟨ v_i' ⟩ ≈ [(A_i^2+2A_iγ_n'+1)^2/(2A_i(A_i^2-1)(A_i+γ_n'))] (μ_E - ⟨ K_nT,0⟩)/⟨ p_n'⟩ , T_nT ≈ [(A_i^2+2A_iγ_n'+1)^3/(8A_i(A_i+1)^2(A_i+γ_n')^2)] σ_E,nT^2/⟨ K_nT,0⟩ , ⟨ K_nT,0⟩ = [(A_i-1)^2/(A_i^2+2A_iγ_n'+1)] ⟨ K_n' ⟩ , σ_E,nT^2 = σ_E^2 - [(A_i-1)^2/(A_i^2+2A_iγ_n'+1)]^2 Var(K_n') . In these calculations we neglect the variance in the pre-collision Lorentz factor as this is of order Var(K_n')/(M_n^2 c^4). The accuracy of these approximate analytic expressions was tested numerically using Monte Carlo sampling of test distributions and shown to have errors of ∼ 5 km/s and ∼ 50 eV for ⟨ v_i' ⟩ and T_nT respectively. It is important to note that the pre-collision quantities include any directional changes to the DT primary spectral moments (⟨ K_n' ⟩ and Var(K_n')), e.g. from hotspot velocity flows. §.§ Systematic uncertainty quantification Relevant systematic uncertainties can be separated into model and measurement effects. Absolute timing uncertainty will affect the velocity inference more than the temperature, which is a differential measurement. The uncertainty in the dense fuel velocity is ≈ 22 (Δ t /) ( / d) for a given timing uncertainty, Δ t, and distance to detector, d. Hatarik et al.<cit.> report a total timing uncertainty of ∼ 0.1 ns on the NIF nToF suite. The model uncertainties include uncertainty in the IRF, sensitivity and spectral models. The peak and FWHM of the IRF at 3.5 MeV are ∼ 3.5 ns and ∼ 6 ns respectively. An assumed 10% uncertainty in these IRF parameters introduces errors of ∼ 8 km/s and ∼ 20 eV in the edge parameters. The spectral model assumes a Gaussian form for the scattering ion velocity distribution, which neglects profile effects<cit.>, and invokes an empirical form for the background neutron signal. Fits to synthetic neutron spectra with known dense fuel conditions<cit.> show that these approximations introduce systematic errors in the dense fuel properties of ∼ 45 km/s and ∼ 200 eV at NIF scale areal densities. Additionally, approximations made in the scattering kinematics introduce model errors of ∼ 5 km/s and ∼ 50 eV for the dense fuel velocity and temperature respectively, as discussed in the section above.
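For concreteness, the spectral model and the (μ_E, σ_E) → (⟨v_i'⟩, T_nT) relations above can be transcribed directly into code. The sketch below is ours, not the paper's analysis pipeline: consistent units are assumed (e.g. energies in MeV and momenta in MeV/c, so that velocities come out in units of c), the DT-primary moments ⟨K_n'⟩, Var(K_n') and ⟨p_n'⟩ must be supplied from the primary-peak analysis, and the function names are our own.

```python
import numpy as np
from scipy.special import erf

# Direct transcription (ours) of the six-parameter nT edge model and of the relations
# between the fitted edge parameters (mu_E, sigma_E) and the scatter-averaged ion
# quantities. A_i is the ion-to-neutron mass ratio; primed quantities are pre-collision
# (DT primary) values, e.g. gamma_prime is the primary-neutron Lorentz factor.

def nT_edge_model(K_n, mu_E, sigma_E, c1, c2, c3, c4):
    """dN/dE for the nT backscatter edge plus linear background."""
    z = (K_n - mu_E) / (np.sqrt(2.0) * sigma_E)
    gauss = np.sqrt(2.0 / np.pi) * sigma_E * np.exp(-(K_n - mu_E) ** 2 / (2.0 * sigma_E**2))
    return c1 * erf(z) + c2 * (gauss + (mu_E - K_n) * erf(z)) + c3 * K_n + c4

def edge_to_scatter_averages(mu_E, sigma_E, K_prime_mean, K_prime_var, p_prime_mean,
                             A_i, gamma_prime):
    """Scatter-averaged fluid velocity <v_i'> and apparent temperature T_nT
    from the edge shape parameters, using the non-relativistic-ion kinematics above."""
    D = A_i**2 + 2.0 * A_i * gamma_prime + 1.0
    K_nT0 = (A_i - 1.0) ** 2 / D * K_prime_mean                      # <K_nT,0>
    sigma_nT_sq = sigma_E**2 - ((A_i - 1.0) ** 2 / D) ** 2 * K_prime_var
    v_mean = D**2 / (2.0 * A_i * (A_i**2 - 1.0) * (A_i + gamma_prime)) \
        * (mu_E - K_nT0) / p_prime_mean
    T_nT = D**3 / (8.0 * A_i * (A_i + 1.0) ** 2 * (A_i + gamma_prime) ** 2) \
        * sigma_nT_sq / K_nT0
    return v_mean, T_nT
```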
Combining all contributions in quadrature, the total systematic uncertainties were found to be 50 km/s and 210 eV for the dense fuel velocity and apparent temperature respectively. §.§ Asymmetric piston model prediction of T_nT anisotropy Using the analytic results of Hurricane et al.<cit.>, one can derive the change of velocity over the hotspot confinement time for a piston with areal density ρ R: a_stagΔ t = P_stagΔ t/(ρ R) , where a_stag is the shell acceleration at stagnation, Δ t is the confinement time and P_stag is the hotspot stagnation pressure. We note that the Lawson criterion appears in the numerator, which simplifies in the limit of negligible initial hotspot pressure (equation 17 of Hurricane et al.<cit.>): a_stagΔ t ≈ (1/√(3)) (ρ R_ave/ρ R) (1-f^2) v_imp , f ≡ v_HS/v_imp , where the areal densities of the thick and thin sides of the piston are given by: ρ R_max = ρ R_ave (1+f) , ρ R_min = ρ R_ave (1-f) , and v_HS is the hotspot velocity and v_imp is the implosion velocity. We can use this formula for the change in shell fluid velocity as a prediction of the fluid velocity variance contributing to the apparent dense fuel temperature. As in Crilly et al.<cit.>: T_nT = ⟨ T_i ⟩ + m_T Var(v⃗_f ·v̂_n) ≈ ⟨ T_i ⟩ + m_T (a_stagΔ t)^2 . If we assume the anisotropy in the measured T_nT is dominated by the differential acceleration, then the following expression can be derived: Δ T_nT = (1/2) m_T (a_stag,min^2 - a_stag,max^2) Δ t^2 = -(2/3) f m_T v_imp^2 = -(2/3) m_T v_imp v_HS . We have defined Δ T_nT here as half the full anisotropy, as then the apparent dense fuel temperature is given by T_nT = T_nT^1D ± Δ T_nT when measured parallel and anti-parallel to the hotspot velocity direction. § REFERENCES
http://arxiv.org/abs/2307.04398v1
20230710075950
The tt-geometry of permutation modules. Part II: Twisted cohomology
[ "Paul Balmer", "Martin Gallauer" ]
math.RT
[ "math.RT", "math.CT" ]
TTG-Perm II: twisted cohomology]The tt-geometry of permutation modules. Part II: Twisted cohomology We continue our analysis of the derived category of permutation modules over a finite group. In https://arxiv.org/abs/2210.08311Part I, we identified the spectrum of this tensor-triangulated category as a set. Here we describe its topology, by reducing the problem to elementary abelian groups and then using a twisted form of cohomology to realize the spectrum as a Dirac scheme. [2020]20C20; 14C15, 18F99, 18G90 First-named author supported by NSF grant DMS-2153758. Second-named author partially supported by the Max-Planck Institute for Mathematics in Bonn. The authors thank the Hausdorff Institute for Mathematics in Bonn for its hospitality during the 2022 Trimester on “Spectral Methods in Algebra, Geometry, and Topology”. [ Mohamed Ayadi July 8, 2023 ================= -- § INTRODUCTION Unless mentioned otherwise, G stands for a finite group and for a field of positive characteristic p, with p typically dividing the order of G. §.§ Executive summary We study the homotopy category of bounded complexes of permutation -modules, idempotent-completed: (G):=((G;))^♮ = ((G;)^♮). This (G) is a tensor-triangulated category (`tt-category' for short). We determined all the points of its tt-spectrum in Part I of this series; see <cit.>. The present paper aims at elucidating its topology. This knowledge will give us, among other things, the classification of thick ⊗-ideals in (G). Our main results are the following. On the one hand, we show that the space is a colimit of  over a suitable category of elementary abelian p-groups E that appear as subquotients of G. On the other hand, when E is elementary abelian, we describe the spectrum  as a `Dirac scheme' in the sense of Hesselholt-Pstrągowski <cit.>. Combining these results yields a description of the topological space  for all G. Let us now explain these ideas. §.§ The colimit theorem To discuss the tt-geometry of (G), it is instructive to keep in mind the bounded derived category of finitely generated -modules, (), which is a localization of our (G) by <cit.>. A theorem of Serre <cit.>, famously expanded by Quillen <cit.>, implies that (()) is the colimit of the (( E)), for E running through the elementary abelian p-subgroups of G; see <cit.>. The indexing category for this colimit is an orbit category: Its morphisms keep track of conjugations and inclusions of subgroups. In Part I, we proved that is set-theoretically partitioned into spectra of derived categories ( ()) for certain subquotients of G, namely the Weyl groups =(N_G K)/K of p-subgroups K≤ G. It is then natural to expect a more intricate analogue of Quillen's result for the tt-category (G), in which subgroups are replaced by subquotients. This is precisely what we prove. The orbit category has to be replaced by a category  whose objects are elementary abelian p-sections E=H/K, for p-subgroups K H≤ G. The morphisms in  keep track of conjugations, inclusions and quotients. See <Ref>. This allows us to formulate our reduction to elementary abelian groups: [<Ref>] There is a canonical homeomorphism _E∈ ((E)). The category  has been considered before, in Bouc-Thévenaz <cit.>. Every morphism in  is the composite of three special morphisms (<Ref>) E! E' → E”≃ E”' where E=E'/N is a quotient of E' (sic!), where E'≤ E” is a subgroup of E” and where E”' is a G-conjugate of E”. 
The tt-category (E) is contravariant in E∈ and the tt-functors corresponding to (<ref>) @C=2em(E”') [r]^-≃ (E”) [r]^- (E') [r]^-Ψ^N (E) yield the standard conjugation isomorphism, the standard restriction functor, and the less standard modular N-fixed-points functor Ψ^N introduced in Part I. The latter is a type of Brauer quotient that makes sense on the homotopy category of permutation modules. Such functors Ψ^N do not exist on derived or stable categories and they distinguish our results and their proofs from the classical theory. It is an open question whether they will also play a role in the generality of <cit.>. §.§ Twisted cohomology The above discussion reduces the analysis of  to the case of elementary abelian p-groups E. As often in modular representation theory, this case is far from trivial and can be viewed as the heart of the matter. So let E be an elementary abelian p-group. Our methods will rely on ⊗-invertible objects u_N in (E) indexed by the set (E)=N E[E:N]=p of maximal subgroups. These objects are of the form u_N=(0→(E/N)→(E/N)→→ 0) for p odd and u_N=(0→(E/N)→→ 0) for p=2. See <Ref>. We use these ⊗-invertibles u_N to construct a multi-graded ring (E) = ⊕_s∈ ⊕_q∈^(E)_(E)(,(q)[s]), where (q) is the ⊗-invertible ⊗_N∈(E) u_Nq(N) for every tuple , that we refer to as a `twist'. Without these twists we would obtain the standard -graded endomorphism ring ^():=⊕_s∈(,[s]) of  which, for ( E), is identified with the cohomology ^(E,k), but for (E) is reduced to the field  and therefore rather uninteresting. We call (E) the (permutation) twisted cohomology of E. Some readers may appreciate the analogy with cohomology twisted by line bundles in algebraic geometry, or with Tate twists in motivic cohomology. We can employ this multi-graded ring (E) to describe : [<Ref>] The space ((E)) identifies with an open subspace of the homogeneous spectrum of (E) via a canonical `comparison map'. The comparison map in question generalizes the one of <cit.>, which landed in the homogenous spectrum of ^() without twist. We also describe in <Ref> the open image of this map by explicit equations in (E). §.§ Dirac geometry If the reader is puzzled by the multi-graded ring (E), here is another approach based on a special open cover {H}_H≤ E of  indexed by the subgroups of E and introduced in <Ref>. Its key property is that over each open H all the ⊗-invertible objects u_N are trivial: (u_N)H≃[s] for some shift s∈ depending on H and N. For the trivial subgroup , the open 1 is the `cohomological open' of Part I, that corresponds to the image under (-) of the localization (E)( E). See <Ref>. At the other end, for H=E, we show in <Ref> that the open E is the `geometric open' that corresponds to the localization of (E) given by the geometric fixed-points functor. Compare <cit.>. For E cyclic, these two opens 1 and E are all there is to consider. But as the p-rank of E grows, there is an exponentially larger collection {H}_H≤ E of open subsets interpolating between 1 and E. This cover {H}_H≤ E allows us to use the classical comparison map of <cit.> locally. It yields a homeomorphism between each H and the homogeneous spectrum of the -graded endomorphism ring ^_H() in the localization (E)H. In compact form, this can be rephrased as follows (a Dirac scheme is to a usual scheme what a -graded ring is to a non-graded one): [<Ref>] The space ((E)), together with the sheaf of -graded rings obtained locally from endomorphisms of the unit, is a Dirac scheme. 
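Before moving on, here is a small illustration of this cover, purely for orientation and in our own notation (we write 𝒰_H for the open piece attached to the subgroup H and 1 for the tensor unit); it makes no claim beyond the trivialization property just stated, and the trivializing maps are the morphisms a_N, b_N constructed in the body of the paper. For the Klein-four group E = C_2 × C_2 (so p = 2 and 2' = 1), the index-p subgroups are the three cyclic subgroups N_1, N_2, N_3, and the cover is indexed by the five subgroups of E:

```latex
% Orientation only (our notation): the cover for the Klein-four group E = C_2 x C_2,
% where N_1, N_2, N_3 denote the three cyclic (index-2) subgroups. On each piece every
% invertible u_N is trivialized, with a shift depending on whether N contains H.
\[
  \operatorname{Spc}
    \;=\; \mathcal{U}_1 \cup \mathcal{U}_{N_1} \cup \mathcal{U}_{N_2}
          \cup \mathcal{U}_{N_3} \cup \mathcal{U}_{E},
\]
\[
  u_{N_i}\big|_{\mathcal{U}_1} \simeq \mathbb{1}[1]\ (i=1,2,3), \qquad
  u_{N_i}\big|_{\mathcal{U}_E} \simeq \mathbb{1}\ (i=1,2,3), \qquad
  u_{N_i}\big|_{\mathcal{U}_{N_j}} \simeq
    \begin{cases} \mathbb{1}[1] & i=j\\[2pt] \mathbb{1} & i\neq j. \end{cases}
\]
```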
§.§ Elementary abelian take-home Let us ponder the -graded endomorphism ring of the unit ^() for a moment. Recall from <cit.> that the spectrum of the derived category, (( E)), is homeomorphic to the homogeneous spectrum of the cohomology ring ^(E,)≅^_( E)(), see <cit.>. Such a result cannot hold for (E) since the ring ^_(E)()= is too small to provide geometric information. So we have developed two substitutes. Our first approach is to replace the usual -graded ring ^() by a richer multi-graded ring involving twists. This leads us to twisted cohomology (E) and to <Ref>. The second approach is to hope that the endomorphism ring ^(), although useless globally, becomes rich enough to control the topology locally on , without leaving the world of -graded rings. This is what we achieve in <Ref> thanks to the open cover {H}_H≤ E. As can be expected, the two proofs are intertwined. §.§ Touching ground Combining <Ref> ultimately describes the topological space  for all G, in terms of homogeneous spectra of graded rings. In <Ref> we improve and apply these results as follows. In <Ref>, we explain how to go from the `local' rings ^_H() over the open H, for each subgroup H≤ E, to the `global' topology of . In <Ref>, we give a finite presentation by generators and relations of the reduced -algebra (^_H())_, generalizing the usual one for cohomology. In <Ref>, we express  for a general finite group G as the quotient of a disjoint union of  for the maximal elementary abelian p-sections E of G by maximal relations. In <Ref>, we prove that the irreducible components of  correspond to the maximal elementary abelian p-sections of G up to conjugation. It follows that the Krull dimension of  is the sectional p-rank of G, the maximal rank of elementary abelian p-sections. (For comparison, recall that for the derived category these irreducible components correspond to maximal elementary abelian p-subgroups, not sections, and the Krull dimension is the usual p-rank.) And of course, we discuss examples. Using our techniques, we compute  for some notable groups G, in particular Klein-four (<Ref>) and the dihedral group (<Ref>). The latter will lead us to the following picture, whose precise meaning will be explained in <Ref>. Hopefully, its beauty will entice the reader to proceed beyond the present introduction. -- §.§ The toolbox The outline of the paper should now be clear from the above discussion and the #TOCtable of content presented upfront. It would be an oversell to pretend that Part II is a stand-alone paper. We import several technical results from Part I, as black boxes. The ones invoked most often are gathered in <Ref>. Here is some standard notation used throughout the text. We write for the tt-spectrum of a tt-category . For an object x∈, we write (x)=∈x∈ to denote the open complement of (x). We write p(G) for the set of p-subgroups of G and p(G)/_G for its G-orbits under G-conjugation. For each subgroup H≤ G, its Weyl group is =(N_G H)/H where N_GH=g∈ GH^g=H. Let us remind the reader of the essentials of Part I <cit.>. The canonical localization Υ_G(G)() gives us an open piece :=(())≅(^(G,)) of the spectrum, that we call the `cohomological open'. We write υ_G=(Υ_G)G for the inclusion. For every H∈p(G) we denote by Ψ^H(G)→() the modular H-fixed-points tt-functor constructed in <cit.>. It is characterized by Ψ^H((X))≃(X^H) on permutation modules and by the same formula degreewise on complexes. We write Ψ̌^H=Υ_∘Ψ^H for the composite (G)→()(()) all the way down to the derived category of . 
For every H∈p(G), the tt-prime (H)=(Ψ̌^H) is a closed point of . It is also (H)=(^H) where ^H=^_1∘Ψ^H(G)→()→(). All closed points of  are of this form by <cit.>. We write ψ^H=(Ψ^H)(())→ for the continuous map induced by Ψ^H and ψ̌^H=(Ψ̌^H)υ(())ψ^H for its restriction to the cohomological open of . If we need to specify the ambient group we write ψ^H G for ψ^H, etc. We saw in <cit.> that ψ^H is a closed map, and a closed immersion if H G is normal. Every prime ∈ is of the form =_G(H,):=ψ̌^H() for a p-subgroup H≤ G and a point ∈ in the cohomological open of the Weyl group of H, in a unique way up to G-conjugation; see <cit.>. Hence the pieces G(H):=ψ̌() yield a partition =⊔_H∈p(G)/_GG(H) into relatively open strata G(H), homeomorphic to . The crux of the problem is to understand how these strata G(H)≃ attach together topologically, to build the space . The authors thank Ivo Dell'Ambrogio and Beren Sanders for comments and suggestions. § THE COLIMIT THEOREM To reduce the determination of to the elementary abelian case, we invoke the category of elementary abelian p-sections of a finite group G. Recall that a section of G is a pair (H,K) of subgroups with K normal in H. We denote by the category whose objects are pairs (H,K) where K H are p-subgroups of G such that H/K is elementary abelian. Morphisms (H,K)→ (H',K') are defined to be elements g∈ G such that H^g∩ K'≤ K^g≤ H^g≤ H'. Composition of morphisms is defined by multiplication in G. Let us highlight three types of morphisms in . * We have an isomorphism g (H,K) (H^g,K^g) in  for every g∈ G. Intuitively, we can think of this as the group isomorphism c_g H/K H^g/K^g. * For every object (H',K') in  and every subgroup H≤ H', we have a well-defined object (H,H∩ K') and the morphism 1 (H,H∩ K')→ (H',K'). Intuitively, we think of it as the inclusion H/(H∩ K') H'/K' of a subgroup. * For (H,K) in  and a subgroup L̅=L/K of H/K, for K≤ L≤ H, there is another morphism in  associated to , namely 1(H,L)→ (H,K). This one does not correspond to an intuitive group homomorphism H/L H/K, as K is smaller than L. Instead, H/L is the quotient of H/K by L̅ H/K. This last morphism will be responsible for the modular L̅-fixed-points functor. Every morphism g (H,K)→ (H',K') in  is a composition of three morphisms of the above types <ref>, <ref> and <ref> in the following canonical way: @R=0em@C=1em H H ^gH' H' ∇ [r]^-<ref> ∇ [r]^-<ref> ∇ [r]^-<ref> ∇ K H∩^gK' ^gK' K' where the first two are given by 1∈ G and the last is given by g. In particular, the rank of the elementary abelian group H/K increases or stays the same along any morphism (H,K)→ (H',K') in this category, as this is true with <ref>, <ref> and <ref>. To every object (H,K) in , we associate the tt-category (H/K)=((H/K;)). For every morphism g (H,K)→ (H',K') in , we set K̅=K/(H∩^gK') and we define a functor of tt-categories: (g)(H'/K')c_g^*(^gH'/^gK')(H/(H∩^gK'))Ψ^K̅(H/K) using that H/(H∩^gK') is a subgroup of ^gH'/^gK' for the restriction, and using that (H/(H∩^gK'))/K̅=H/K for the modular fixed-points functor Ψ^K̅. It follows from <cit.> that (-) is a contravariant (pseudo) functor on  with values in tt-categories: . We can compose this with (-), which incidentally makes the coherence of the 2-isomorphisms accompagnying (<ref>) irrelevant, and obtain a covariant functor from  to topological spaces. Let us compare this diagram of spaces (and its colimit) with the space . 
For each (H,K)∈, we have a tt-functor (G)^G_H(H)Ψ^K(H/K) which yields a natural transformation from the constant functor (H,K)↦(G) to the functor → of (<ref>). The above Ψ^K is Ψ^K H. Since H≤ N_G K, the tt-functor (<ref>) is also ^_H/K∘Ψ^K G(G)→()→(H/K). Applying (-) to this observation, we obtain a commutative square: @C=4em@R=2em((H/K)) [r]^-ψ^K H[d]_-ρ_H/K[rd]^(.6)φ_(H,K) ((H)) [d]^-ρ_H (()) [r]_-ψ^K G whose diagonal we baptize φ_(H,K). In summary, we obtain a continuous map φ_(H,K)∈((H/K))→ whose component φ_(H,K) at (H,K) is the diagonal map in (<ref>). * Each of the maps ((g))((H/K))→((H'/K')) in the colimit diagram (<ref>) is a closed immersion. * Each of the components φ_(H,K)((H/K))→ of (<ref>) is closed and preserves the dimension of points (the Krull dimension of their closure). These statements follow from two facts, see <Ref>: When N G is normal the map ψ^N((G/N))((G)) is a closed immersion. When H≤ G is any subgroup, the map ρ_H((H))→((G)) is closed, hence lifts specializations, and it moreover satisfies `Incomparability' by <cit.>. We are now ready to prove <Ref>: For any finite group G, the map φ in (<ref>) is a homeomorphism. Each component φ_(H,K) is a closed map and thus φ is a closed map. For surjectivity, by <Ref>, we know that is covered by the subsets ψ^K(), over all p-subgroups K≤ G. Hence it suffices to know that the (ρ_E) cover =(()) as E≤ runs through all elementary p-subgroups. (Such an E must be of the form H/K for an object (H,K)∈.) This holds by a classical result of Quillen <cit.>; see <cit.>. The key point is injectivity. Take ∈((H/K)) and '∈((H'/K')) with same image in . Write =_H/K(L/K,) for suitable arguments (K≤ L≤ H, ∈H/L) and note that the map induced by 1(H,L)→ (H,K) in  sends _H/L(1,)∈((H/L)) to . So we may assume L=K. By <cit.>, the image of =_H/K(1,) in is _G(K,ρ̅()) where ρ̅H/K→ is induced by restriction. Similarly, we may assume '=_H'/K'(1,') for '∈H'/K' and we have _G(K,ρ̅())=_G(K',ρ̅'(')) in  and need to show that  and ' are identified in the colimit (<ref>). By <cit.>, the relation _G(K,ρ̅())=_G(K',ρ̅'(')) can only hold because of G-conjugation, meaning that there exists g∈ G such that K'=K^g and ρ̅'(')=ρ̅()^g in GK'. Using the map g(H,K)→ (H^g,K^g) in we may replace H,K, by H^g,K^g,^g and reduce to the case K=K'. In other words, we have two points =_H/K(1,)∈((H/K)) and '=_H'/K(1,')∈((H'/K)) corresponding to two p-subgroups H,H'≤ G containing the same normal subgroup K and two cohomological primes ∈H/K and '∈H'/K such that ρ̅()=ρ̅'(') in  under the maps ρ̅ and ρ̅' induced by restriction along H/K≤ and H'/K≤ respectively. If we let G̅==(N_G K)/K, we have two elementary abelian p-subgroups H̅=H/K and H̅'=H'/K of G̅, each with a point in their cohomological open, ∈H̅ and '∈H̅', and those two points have the same image in the cohomological open G̅ of the `ambient' group G̅. By Quillen <cit.> (or <cit.>) again, we know that this coalescence must happen because of an element g̅∈G̅, that is, a g∈ N_G K, and a prime ∈H̅∩^gH̅' that maps to  and to ' under the maps H̅∩^gH̅'̅→H̅ and H̅∩^gH̅'̅→H̅'̅ respectively. But our category  contains all such conjugation-inclusion morphisms coming from the orbit category of G. Specifically, we have two morphisms 1(H∩^gH',K)→ (H,K) and g(H∩^gH',K)→ (H',K) in , under which the point _(H∩^gH')/K(1,) maps to _H/K(1,)= and _H'/K(1,')=' respectively. This shows that =' in the domain of (<ref>) as required. By <cit.>, the space  is noetherian. Hence the topology is entirely characterized by the inclusion of primes. 
Now, suppose that  is the image under φ_(H,K)→ of some '∈ for an elementary abelian subquotient E=H/K corresponding to a section (H,K)∈. Then the only way for another prime ∈ to belong to the closure of  is to be itself the image of some point ' of  in the closure of '. This follows from <Ref>. In other words, the question of inclusion of primes can also be reduced to the elementary abelian case. § INVERTIBLE OBJECTS AND TWISTED COHOMOLOGY In this section we introduce a graded ring whose homogeneous spectrum helps us understand the topology on ((G)), at least for G elementary abelian. This graded ring, called the twisted cohomology ring (<Ref>), consists of morphisms between  and certain invertible objects. It all starts in the cyclic case. Let C_p=σ|σ^p=1 be the cyclic group of prime order p, with a chosen generator. We write C_p=[σ]/(σ^p-1) as [τ]/τ^p for τ=σ-1. Then the coaugmentation and augmentation maps become: η:1↦τ^p-1 C_pand: C_pτ↦ 0. For p odd, we denote the first terms of the `standard' minimal resolution of by u_p=(0→ C_pτ C_p→ 0). We view this in (C_p) with  in homological degree zero. One can verify directly that u_p is ⊗-invertible, with u_p-1=u_p^∨≅(0→η C_pτ C_p→ 0). Alternatively, one can use the conservative pair of functors ^H(C_p)→() for H∈{C_p , 1}, corresponding to the only closed points (C_p) and (1) of ((C_p)). Those functors map u_p to the ⊗-invertibles  and [2] in (), respectively. For p=2, we have a similar but shorter ⊗-invertible object in (C_2) u_2=(0→ C_2→ 0) again with  in degree zero. To avoid constantly distinguishing cases, we abbreviate 2':={[ 2 if p>2; 1 if p=2. ]. For any finite group G and any index-p normal subgroup N, we can inflate the ⊗-invertible u_p of <Ref> along π G G/N≃ C_p to a ⊗-invertible in (G). Let N G be a normal subgroup of index p. We define u_N:={[ ⋯→ 0→ (G/N)τ (G/N)→ 0 →⋯ if p is odd; ⋯→ 0 → 0 → (G/N) → 0 →⋯ if p=2 ].-1em with in degree zero. We also define two morphisms a_N→ u_N and b_N→ u_N[-2'] as follows. The morphism a_N is the identity in degree zero, independently of p: @C=1em@R=1em[d]_-a_N@[r]|-= ⋯[r] 0 [r] [d] [r] [d]^-1 0 [r] ⋯ u_N @[r]|-= ⋯[r] (G/N) [r]_- [r] 0 [r] ⋯ The morphism b_N is given by η→(G/N) in degree zero, as follows: @C=1em@R=1em[d]_-b_N@[r]|-= [r] [d]^-η 0 [d] u_N[-1] @[r]|-= (G/N) [r]_- and@C=1em@R=1em[d]_-b_N@[r]|-= [r] [d]^-η 0 [d] [r] 0 [d] u_N[-2] @[r]|-= (G/N) [r]_-τ (G/N) [r]_- where the target u_N is shifted once to the right for p=2 (as in the left-hand diagram above) and shifted twice for p>2 (as in the right-hand diagram). When p is odd there is furthermore a third morphism c_N→ u_N[-1], that is defined to be η→(G/N) in degree zero. This c_N will play a lesser role. In statements made for all primes p, simply ignore c_N in the case p=2 (or think c_N=0). Here is an example of such a statement, whose meaning should now be clear: The morphisms a_N and b_N, and c_N (for p odd), are inflated from G/N. Technically, u_N depends not only on an index-p subgroup N G but also on the choice of a generator of G/N, to identify G/N with C_p. If one needs to make this distinction, one can write u_π for a chosen epimorphism π G C_p. This does not change the isomorphism type of u_N, namely (π)=(π') implies u_π≅ u_π'. (We expand on this topic in <Ref>.) Let N G be a normal subgroup of index p and let q≥ 1. Then there is a canonical isomorphism in (G) u_Nq≅(⋯ 0→(G/N) τ(G/N) τ^p-1⋯τ(G/N) → 0⋯ ) where the first (G/N) sits in homological degree 2'· q and  sits in degree 0. It is an exercise over the cyclic group C_p. 
Then inflate along G G/N. The morphism b_N→ u_N[-2'] of <Ref> is a quasi-iso­morphism and the fraction ζ_N:=(b_N[2'])∘ a_N→ u_N [2'] is a well-known morphism ζ_N∈_()(,[2'])=^2'(G,) in the derived category (). For G elementary abelian, these ζ_N generate the cohomology -algebra ^(G,), on the nose for p=2 and modulo nilpotents for p odd. We sometimes write ζ^+_N=a_N/b_N for ζ_N in order to distinguish it from the inverse fraction ζ^-_N:=b_N/a_N that exists wherever a_N is inverted. Of course, when both a_N and b_N are inverted, we have ζ^-_N=(ζ^+_N)=ζ_N. The switch of factors (12) u_N⊗ u_N≅ u_N⊗ u_N can be computed directly to be the identity (over C_p, then inflate). Alternatively, it must be multiplication by a square-one element of ()=^×, hence ±1. One can then apply the tensor-functor Ψ^G(G)→(), under which u_N goes to , to rule out -1. It follows that for p odd, u_N[-1] has switch -1, and consequently every morphism → u_N[-1] must square to zero. In particular c_N⊗ c_N=0. This nilpotence explains why c_N will play no significant role in the topology. We can describe the image under modular fixed-points functors of the ⊗-invertible objects u_N and of the morphisms a_N and b_N. (We leave c_N as an exercise.) Let H G be a normal p-subgroup. Then for every index-p normal subgroup N G, we have in (G/H) Ψ^H(u_N)≅{[ u_N/H if H≤ N; if H≰N ]. and under this identification Ψ^H(a_N)={[ a_N/H if H≤ N; 1_ if H≰N ]. and Ψ^H(b_N)={[ b_N/H if H≤ N; 0 if H≰N. ]. Direct from <Ref> and Ψ^H((X))≅(X^H) for X=G/N. For restriction, there is an analogous pattern but with the cases `swapped'. Let H≤ G be a subgroup. Then for every index-p normal subgroup N G, we have in (H) ^G_H(u_N)≅{[ [2'] if H≤ N; u_N∩ H if H≰N ]. and under this identification ^G_H(a_N)={[ 0 if H≤ N; a_N∩ H if H≰N ]. and ^G_H(b_N)={[ 1_ if H≤ N; b_N∩ H if H≰N. ]. Direct from <Ref> and the Mackey formula for ^G_H((G/N)). We can combine the above two propositions and handle Ψ^H for non-normal H, since by definition Ψ^H G=Ψ^H N_G H∘^G_N_G H. Here is an application of this. Let H≤ G be a p-subgroup and N G of index p. Recall the `residue' tt-functor ^H=_1∘Ψ^H(G)→() at the closed point (H). * If H≰N then ^H(a_N) is an isomorphism. * If H≤ N then ^H(b_N) is an isomorphism. We apply <Ref> for N_G H≤ G and <Ref> for H N_G H. For (a), H≰N forces N_G H≰N and H≰N∩ N_G H. Hence Ψ^H(a_N)=Ψ^H N_G H_N_G H(a_N)=Ψ^H N_G H(a_N∩ N_G H)=1_ is an isomorphism. Similarly for (b), if N_G H≤ N then Ψ^H(b_N) is an isomorphism and if N_G H≰N it is the quasi-isomorphism b_(N∩ N_G H)/H. Thus ^H(b_N) is an isomorphism in (). Let us now prove that the morphisms a_N and b_N, and c_N (for p odd), generate all morphisms from the unit  to tensor products of u_N's. This is a critical fact. Let N_1,…,N_ℓ be index-p normal subgroups of G and abbreviate u_i:=u_N_i for i=1,…,ℓ and similarly a_i:=a_N_i and b_i:=b_N_i and c_i:=c_N_i (see <Ref>). Let q_1,…,q_ℓ∈ be non-negative integers and s∈. Then every morphism f→ u_1q_1⊗⋯⊗ u_ℓq_ℓ[s] in (G) is a -linear combination of tensor products of (a `polynomial' in) the morphisms a_i and b_i, and c_i (for p odd). We proceed by induction on ℓ. The case ℓ=0 is just _(G)^()=. Suppose ℓ≥ 1 and the result known for ℓ-1. Up to reducing to ℓ-1, we can assume that the N_1,…,N_ℓ are all distinct. Set for readability v:=u_1q_1⊗⋯⊗ u_ℓ-1q_ℓ-1[s], N:=N_ℓ, u:=u_ℓ=u_Nand q:=q_ℓ so that f is a morphism of the form f→ v⊗ uq . We then proceed by induction on q≥ 0. We assume the result known for q-1 (the case q=0 holds by induction hypothesis on ℓ). 
The proof will now depend on p. Suppose first that p=2. Consider the exact triangle in (G) @C=.9em@R=1.5em u(q-1)@[r]|-=[d]_-a_N ⋯ 0 [r] 0 [r] [d] (G/N) [r]^-τ@=[d] ⋯[r]^-τ (G/N) [r]^-@=[d] [r] @=[d] 0 ⋯ uq@[r]|-=[d]_- ⋯ 0 [r] (G/N) [r]^-τ@=[d] (G/N) [r]^-τ[d] ⋯[r]^-τ (G/N) [r]^-[d] [r] [d] 0 ⋯ (G/N)[q] @[r]|-= ⋯ 0 [r] (G/N) [r] 0 [r] ⋯[r] 0 [r] 0 [r] 0 ⋯-1em where is in degree zero. (See <Ref>.) Tensoring the above triangle with v and applying _G(,-):=_(G)(,-) we get an exact sequence @C=1.5em@R=.4em_G(,v⊗ u(q-1))[r]^-· a_N _G(,v⊗ uq)[r] _G(,v⊗(G/N)[q])@=[d] f @[r]|-⟼@[u]|-90∈ -1em f'∈_N(,^G_N(v)[q]) -.8em Our morphism f belongs to the middle group. By adjunction, the right-hand term is _N(,^G_N(v)[q]). Now since all N_1,…,N_ℓ=N are distinct, we can apply <Ref> to compute ^G_N(v) and we know by induction hypothesis (on ℓ) that the image f' of our f in this group _N(,^G_N(v)[q]) is a -linear combination of tensor products of a_N_i∩ N and b_N_j∩ N for 1≤ i,j≤ℓ-1, performed over the group N. We can perform the `same' -linear combination of tensor products of a_i's and b_j's over the group G, thus defining a morphism f”∈_G(,v[q]). We can now multiply f” with b_Nq→ u_Nq[-q] to obtain a morphism f” b_N^q in the same group _G(,v⊗ uq) that contains f. Direct computation shows that the image of this f”b_N^q in _N(,_N(v)[s]) is also equal to f'. The key point is that b_Nq is simply η→(G/N) in degree q and this η is also the unit of the ^G_N_N^G adjunction. In other words, the difference f-f” b_N^q comes from the left-hand group _G(,v⊗ u(q-1)) in the exact sequence (<ref>), reading f=f”b_N^q+f”' a_N for some f”'∈_G(,v⊗ u(q-1)). By induction hypothesis (on q), f”' is a polynomial in a_i's and b_j's. Since f” also was such a polynomial, so is f. The proof for p odd follows a similar pattern of induction on q, with one complication. The cone of the canonical map a_N u_N(q-1)→ u_Nq is not simply (G/N) in a single degree as in (<ref>) but rather the complex C:=(⋯→ 0→(G/N) τ (G/N)→ 0→⋯) with (G/N) in two consecutive degrees 2q and 2q-1. So the exact sequence @C=1.5em@R=.4em_G(,v⊗ u(q-1))[r]^-· a_N _G(,v⊗ uq)[r] _G(,v⊗ C) has a more complicated third term than the one of (<ref>). That third term _G(,v⊗ C) itself fits in its own exact sequence associated to the exact triangle (G/N)[2q-1]τ(G/N)[2q-1]→ C →(G/N)[2q]. Each of the terms _G(,v⊗(G/N)[∗])≅_N(,^G_N(v)[∗]) can be computed as before, by adjunction. The image of f in _N(,^G_N(v)[2q]) can again be lifted to a polynomial f'b^q_N:→ v⊗ uq so that the image of the difference f-f'b^q_N in _G(,v⊗ C) comes from some element in _N(,^G_N(v)[2q-1]). That element may be lifted to a polynomial f”b^q-1_Nc_N:→ v⊗ uq, and we obtain f=f'b_N^q+f”b_N^q-1c_N+f”'a_N for some f”'∈_G(,v⊗ u(q-1)) similarly as before. We can now assemble all the hom groups of <Ref> into a big graded ring. We denote the set of all index-p normal subgroups of G by =(G):=N G [G:N]=p. Let ^=^(G)={q(G)→} be the monoid of twists, tuples of non-negative integers indexed by this finite set. Consider the (×^)-graded ring (G)=(G;) := ⊕_s∈ ⊕_q∈^_(G)( , ⊗_N∈u_Nq(N)[s]). Its multiplication is induced by the tensor product in (G). We call (G) the (permutation) twisted cohomology ring of G. It is convenient to simply write (q)=⊗_N∈(u_N)q(N) for every twist q∈^(G) and thus abbreviate ^s,q(G)=(,(q)[s]). The graded ring (G) is graded-commutative by using only the parity of the shift, not the twist; see <Ref>. In other words, we have h_1· h_2= (-1)^s_1· s_2 h_2· h_1whenh_i∈^s_i,q_i(G). 
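For later reference, and purely as a restatement of the definitions above (with e_N denoting the twist that is 1 at N and 0 at every other index-p subgroup), the generating morphisms sit in the following degrees of this multi-graded ring:

```latex
% Restatement of the definitions (our summary): degrees of the generators in the
% (shift, twist)-grading H^{s,q}(G) = Hom(1, 1(q)[s]), where e_N is the twist
% concentrated at the index-p subgroup N.
\[
  a_N \colon \mathbb{1}\to u_N
     \ \in\ \mathrm{H}^{0,\,e_N}(G), \qquad
  b_N \colon \mathbb{1}\to u_N[-2']
     \ \in\ \mathrm{H}^{-2',\,e_N}(G), \qquad
  c_N \colon \mathbb{1}\to u_N[-1]
     \ \in\ \mathrm{H}^{-1,\,e_N}(G) \quad (p \text{ odd}).
\]
```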
For instance, for p odd, when dealing with the morphisms a_N and b_N, which land in even shifts of the object u_N, we do not have to worry too much about the order. This explains the `unordered' notation ζ_N=a_N/b_N used in <Ref>. The critical <Ref> gives the main property of this construction: The twisted cohomology ring (G) of <Ref> is a -algebra generated by the finitely many elements a_N and b_N, and c_N (for p odd), of <Ref>, over all N G of index p. In particular (G) is noetherian. The reader can verify by hand that (C_2)=[a_N,b_N], without relations, and that (C_p)=[a_N,b_N,c_N]/c_N^2 for p odd, where in both cases N=1 is the only N∈(C_p). This example is deceptive, for the {a_N,b_N,c_N}_N∈ usually satisfy some relations, as the reader can already check for G=C_2× C_2 for instance. We systematically discuss these relations in <Ref>. We conclude this section with some commentary. The name `cohomology' in <Ref> is used in the loose sense of a graded endomorphism ring of the unit in a tensor-triangulated category. However, since we are using the tt-category (G) and not (), the ring (G) is quite different from ^(G,) in general. In fact, (G) could even be rather dull. For instance, if G is a non-cyclic simple group then (G)=∅ and (G)=. We will make serious use of (G) in <Ref> to describe  for G elementary abelian. In that case, ^(G,) is a localization of (G). See <Ref>. By <Ref>, there is no `collision' in the twists: If there is an isomorphism (q)[s]≃(q')[s'] in (G) then we must have q=q' in ^ and s=s' in . The latter is clear from ^G(u_N)≅ in (), independently of N. We then conclude from ^N((q))≃[2'q(N)] in (), for each N∈. We only use positive twists q(N) in (<ref>). The reader can verify that already for G=C_p cyclic, the ^2-graded ring ⊕_(s,q)∈^2(,u_pq[s]) is not noetherian. See for instance <cit.> for p=2. However, negatively twisted elements tend to be nilpotent. So the ×^-graded version of (G) may yield the same topological information as our ×^-graded one. We have not pushed this investigation of negative twists, as it brought no benefit to our analysis. § AN OPEN COVER OF THE SPECTRUM In this section, we extract some topological information about  from the twisted cohomology ring (G) of <Ref> and the maps a_N and b_N of <Ref>, associated to every index-p normal subgroup N in =(G). Recall from <cit.> that we can use tensor-induction to associate to every subgroup H≤ G a Koszul object [G]H=_H^G(0→1→ 0). It generates in (G) the tt-ideal (^G_H), see <cit.>: [G]H_(G)=(^G_H(G)→(H)). Let N G be a normal subgroup of index p. Then we have: * In (G), the object (a_N) generates the same thick subcategory as (G/N). In particular, ((a_N))=((G/N)). * In (G), the object (b_N) generates the same thick tensor-ideal as [G]N. In particular, ((b_N))=([G]N)=((^G_N)). For p=2, we have (a_N)=(G/N)[1] so the first case is clear. For p odd, we have (a_N)[-1]≃(0→(G/N)τ(G/N)→ 0)=(τ(G/N)). Hence (a_N)∈((G/N)). Conversely, since τ^p=0, the octahedron axiom inductively shows that (G/N)∈((τ(G/N))). This settles <ref>. For <ref>, the complex s:=(b_N)[2'] becomes split exact when restricted to N since it is inflated from an exact complex on G/N. In degree one we have s_1=(G/N), whereas s_0=. Hence <cit.> tells us that the complex s generates the tt-ideal (^G_N(G)→(N)). We conclude by (<ref>). Let N G be of index p. Then (a_N)⊗(b_N)=0. By <Ref> it suffices to show (G/N)⊗[G]N=0. By Frobenius, this follows from ^G_N([G]N)=0, which holds by (<ref>). 
We now relate the spectrum of (G) to the homogeneous spectrum of (G), in the spirit of <cit.>. The comparison map of <cit.> is denoted by ρ^ but we prefer a more descriptive notation (and here, the letter ρ is reserved for ()). There is a continuous `comparison' map _G((G)) mapping a tt-prime  to the ideal generated by those homogeneous f∈(G) whose cone does not belong to . It is characterized by the fact that for all f _G(Z(f))=((f))=f is not invertible in (G)/ where Z(f)=f∈ is the closed subset of ((G)) defined by f. The fact that the homogeneous ideal _G() is prime comes from <cit.>. Equation (<ref>) is essentially a reformulation of the definition. The usual notation for Z(f) would be V(f), and D(f) for its open complement. Here, we already use V for G and for G(H), and the letter D is certainly overworked in our trade. So we stick to Z(f) and Z(f)^c. In view of <Ref>, for any f, the open subset of  (f):=((f))=f is invertible in (G)/ is the preimage by _G→((G)) of the principal open Z(f)^c=f∉. It is the open locus of  where f is invertible. In particular, our distinguished elements a_N and b_N (see <Ref>) give us the following open subsets of , for every N∈(G): (a_N)=_G(Z(a_N)^c), the open where a_N is invertible, and (b_N)=_G(Z(b_N)^c), the open where b_N is invertible. Since (c_N)^2=0 by <Ref>, we do not have much use for (c_N)=∅. With notation as above, we have for every N G of index p (a_N)∪(b_N)=. We compute (a_N)∪(b_N)=((a_N))∪((b_N))=((a_N)⊗(b_N))=(0_(E))=, using <Ref>. Every object u_N is not only ⊗-invertible in (G) but actually locally trivial over , which is a stronger property in general tt-geometry. Indeed, <Ref> tells us that around each point of , either u_N becomes isomorphic to  via a_N, or u_N becomes isomorphic to [2'] via b_N. This holds for one invertible u_N. We now construct a fine enough open cover of  such that every u_N is trivialized on each open. Let H≤ G be a p-subgroup. Define an open of  by H=[G]H:=⋂_N∈H≰N(a_N) ∩⋂_N∈H≤ N(b_N). Then the closed point (H)∈ belongs to this open H. Consequently {H}_H∈p(G) is an open cover of . The point (H)=(^H) belongs to H by <Ref>. It follows by general tt-geometry that {H}_H is a cover: Let ∈; there exists a closed point in , that is, some (H) that admits  as a generalization; but then (H)∈H forces ∈H since open subsets are generalization-closed. For a p-group, we now discuss H at the two extremes H=1 and H=G. Let G be a p-group and F=F(G)=∩_N∈(G)N be its Frattini subgroup. So F G and G/F is the largest elementary abelian quotient of G. Let G be a p-group with Frattini subgroup F. The closed complement of the open [G]1 is the support of [G]F, the closed support of the tt-ideal (^G_F) of (G). In particular, if G is elementary abelian then [G]1 is equal to the cohomological open G=(( G))≅(^(G,)). By definition, 1=∩_N∈(b_N). By <Ref>, its closed complement is ∪_N∈([G]N). By <cit.>, for every K≤ G ([G]K)=(H,)H≰_G K (taking all possible ∈). It follows that our closed complement of 1 is ∪_N∈(G)([G]N) (<ref>)(H,)∃ N∈(G) such that H≰_G N =(H,)H≰∩_N∈(G)N =(H,)H≰F(<ref>)([G]F). The statement with (^G_F) then follows from (<ref>). Finally, if G is elementary abelian then F=1 and (^G_1)=(G) is the tt-ideal of acyclic complexes. The complement of its support is ((G)/(G))=(())=G. In the above proof, we showed that ∪_N∈(N)=(F) thanks to the fact that ∩_N∈N=F. So the very same argument gives us: Let G be a p-group and let N_1,…,N_r∈(G) be some index-p subgroups such that N_1∩⋯∩ N_r is the Frattini subgroup F. 
(This can be realized with r equal to the p-rank of G/F.) Then [G]1=∩_i=1^r(b_N_i) already. Hence if ∈(b_N_i) for all i=1,…,r then ∈(b_N) for all N∈(G). Let us turn to the open [G]H for the p-subgroup at the other end: H=G. Let G be a p-group. Then the complement of the open [G]G is the union of the images of the spectra ((H)) under the maps ρ_H=(_H), over all the proper subgroups H≨ G. By <Ref>, the closed complement of G=∩_N∈(a_N) equals ∪_N∈((G/N)). For every H≤ G, we have ((G/H))=(ρ_H); see <cit.> if necessary. This gives the result because restriction to any proper subgroup factors via some index-p subgroup, since G is a p-group. Let G be a p-group. This open complement G of ∪_H≨ G(ρ_H) could be called the `geometric open'. Indeed, the localization functor Φ^G(G)(G)/(G/H)| H≨ G corresponding to G is analogous to the way the geometric fixed-points functor is constructed in topology. For more on this topic, see <cit.>. For G not a p-group, the open G is not defined (we assume H∈p(G) in <Ref>) and the `geometric open' is void anyway as we have (ρ_P)= for any p-Sylow P≨ G. The strategy to analyze non-p-groups is to first descend to the p-Sylow, using that _P is faithful. We saw in <Ref> that the complement of G is covered by the images of the closed maps ρ_H=(_H) for H≨ G. We could wonder whether another closed map into  covers G itself. The answer is the closed immersion ψ^F((G/F))((G)) induced by the modular fixed-points functor Ψ^F with respect to the Frattini subgroup F G. This can be deduced from the results of <Ref> or verified directly, as we now outline. Indeed, every prime =_G(K,) for K≤ G and ∈ comes by Quillen from some elementary abelian subgroup E=H/K≤=(N_G K)/K. One verifies that unless N_G K=G and H=G, the prime  belongs to the image of ρ_G' for a proper subgroup G' of G. Thus if  belongs to G, we must have E=H/K=G/K for K G. Such a K must contain the Frattini and the result follows. § TWISTED COHOMOLOGY UNDER TT-FUNCTORS Still for a general finite group G, we gather some properties of the twisted cohomology ring (G) introduced in <Ref>. We describe its behavior under specific tt-functors, namely restriction, modular fixed-points and localization onto the open subsets [G]H. Recall that =(G)=N G[G:N]=p. Twisted cohomology (G) is graded over a monoid of the form . The ring homomorphisms induced by the above tt-functors will be homogeneous with respect to a certain homomorphism γ on the corresponding grading monoids, meaning of course that the image of a homogeneous element of degree (s,q) is homogeneous of degree γ(s,q). The `shift' part (in ) is rather straightforward. The `twist' part (in ^ℓ) will depend on the effect of said tt-functors on the u_N. Let us start with modular fixed-points, as they are relatively easy. Let H G be a normal subgroup. By <Ref>, the tt-functor Ψ^H(G)→(G/H) maps every u_N for N≱H to , whereas it maps u_N for N≥ H to u_N/H. This defines a homomorphism of grading monoids γ=γ_Ψ^H×^(G)→×^(G/H) given by γ(s,q)=(s,q̅) where q̅(N/H)=q(N) for every N/H∈(G/H). In other words, q↦q̅ is simply restriction ^(G)^(G/H) along the canonical inclusion (G/H)(G). By <Ref>, for every twist q∈^(G), we have a canonical isomorphism Ψ^H((q))≅(q̅). Therefore the modular fixed-points functor Ψ^H defines a ring homomorphism also denoted @R=.1emΨ^H-2em (G) [r]^- (G/H) (f(q)[s]) @|->[r] (Ψ^H(f)Ψ^H((q)[s]) ≅(q̅)[s]) which is homogeneous with respect to γ_Ψ^H in (<ref>). Restriction is a little more subtle, as some twists pull-back to non-trivial shifts. 
Let α G'→ G be a group homomorphism. Restriction along α defines a tt-functor α^*=^α_G'∘^G_α(G)→(α)→(G'). Combining <Ref> for ^G_α with the obvious behavior of the u_N under inflation (by construction), we see that α^*(u_N)≅[2'] if N≥α and α^*(u_N)≅ u_α(N) if N≱α (which is equivalent to α(N)∈(G')). Hence for every (s,q)∈×^(G) we have a canonical isomorphism α^*((q)[s])≅(q')[s'] where s'=s+2'∑_N≥αq(N) and q'(G')→ is defined for every N'∈(G') as q'(N')=∑_N∈(G) s.t. α(N)=N'q(N). (In particular q'(N')=0 if N'≱(α).) These formulas define a homomomorphism (s,q)↦ (s',q') of abelian monoids that we denote γ=γ_α^*×^(G)→×^(G'). The restriction functor α^* defines a ring homomorphism @R=.1emα^*-2em (G) [r] (G') (f(q)[s]) @|->[r] (α^*(f)α^*((q)[s]) ≅(q')[s']) which is homogeneous with respect to γ_α^* in (<ref>). For instance, α G G/H can be the quotient by a normal subgroup H G. In that case α^* is inflation, which is a section of modular fixed-points Ψ^H. It follows that the homomorphism Ψ^H in (<ref>) is split surjective. (This also means that the composed effect on gradings γ_Ψ^H∘γ_α^*=𝕀 is trivial.) Without changing the group G, we can also localize the twisted cohomology ring (G) by restricting to an open H of , as defined in <Ref>. Recall the elements a_N,b_N∈(G) from <Ref>. Let H≤ G be a p-subgroup. Let S_H⊂(G) be the multiplicative subset of the graded ring (G) generated by all a_N such that H≰N and all b_N such that H≤ N, for all N∈(G). Recall that the a_N and b_N are central by <Ref>. We define a -graded ring :=((G)[S_H]) as the twist-zero part of the localization of (G) with respect to S_H. Explicitly, the homogeneous elements of  consist of fractions f/g where f,g∈(G) are such that g→(q)[t] is a product of the chosen a_N,b_N in S_H, meaning that (q)[t] is the ⊗-product of the corresponding u_N for a_N and u_N[-2'] for b_N, whereas f→(q)[s] is any morphism in (G) with the same -twist q as the denominator. Thus is -graded by the shift only: The degree of f/g is the difference s-t between the shifts of f and g. It follows from <Ref> (and <Ref>) that the -graded ring is generated as a -algebra by the elements ζ^+_N, ξ^+_NH≤ N∪ζ^-_N, ξ^-_NH≰N where ζ^+_N=a_N/b_N is of degree +2' and ζ^-_N=b_N/a_N of degree -2' as in <Ref>, and where (only for p odd) the additional elements ξ^±_N are ξ^+_N:=c_N/b_N of degree +1, and ξ^-_N:=c_N/a_N of degree -1. (For p=2, simply ignore the ξ^±_N.) In general, all these elements satisfy some relations; see <Ref>. Beware that here ξ^-_N is never the inverse of ξ^+_N. In fact, both are nilpotent. In fact, we can perform the central localization of the whole category (G) (H)=_G(H):=(G)[S_H] with respect to the central multiplicative subset S_H of <Ref>. The tt-category (H)=(G)[S_H] has the same objects as (G) and morphisms x→ y of the form f/g where g→ u belongs to S_H, for u a tensor-product of shifts of u_N's according to g (as in <Ref>) and where f x→ u⊗ y is any morphism in (G) with `same' twist u as the denominator g. This category (G)[S_H] is also the Verdier quotient of (G) by the tt-ideal (g)g∈ S_H and the above fraction f/g corresponds to the Verdier fraction x [r]^-f u ⊗ y y. [l]_-g⊗ 1 See <cit.> if necessary. The -graded endomorphism ring ^_(H)() of the unit in (H)=(G)[S_H] is thus the -graded ring (S_H(G))= of <Ref>. There is a general localization U of a tt-category  over a quasi-compact open U⊆ with closed complement Z. It is defined as U=(/_Z)^♮. 
If we apply this to U=H, we deduce from (<ref>) that U=∩_g∈ S_H(g) has closed complement Z=∪_g∈ S_H((g)) whose tt-ideal (G)_Z is the above (g)g∈ S_H. In other words, the idempotent-completion of our _G(H)=(G)[S_H] is exactly (G)H. As with any localization, we know that (_G(H)) is a subspace of , given here by U=∩_g∈ S_H(g)=H. For G=E elementary abelian and the subgroup H=1, the category _E(1)=(E)1 in <Ref> is simply the derived category _E(1)=(E), by <Ref>. In that case, E(1)≅^(E;) is the actual cohomology ring of E. Since H=1≤ N for all N, we are inverting all the b_N and no a_N. As noted in <Ref>, we obtain the same ring (the cohomology of E) as soon as we invert enough b_N_1,…,b_N_r, namely, as soon as N_1∩⋯∩ N_r=1. We again obtain an induced homomorphism of multi-graded rings. Let H≤ G be a p-subgroup and consider the above central localization (-)H(G)_G(H). As explained in <Ref>, the morphisms a_N and b_N give us explicit isomorphisms (u_N)H≅ if N≱H and (u_N)H≅[2'] if N≥ H. This yields a homomorphism on the grading γ=γ_H×^(G)→ defined by γ(s,q)=s+2'∑_N≥ Hq(N) and we obtain a ring homomorphism (-)H(G) ^__G(H)()= which is homogeneous with respect to the homomorphism γ_H of (<ref>). It is easy to verify that the continuous maps induced on homogeneous spectra by the ring homomorphisms constructed above are compatible with the comparison map of <Ref>. In other words, if F(G)→(G') is a tt-functor and if the induced homomorphism F(G)→(G') is homogenous with respect to γ=γ_F×^(G)→×^(G'), for instance F=Ψ^H or F=α^* as in <Ref>, then the following square commutes: @C=4em((G')) [r]^-(F)[d]^-_G' ((G)) [d]^-_G ((G')) @->[r]^-(F) ((G)). This follows from F((f))≃(F(f)) in (G') for any f∈(G). Similarly, for every H∈p(G) the following square commutes -2em [G]H=(_G(H)) @^(->[r] [d]_-_(H) ((G)) [d]^-_G () @^(->[r] ((G)) where the left-hand vertical map is the classical comparison map of <cit.> for the tt-category _G(H) and the ⊗-invertible [1]. The horizontal inclusions are the ones corresponding to the localizations with respect to S_H, as in <Ref>. In fact, it is easy to verify that the square (<ref>) is cartesian, in view of [G]H=⋂_g∈ S_H(g)=⋂_g∈ S_H_G(Z(g)^c) by <Ref> and (<ref>). We can combine the above functors. Here is a useful example. Let H G be a normal subgroup such that G/H is elementary abelian. Then we have a commutative square @C=1.5em-2emG/H=(((G/H))) [r]^-ψ̌^H[d]_-_((G/H))^-≃ ((G)) [d]^-_G (^(G/H,k)) @^(->[r] ((G)) and in particular, its diagonal _G∘ψ̌^H is injective. The functor Ψ̌^H(G)→((G/H)) is the modular fixed-points functor Ψ^H(G)→(G/H) composed with Υ_G/H(G/H)((G/H)), which is the central localization (-)1 over the cohomological open, by <Ref>; see <Ref>. Thus we obtain two commutative squares (<ref>) and (<ref>): @C=1.5em(((G/H))) @^(->[r]^-υ_G/H[d]_-_((G/H))^-≃ ((G/H)) [d]^-_G/H@->[r]^(.5)ψ^H ((G)) [d]^-_G (^(G/H,k)) @^(->[r]^- ((G/H)) @^(->[r]^- ((G)) the left-hand one for the central localization of (G/H) over the open [G/H]1=G/H, and the right-hand one for the tt-functor Ψ^H(G)→(G/H). Note that the bottom-right map is injective because the ring homomorphism in question, Ψ^H(G)→(G/H) defined in (<ref>), is surjective by <Ref>. § THE ELEMENTARY ABELIAN CASE In this central section, we apply the general constructions of <Ref> in the case of G=E elementary abelian. We start with a key fact that is obviously wrong in general (for a non-cyclic simple group, the target space is just a point). Let E be an elementary abelian group. 
The comparison map _E((E))→((E)) of <Ref> is injective. Let H,N≤ E with [E:N]=p. Suppose first that H≰N. We use the map ψ̌^H=(Ψ̌^H)E/H→ of <Ref>. Then [ (ψ̌^H)((b_N)) = (ψ̌^H)(((b_N))) by definition, see (<ref>); = ((Ψ̌^H(b_N))) by general tt-geometry; = ((0→)) by <Ref>; = (⊕[1]) = ∅. ] Thus (ψ̌^H) does not meet (b_N) when H≰N. Suppose now that H≤ N. A similar computation as above shows that (ψ̌^H)((b_N))=(((E/H))) since in that case Ψ̌^H(b_N) is an isomorphism in ((E/H)). Therefore (ψ̌^H)⊆(b_N) when H≤ N. Combining both observations, we have (ψ̌^H)∩(b_N)≠∅ ⟺ H≤ N. Let now ,∈((E)) be such that _E()=_E() in ((E)). Say =_E(H,) and =_E(K,) for H,K≤ E and ∈E/H and ∈E/K. (See <Ref>.) The assumption _E()=_E() implies that ∈(f) if and only if ∈(f), for every f∈(E). In particular applying this to f=b_N, we see that for every index-p subgroup N E we have ∈(b_N) if and only if ∈(b_N). By (<ref>), this means that for every N∈(E) we have H≤ N ⟺ K≤ N. Since E is elementary abelian, this forces H=K. So we have two points ,∈E/H that go to the same image under E/Hψ̌^H((E))_E((E)) but we know that this map in injective by <Ref> for G=E. In fact, we see that the open H of  defined in <Ref> matches perfectly the open () of ((E)) in <Ref>. Let E be an elementary abelian p-group. Let H≤ E be a subgroup. Then the comparison map of <Ref> restricts to a homeomorphism _EH() where is the -graded endomorphism ring of the unit  in the localization _E(H) of (E) over the open H. Recall the tt-category (H)=_E(H):=(E)[S_H] of <Ref>, where S_H⊂(E) is the multiplicative subset generated by the homogeneous elements a_NH≰N∪b_NH≤ N of <Ref>. In view of <Ref>, it suffices to show that the map _(H)((H))→() is a homeomorphism. We have injectivity by <Ref>. We also know that is noetherian by <Ref>. It follows from <cit.> that _(H) is surjective. Hence it is a continuous bijection and we only need to prove that it is a closed map. We claim that (H) is generated by its ⊗-unit . Namely, let =_(H)() be the thick subcategory of (H) generated by  and let us see that =(H). Observe that is a sub-tt-category of (H). Let N∈ be an index-p subgroup. We claim that (E/N) belongs to . If N≱H, then a_N is inverted in (H), so (E/N)=0 in (H) by <Ref> <ref>. If N≥ H, then b_N→ u_N[-2'] is inverted, so u_N∈ and we conclude again by <Ref> <ref> since a_N→ u_N is now a morphism in . For a general proper subgroup K<E, the module (E/K) is a tensor product of (E/N) for some N∈. (Here we use E elementary abelian again.) Hence (E/K) also belongs to  as the latter is a sub-tt-category of (H). In short contains all generators (E/H) for H≤ E. Therefore (H)= is indeed generated by its unit. It follows from this and from noetherianity of ^_(H)()= that ^_(H)(x,y) is a finitely generated -module for every x,y∈(H). We conclude from a general tt-geometric fact, observed by Lau <cit.>, that the map  must then be closed. Let E be an elementary abelian p-group. Let E be the sheaf of -graded rings on  obtained by sheafifying U↦^_(E)U(). Then (,E) is a Dirac scheme in the sense of <cit.>. We identified an affine cover {H}_H≤ E in <Ref>. This result further justifies the notation for the ring E(H) in <Ref>. Indeed, this (H) is also the ring of sections E(H) of the -graded structure sheaf  over the open H of <Ref>. Let E be an elementary abelian p-group. Then the comparison map of <Ref> is an open immersion. 
More precisely, it defines a homeomorphism between ((E)) and the following open subspace of ((E)): ∈((E))for all N E of index p either a_N∉ or b_N∉.-1em By <Ref>, the (continuous) comparison map is injective. Therefore, it being an open immersion can be checked locally on the domain. By <Ref>, the open H form an open cover of ((E)). <Ref> tells us that each H is homeomorphic to the following open of ((E)) U'(H):=⋂_N≱HZ(a_N)^c∩⋂_N≥ HZ(b_N)^c (recall that Z(f)^c=f∉ is our notation for a principal open). So it suffices to verify that the union ∪_H≤ E U'(H), is the open subspace of the statement (<ref>). Let ∈ U'(H) for some H≤ E and let N∈(E); then clearly either N≱H in which case a_N∉, or N≥ H in which case b_N∉. Conversely let  belong to the open (<ref>) and define H=∩_M∈ s.t. b_M∉M. We claim that ∈ U'(H). Let N∈. If N≱H then b_N∈ by construction of H and therefore a_N∉. So the last thing we need to prove is that N≥ H implies b_N∉. One should be slightly careful here, as H was defined as the intersection of the M∈ such that b_M∉, and certainly such M's will contain H, but we need to see why every N≥ H satisfies b_N∉. This last fact follows from <Ref> applied to E/H. Consider the spectrum of (C_p) for the cyclic group C_p of order p. By <Ref>, the reduced ring C_p(1)_ is k[ζ^+] with ζ^+=a/b in degree 2' while C_p(C_p)_=k[ζ^-] with ζ^-=b/a. (The former is also <Ref>.) Each of these has homogeneous spectrum the Sierpiński space and we easily deduce that ((C_p))= @R=1em@C=.5em ∙@-@[Brown][rd]_-C_p ∙ .6-0.78em@-@[ForestGreen][ru]_-1 confirming the computation of ((C_p^n)) in <cit.> for n=1. We can also view this as an instance of <Ref>. Namely, still by <Ref>, the reduced ring (C_p)_ is [a,b] with a in degree 0 and b in degree -2'. Its homogeneous spectrum has one more point at the top: ((C_p))= @R=1em@C=.5em@R=.5em ∙@-[ld] @-[rd] ∙@-@[Brown][rd]_-Z(a)^c ∙ .6-0.78em@-@[ForestGreen][ru]_-Z(b)^c and this superfluous closed point ⟨ a,b⟩ lies outside of the open subspace (<ref>). Let K≤ H≤ E. The functor Ψ^K(E)→(E/K) passes, by <Ref>, to the localizations over [E]H and [E/K]H/K, respectively. On the -graded endomorphisms rings, we get a homomorphism Ψ^K→E/K(H/K) that on generators a_N,b_N is given by the formulas of <Ref>. By <Ref> this homomorphism Ψ^K→E/K(H/K) is surjective. For every elementary abelian group E, the spectrum  admits a unique generic point η_E, namely the one of the cohomological open E. We proceed by induction on the p-rank. Let us write η_E=_E(1,√(0)) for the generic point of E, corresponding to the ideal √(0) of nilpotent elements in ^(E;). Similarly, for every K≤ E, let us write η_E(K)=_E(K,η_E/K) for the generic point of the stratum E(K)≃E/K. We need to prove that every point η_E(K) belongs to the closure of η_E=η_E(1) in . It suffices to show this for every cyclic subgroup H<E, by an easy induction argument on the rank, using the fact that ψ^H((E/H)) is closed. So let H≤ E be cyclic. Note that inflation ^E/H_E(E/H)→(E) passes to the localization of the former with respect to all b_N/H for all N∈(E) containing H (which is just the derived category of E/H) and of the latter with respect to the corresponding b_N: ^E/H_E ((E/H)) (E)[b_NN≥ H]. This being a central localization of a fully-faithful functor with respect to a multiplicative subset in the source, it remains fully-faithful. 
One can further localize both categories with respect to all non-nilpotent f∈^(E/H;) in the source, to obtain a fully-faithful ^E/H_E ((E/H))[ff∉√(0)] where is obtained from (E) by first inverting all b_N for N≥ H as in (<ref>) and then inverting all ^E/H_E(f) for f∈^(E/H;)√(0). At the level of spectra, () is a subspace of . By construction, it meets the closed subset (ψ^H)≅((E/H)) of  only at the image of the generic point η_E(H). Indeed, inverting all b_N for N≥ H on (ψ^H) corresponds to inverting all b_N/H in (E/H), hence shows that ()∩(ψ^H) is in the image under ψ^H of the cohomological open E/H. Similarly, inverting all f∉√(0) removes all non-generic points of E/H. In particular, the generic point η_E(H) of E(H) is now a closed point of the subspace () of . Using that (<ref>) is fully-faithful and that the endomorphism ring of the source is the cohomology of E/H localized at its generic point (in particular not a product of two rings), we see that is not a product of two tt-categories and therefore () is not disconnected. Also η_E belongs to () and is distinct from η_E(H). Hence the closed point η_E(H)∈() cannot be isolated. Thus η_E(H) belongs to the closure of some other point in (). Let then ∈ be a point in the subspace (), such that ≠η_E(H) and η_E(H)∈, which reads ⊊η_E(H). We know by <cit.> that this can only occur for =(H',) with H'≤ H, that is, either H'=H or H'=1 since here H was taken cyclic. The case H'=H is excluded, as in the subspace () the only prime of the form (H,) that remained was η_E(H) itself, and  is different from η_E(H). Thus H'=1, which means that ∈E=η_E(1) and we therefore have η_E(H)∈⊆η_E(1) as claimed. We can now determine the Krull dimension of the spectrum of (E). Let E be a elementary abelian p-group. Then the Krull dimension of is the p-rank of E. By <Ref>, the dimension of is the maximum of the dimensions of the open subsets H, for H≤ E. Each of these spaces has the same generic point η_E (by <Ref>) and a unique closed point (H) by <Ref> (and the fact that (K)∈H forces K and H to be contained in the same subgroups N∈(G) by <Ref>, which in turn forces K=H because E is elementary abelian). Using <Ref> we translate the problem into one about the graded ring . Let η_E=_0⊊_1⊊⋯⊊_n=(H) be a chain of homogeneous prime ideals in . Note that _n-1 belongs to the open Z(f)^c of () for some f=ζ^+_N, H≤ N, or some f=ζ^-_N, H≰N. Each of these has non-zero degree so the graded ring [f] is periodic. We deduce that (()) is the maximum of 1+(R) where R ranges over the ungraded rings R=[f]_(0) for f as above. The reduced ring R_ is a finitely generated -algebra with irreducible spectrum, hence a domain. Therefore (R)=(R_) is the transcendence degree of the residue field at the unique generic point. As observed above, this generic point is the same for all H≤ E, namely the generic point of 1=(( E)). We conclude that ()=((( E))) which is indeed the p-rank of E. In fact, the proof shows that all closed points (H)∈ have the same codimension (height), namely the p-rank of E. Thus for E elementary abelian, the Krull dimension of  is the same as the Krull dimension of the classical cohomological open (( E))≅(^(E,)). In other words, the spectrum of (E) is not monstrously different from that of ( E), at least in terms of dimension, or `vertical complexity'. There is however `horizontal complexity' in : each H has its own shape and form, and there are as many H as there are subgroups H≤ E. We give a finite presentation of the corresponding -algebras  in <Ref>. 
§ CLOSURE IN ELEMENTARY ABELIAN CASE In this section, E is still an elementary abelian p-group. Following up on <Ref>, we can now use <Ref> to analyze inclusion of tt-primes , in (E), which amounts to asking when  belongs to  in . Using again that every ψ^H((E/H)) is a closed immersion, induction on the p-rank easily reduces the above type of questions to the case where the `lower' point  belongs to [E]1=E. More generally, given a closed piece Z of the cohomological open E, we consider its closure Z̅ in =⊔_H≤ EE(H) and we want to describe the part Z̅∩E(H) in each stratum E(H)≅E/H for H≤ E. Let H≤ E be a subgroup of our elementary abelian group E. Consider the open subsets [E]H of <Ref>, the cohomological open [E]1=E and their intersection [E]H∩E. Consider also the stratum E(H)=ψ̌^H(E/H), that is a closed subset of [E]H homeomorphic to E/H via ψ̌^H: @C=2em [E]H E E/H@^(->[ru]_-ψ̌^H [E]H∩E@_(->[lu] @^(->[ru] On graded endomorphism rings of the unit (<Ref>) this corresponds to @C=1em @->>[ld]_-Ψ^H[rd]_(.4)Q ^(E;) [ld]^(.4)Q' ^(E/H;) -2em ([E]H∩E)-2em where Q is the localization of  with respect to ζ^-_N=b_N/a_N for all N≱H, where Q' is the localization of E(1)=^(E,) with respect to ζ^+_N=a_N/b_N for all N≱H and where Ψ^H is the epimorphism of <Ref> for K=H. With above notation, let I⊆^(E,) be a homogeneous ideal of the cohomology of E. Define the homogeneous ideal J=Ψ^H(Q(Q'(I))) in the cohomology ^(E/H;) of E/H by `carrying around' the ideal I along (<ref>): @C=1em -1em Q(Q'(I)) -1em @|->[ld] I @|->[ld] -2em J:=Ψ^H(Q(Q'(I))) -4em Q'(I) @|->[lu] Let Z be the closed subset of E defined by the ideal I. Then the closed subset of E/H defined by J is exactly the intersection Z̅∩E(H) of the closure Z̅ of Z in ((E)) with the subspace E/H, embedded via ψ̌^H. Once translated by <Ref>, it is a general result about the multi-graded ring A=(E). We have two open subsets, H=∩_s∈ S_HZ(s)^c and E=1=∩_s∈ S_1Z(s)^c for the multiplicative subsets S_H and S_1 of <Ref>. These open subsets are `Dirac-affine', meaning they correspond to the homogenous spectra of the -graded localizations S_H(A)= and S_1 (A)=E(1)=^(E;), where (-) refers to `zero-twist', as before. The intersection of those two affine opens corresponds to inverting both S_H and S_1, that is, inverting b_N/a_NN≱H from  and a_N/b_NN≱H from ^(E;). This explains the two localizations Q and Q' and why their targets coincide. The intersection H∩Z̅ coincides with the closure in H of H∩ Z. The latter is a closed subset of H∩E defined by the ideal Q'(I). The preimage ideal Q(Q'(I)) then defines that closure H∩Z̅ in H. Finally, to further intersect this closed subset of H with the closed subset E/H=((Ψ^H)), it suffices to project the defining ideal along the corresponding epimorphism Ψ^H^(E/H;). Before illustrating this method, we need a technical detour via polynomials. Let I be a homogeneous ideal of the cohomology ^(E,) and let 1≠ H≨ E be a fixed non-trivial subgroup. Suppose that the only homogeneous ideal containing I and all the elements ζ_N for N≥ H (<Ref>) is the maximal ideal ^+(E,). Then there exists in I a homogeneous ([ The grading is the usual -grading in which all the ζ_N have the same degree 2'. In particular, the first term ∏_M≱Hζ_M^d in f has degree 2'· d· |M∈M≱H|.]) polynomial f of the form f = ∏_M≱Hζ_M^d + ∑_mλ_m ·∏_N∈ζ_N^m(N) for some integer d≥ 1 and scalars λ_m∈ and finitely many exponents m∈^ that satisfy the following properties: m(N)≥ 1 for at least one N≥ H and m(N')<d for all N'≱H. 
For simplicity, we work in the subring ^∗⊆^(E,) generated by the ζ_N. (For p=2, this is the whole cohomology anyway and for p odd we only miss nilpotent elements, which are mostly irrelevant for the problem, as we can always raise everything in sight to a large p-th power.) Let us denote the maximal ideal by =ζ_N| N∈. It is also convenient to work in the quotient -graded ring A^∗:=^∗(E,)/I which is generated, as a -algebra, by the classes ζ̅_N of all ζ_N modulo I. The assumption about Z(I+ζ_NN≥ H)={} implies that has some power contained in I+ζ_NN≥ H. In other words when N'≱H we have (ζ̅_N')^d∈ζ̅_N| N≥ H_A^∗ for d≫1 that we take large enough to work for all the (finitely many) N'≱H. Consider this ideal J= ζ̅_N| N≥ H of A^∗ more carefully. It is a -linear subspace generated by the classes θ̅_m of the following products in ^* θ_m:=∏_N∈(ζ_N)^m(N) with m∈^ such that m(N)≥ 1 for at least one N≥ H. We claim that J is in fact generated over  by the subset of the θ̅_m for the special m∈^ satisfying (<ref>). Indeed, let J'⊆ J be the -subspace generated by the θ̅_m for the special m. Then we can prove that the class θ̅_m of each product (<ref>) belongs to J', by using (<ref>) and (descending) induction on the number ∑_N≥ Hm(N). We conclude that J=J'. By (<ref>), the monomial ∏_M≱H(ζ̅_M)^d belongs to J and therefore to J': It is a -linear combination of monomials of the form θ̅_m for m∈^ satisfying (<ref>). Returning from A^∗=^∗(E,)/I to ^∗(E,), the difference between ∏_M≱H(ζ_M)^d and the same -linear combination of the lifts θ_m in ^∗(E,) is an element of I, that we callf and that fulfills the statement. Let Z⊂E be a non-empty closed subset of the cohomological open and let 1≠ H≨ E be a non-trivial subgroup. Suppose that in E, the subset Z intersects the image of the cohomological open of H in the smallest possible way: Z∩ρ_H(H) = {(1)}. Consider the closure Z̅ of Z in the whole spectrum . Then Z̅ does not intersect the stratum E(H)=ψ^H(E/H). Hence (H) does not belong to Z̅. Let I⊂^(E,) be the homogeneous ideal that defines Z. The closed image ρ_H(H) is given by the (partly redundant) equations ζ_N=0 for all N≥ H. It follows that the intersection Z∩ρ_H(H) is defined by the ideal I+ ζ_N| N≥ H. So our hypothesis translates exactly in saying that I satisfies the hypothesis of <Ref>. Hence there exists a homogeneous element of I f = ∏_M≱Hζ_M^d + ∑_mλ_m ∏_N∈ζ_N^m(N) for scalars λ_m∈ and finitely many exponents m∈^ satisfying (<ref>). We can now use <Ref> and follow Diagram (<ref>) with the ideal I and particularly with its element f. The element Q'(f) is just f seen in ([E]H∩E). But it does not belong to the image of  under Q because f contains some b_M with M≱H in denominators in the ζ_M's. Still, we can multiply Q'(f) by ∏_M≱H(b_M/a_M)^d=∏_M≱H(ζ_M)^-d to get a degree-zero homogeneous element f̃=1 + ∑_mλ_m ∏_N∈ζ_N^m'(N) in the ideal Q'(f), where we set the exponent m'(N):=m(N)-d if N≱H and m'(N):=m(N) if N≥ H. Note that by (<ref>) the exponent m'(N) is negative if N≱H and is non-negative if N≥ H and strictly positive for at least one N≥ H. Both types of exponents of ζ_N are allowed in , namely, when N≱H, the element ζ^-_N=b_N/a_N exists in . In other words, the element f̃∈Q'(f) satisfies f̃ = Q(1 + g̃) where g̃∈ belongs to the ideal ζ_N| N≥ H in  and must be of degree zero by homogeneity. Now, for N≥ H, we have Ψ^H(ζ_N)=ζ_N/H by <Ref>. It follows that Ψ^H(g̃) belongs to the maximal ideal ζ_N̅|N̅∈(E/H)⊆^+(E/H,) of ^(E/H,) and still has degree zero. 
This forces Ψ^H(g̃)=0 and therefore Ψ^H(1+g̃)=1 in ^(E/H,). In the notation of <Ref>, we have shown that J contains 1, which implies that Z̅∩E(H)=∅. Let Z⊂E be a closed subset of the cohomological open, strictly larger than the unique closed point (1) of E. Suppose that in E, the subset Z intersects the images of all proper subgroups trivially,  Z∩(⋃_H≨ Eρ_H(H)) = {(1)}. Then the closure Z̅ of Z in the whole spectrum  has only one more point, namely Z̅=Z∪{(E)}. By <Ref>, we see that Z̅ does not meet any stratum E(H) for H≠ E. Thus the only point of  outside of Z itself, hence outside of E, that remains candidate to belong to Z̅ must belong to ((E))∪_H≨ EE(H)=E(E)={(E)}. We know that (E)=(E/H)| H≨ E in (E), by <cit.>. Take ∈ Z different from (1). Since  does not belong to any (ρ_H)=((E/H)) by assumption, it must contain (E/H). Consequently, (E)⊆, meaning that (E)∈⊆Z̅. Let E be an elementary abelian group of rank r. Let be a point of height r-1 in the cohomological open E, that is, a closed point of the classical projective support variety 𝒱_E():=E{(1)}≅(^(E,))≅^r-1_. Suppose that  does not belong to the image ρ_H(H) of the support variety of any proper subgroups H≨ E. Then the closure of  in  is exactly the following {}={(E),,(1)}. Apply <Ref> to Z={,(1)}, the closure of  in E. We can review the proof of <Ref> in the special case of <Ref>, to see how elements like f∈(1) and f̃∈ come into play. We do it in the special case where  is a -rational point ( if is algebraically closed). Let 1≠ H≨ E be a non-trivial subgroup (the case r=1 being trivial). Choose N_0,N_1 E index-p subgroups with H≤ N_0 and H≰N_1. They define coordinates ζ_0,ζ_1 in ^r-1 (where ζ_i=ζ_N_i as in <Ref>). There exists a hyperplane of ^r-1 λ_0ζ_0+λ_1ζ_1=0, [λ_0:λ_1]∈^1(), going through the rational point . Note that λ_1≠ 0 as ∉ Z(ζ_0)=ρ_N_0(N_0), by assumption. As in <Ref>, the following two localizations agree E(H)[(ζ^-_N)| H≰N]=E(1)[(ζ^+_N)| H≰N] where N E ranges over the index-p subgroups as usual. We find a lift f̃:=λ_0 ζ_0ζ^-_N_1+λ_1 ∈(H) of the element f=λ_0ζ_0+λ_1ζ_1∈(1) of (<ref>) suitably multiplied by ζ_1=ζ^-_N_1 in the localization (<ref>). Then we have Ψ^H(ζ^-_N_1)=0 since H≰N_1, by <Ref>, so Ψ^H(f̃)=λ_1∈^× is an isomorphism. We deduce that (f̃) belongs to (H), which shows that does not specialize to (H). Let E=C_2× C_2 be the Klein-four group. Let us justify the description of ((E)) announced in <cit.> in some detail: @C=.0em@R=.4em(E)∙@-@[Gray][rrdd] @-@[Gray][rrrrdd] @-@[Gray][rrrrrrdd] @ @<.1em>@[Red][rrrrrrrrdd] (N_0)∙@-@[Gray][ldd] @-@<.1em>@[Gray][rrrrrrdd] (N_1)∙@-@[Gray][ldd] @-@[Gray][rrrrrrdd] (N_∞)∙@-@[Gray][ldd] @-@[Gray][rrrrrrdd] (1)∙@ @[Gray][ldd] @-@[Gray][dd] @-@[Gray][rrdd] @-@[Gray][rrrrdd] η_E(N_0)∙@-@<-.4em>@[Gray][rrrrrrrdd] η_E(N_1)∙@-@<-.1em>@[Gray][rrrrrdd] η_E(N_∞)∙@-@[Gray][rrrdd] @.@[RoyalBlue][r] @ @[Gray][rdd] @.@[RoyalBlue][rrrrrrr] 0∙@-@[Gray][dd] 1∙@-@[Gray][lldd] ∞∙@-@[Gray][lllldd] ∙_η_E-1em -1em By <Ref>, we have a partition of the spectrum as a set =E(E) ⊔ E(N_0) ⊔ E(N_1) ⊔ E(N_∞) ⊔ E, where we write N_0,N_1,N_∞ for the three cyclic subgroups C_2 and where E=E(1) is the cohomological open as usual. Let us review those five parts E(H)=ψ^H(E/H) separately, in growing order of complexity, from left to right in (<ref>). For H=E, the stratum E(E)=ψ^E(E/E)={(E)} is just a closed point. For each cyclic subgroup N_i<E, the quotient E/N_i≃ C_2 is cyclic, so (E/N_i) is the space of <Ref>. 
Its image under ψ^N_i is {_E(E),η_E(N_i),_E(N_i)}, defining the (brown) point η_E(N_i):=ψ^N_i(η_E/N_i), as in the proof of <Ref>. The stratum E(N_i) is the image of the cohomological open E/N_i only, that is, the Sierpiński space {η_E(N_i),(N_i)}, whose non-closed point η_E(N_i) is the generic point of the irreducible {_E(E),η_E(N_i),_E(N_i)} in . Finally, for H=1, the cohomological open E=(( E))≅([ζ_0,ζ_1]) is a ^1_ with a closed point (1) on top. We denote by η_E the generic point of  as in <Ref> and by 0,1,∞ the three _2-rational points of ^1_ (in green). The notation  refers to all remaining points of ^1_. The undulated lines indicate that all points of  have the same behavior. Namely, η_E specializes to all points of  and every point of  specializes to (1) and the (red) undulated line towards (E) indicates that all points of  specialize to (E), as follows from <Ref>. (Note that the latter was rather involved: Its proof occupies most of this section, and relies on technical <Ref>.) We have described the closure of every point in , except for the _2-rational points 0,1,∞. For this, we use the closed immersion ρ_N_i((N_i)) induced by restriction _N_i. The point i is the image of the generic point η_N_i of the V-shaped space ((N_i)) of <Ref>. Hence its closure is (ρ_N_i)={(E_i),i,(1)}. So specializations are exactly those of (<ref>). We revisit this picture in more geometric terms in <Ref>. It is possible to extend <Ref> to a general finite group G by means of the Colimit <Ref>. Let Z⊆ be a one-dimensional irreducible closed subset. Write its generic point as =(K,) for (unique) K∈p(G)_/G and ∈. By Quillen applied to G̅=, there exists a minimal elementary abelian subgroup E≤G̅ such that ∈(ρ_EE→G̅), also unique up to G̅-conjugation. This E≤G̅=(N_GK)/K is given by E=H/K for H≤ N_G K containing K. Then =φ_(H,K)() where ∈ is given by =_E(1,) for some ∈E. By <Ref>, the map φ_(H,K)→ is closed and preserves the dimension of points. It follows that is also the generic point of a one-dimensional irreducible in . By minimality of E, the point ∈E does not belong to H' for any proper subgroup H'<E. By <Ref>, we have ={_E(E),,_E(1)} in . The map φ_(H,K) sends this subset to {_G(H), , _G(K)}. In summary, every one-dimensional irreducible subset of  is of the form Z={(H),,(K)}, where H and K are uniquely determined by the generic point  via the above method. § PRESENTATION OF TWISTED COHOMOLOGY We remain in the case of an elementary abelian group E. In this section we want to better understand the local -graded rings that played such an important role in <Ref>. Thankfully they are reasonable -algebras. Recall that we write C_p= σ|σ^p=1 for the cyclic group of order p with a chosen generator σ. For brevity we call an -linear surjection π E C_p a coordinate. For two coordinates π,π' we write π∼π' if (π)=(π'). Finally, for a subgroup H, we often abbreviate H|π to mean H≤(π). Recall from <Ref> and <Ref> that each coordinate π yields an invertible object u_π=π^*u_p in (E). It comes with maps a_π,b_π,c_π→ u_π[∗]. If π∼π' then there exists a unique λ∈^× such that π'=π^λ. Hence, if p=2 then necessarily π=π' and u_π=u_π'. On the other hand, if p>2 is odd then we still have u_π≅ u_π' as already mentioned. Explicitly, consider the automorphism λ C_p→ C_p that sends σ to σ^λ. The isomorphism u_π=π^*u_pπ^*λ^*u_p=(π^λ)^*u_p=u_π' will be the pullback π^*Λ along π of an isomorphism of complexes Λ u_pλ^* u_p. 
This isomorphism Λ can be given explicitly by the identity in degree 0 and by the C_p-linear maps C_p→λ^* C_p in degree 1 (resp. 2) determined by 1↦ 1 (resp. 1↦ 1+σ+⋯σ^λ-1). One checks directly that Λ∘ a_p=a_p and Λ∘ b_p=λ· b_p. By applying π^* we obtain (π^*Λ)∘ a_π=a_λπand (π^*Λ)∘ b_π=λ· b_λπ. Given coordinates π_1≁π_2 set π_3=π_1π_2. Write u_i, a_i and b_i for u_π_i, a_π_i and b_π_i in (E). Then we have the relation a_1b_2b_3+b_1a_2b_3+b_1b_2a_3 =0 as a map from  to (u_1⊗ u_2⊗ u_3)[-2'· 2] in (E). (See <Ref> for 2'.) Let N_i=(π_i) for i=1,2,3, which are all distinct. Let N=N_1∩ N_2∩ N_3 be the common kernel, which is of index p^2 in E. By inflation along E E/N, it suffices to prove the lemma for E=C_p× C_p and π_1 and π_2 the two projections on the factors. We abbreviate u for the complex of permutation E-modules u:=u_1⊗ u_2⊗ u_3. Consider the permutation module M:=kC_p⊗ kC_p⊗ kC_p≅ k(E/N_1)⊗ k(E/N_2)⊗ k(E/N_3) which appears as a summand in various degrees of the complex u. One element in M is of particular interest: m :=∑_i_1,i_2=0^p-1σ^i_1⊗σ^i_2⊗σ^-i_1-i_2. It is easy to check that m is E-invariant, thus defines a E-linear map m̃ k→ M, that can be used to define the required homotopies. This depends on p. If p=2, the homotopy is given by m̃ when viewed from  to the only M-entry of u[-2] in degree one. If p>2, the homotopy is given by (m̃,m̃,m̃) as a map from to the three M-entries of u[-4] in degree one. Verifications are left to the reader. We construct a commutative -algebra E(H) by generators and relations. Its generators are indexed by coordinates π E C_p (<Ref>) ζ_π^+π s.t. H≤(π) ∪ ζ_π^-π s.t. H≰(π). These generators come equipped with a degree in : If the generator ζ^+_π is set to have degree 2', whereas if the generator ζ^-_π is set to have degree -2'. We impose the following four families of homogeneous relations. First for every coordinate π and every λ∈^× (for p odd), we have a rescaling relation * ζ_π^λ^+=λζ^+_π if H|π and ζ_π^λ^-=λ^-1ζ^-_π if H∤π' and whenever π_3=π_1π_2 and π_1≁π_2, writing ζ_i^±:=ζ_π_i^±, we impose one of the following relations, inspired by <Ref>: * ζ_1^++ζ_2^++ζ^+_3=0, if H|π_1 and H|π_2 (and therefore H|π_3) * ζ^-_1+ζ^-_2+ζ^-_1ζ^-_2ζ_3^+=0, if H∤π_1 and H∤π_2 but H|π_3 * ζ^-_1ζ^-_2+ζ^-_2ζ^-_3+ζ^-_3ζ^-_1=0 if H∤π_i for all i=1,2,3. Since these relations are homogeneous, the ring E(H) is a -graded ring. We could also define a multi-graded commutative -algebra E generated by all a_π,b_π subject to the relations in (<ref>) and <Ref>. This algebra E would be ×^-graded with a_π in degree (0,1_(π)) and b_π in degree (-2',1_(π)). Then E(H) is simply the `zero-twist' part of the localization of E with respect to the a_π,b_π that become invertible in U(H), that is, those a_π such that H∤π and those b_π such that H|π, following the pattern of <Ref>. By (<ref>) and <Ref>, there exists a canonical homomorphism E(H)→ mapping ζ^+_π to a_π/b_π and ζ^-_π to b_π/a_π. Let H=1. Recall from <Ref> that E(1) is the cohomology ring. Then the homomorphism (<ref>) is the standard one E(1)→^(E;), that maps ζ^+_π to the usual generator ζ_π=π^*(ζ_C_p). Note that here H=1|π for all π, so there is no ζ^-_π. For E elementary abelian, it is well-known that this homomorphism E(1)→^(E;) is an isomorphism modulo nilpotents. See for instance <cit.>. For two subgroups H,K≤ E, the open subsets [E]H and [E]K can intersect in . Similarly, we can discuss what happens with the rings E(H). Let H,K≤ E be two subgroups. 
Define S=S(H,K)⊂E(H) to be the multiplicative subset generated by the finite set ζ^+_π for π with H|π and K∤π∪ζ^-_π for π with H∤π and K|π and similarly, swapping H and K, let T=S(K,H)⊂E(K) be the multiplicative subset generated by ζ^+_πH∤π and K|π∪ζ^-_πH|π and K∤π. Then we have a canonical isomorphism of (periodic) -graded rings SE(H)≅ TE(K) and in particular of their degree-zero parts. Thus the open of (E(H)) defined by S is canonically homeomorphic to the open of (E(K)) defined by T. The left-hand side SE(H) is the (`zero-twist' part of the) localization of the multi-graded ring E of <Ref> with respect to a_πH∤π∪b_πH|π∪a_πH|π, K∤π∪b_πH∤π, K|π =a_πH∤π or K∤π∪b_πH|π or K|π which is symmetric in H and K. This completes the proof. The above isomorphism is compatible with the homomorphism (<ref>), namely the obvious diagram commutes when we perform the corresponding localizations on E(H) and E(K). Let K≤ H≤ E. There is a canonical split epimorphism `Ψ^KE(H)E/K(H/K) whose kernel is ζ^-_π|K∤π. It is compatible with the homomorphism Ψ^K of <Ref>, in that the following diagram commutes @C=3em@R=2emE(H) @->>[d]_-`Ψ^K[r]^-(<ref>) E(H) @->>[d]^-Ψ^K E/K(H/K) [r]^-(<ref>) E/K(H/K). Set H̅:=H/K≤E̅:=E/K. Similarly, for every coordinate π E C_p such that K|π, let us write π̅ E/K C_p for the induced coordinate. The morphism `Ψ^K will come from a morphism “Ψ^KE→E/K, with respect to the homomorphism of gradings (<ref>). As these algebras are constructed by generators and relations (<Ref>), we need to give the image of generators. Inspired by <Ref> we define “Ψ^KE→E/K on generators by a_π↦ a_π̅ if K|π 1 if K∤π b_π↦ b_π̅ if K|π 0 if K∤π. It is easy to see that the relations in E are preserved; thus the map “Ψ^K is well-defined. Let ϖ E E/K and for every π̅E̅ C_p consider the coordinate π=π̅∘ϖ E C_p. Then H̅|π̅ if and only if H|π. It follows that the morphism passes to the localizations `Ψ^KE(H)E/K(H/K) as announced. The statement about its kernel is easy and commutativity of the square follows from the fact (<Ref>) that Ψ^K treats the a_π and b_π according to the same formulas. The section of `Ψ^K is inspired by inflation. Namely, a_π̅↦ a_π and b_π̅↦ b_π defines a map of graded -algebras E̅→E that is already a section to “Ψ^K and passes to the localizations. The canonical homomorphism (<ref>) induces an isomorphism E(H)__ of reduced -graded -algebras. It follows from <Ref> that the map is surjective. We will now show that the closed immersion ()(E(H)) is surjective—this will complete the proof, by the usual commutative algebra argument, which can be found in <cit.> for the graded case. By <Ref>, this is equivalent to showing the surjectivity of the composite with _E, that we baptize β^H @C=3emβ^H -2em [E]H[r]_-_E^-≃ () @^(->[r] (E(H)). We proceed by induction on the order of the subgroup H. If H=1 the result follows from <Ref>. So suppose that H≠ 1 and pick a homogeneous prime ∈(E(H)). We distinguish two cases. If for every coordinate π E C_p such that H∤π we have ζ^-_π∈ then belongs to V(ζ^-_πH∤π), which we identify with the image of (E/H(1)) by <Ref> applied to K=H. Namely, we have a commutative square @C=5em@R=2em[E]H[d]_β^H [E/H]1[d]^β^1[l]_ψ^H (E(H)) (E/H(1)) [l]_(`Ψ^H) and since the right-hand vertical arrow is surjective by the case already discussed, we conclude that belongs to the image of β^H in (<ref>) as well. Otherwise, there exists a coordinate π_1 such that H∤π_1 and ζ^-_π_1∉. Let K:=H∩(π_1) and let S=S(H,K) be defined as in <Ref>: S=ζ^-_πfor π with H∤π and K|π. 
We claim that belongs to the open of (E(H)) defined by S. Indeed, let ζ_π_2^-∈ S, that is for π_2 with H∤π_2 and K|π_2, and let us show that ζ_π_2^-∉. If π_2∼π_1 this is clear from ζ^-_π_1∉ and the relation <ref> in E(H). If π_2≁π_1, let h∈ H K (so that h generates the cyclic group H/K≅ C_p). As π_1(h)≠ 1 and π_2(h)≠ 1 we may replace π_1 by an equivalent coordinate π̃_1:=π_1^λ such that π̃_1(h)=π_2(h) and therefore H|π_3:=π̃_1π_2. Then relation <ref> exhibits ζ_π̃_1^- as a multiple of ζ_π_2^-. As the former does not belong to  (by the previous case), neither does ζ_π_2^-. At this point we may apply <Ref> for our subgroups H and K. By <Ref>, we have a commutative triangle: @C=2em@R=2em [E]H∩[E]K[dl]_β^H[dr]^β^K (E(H)[S]) @<->[rr]^≈ (E(K)[T]) We just proved that belongs to the open subset in the bottom left corner. As K is a proper subgroup of H, we know that β^K is surjective by induction hypothesis and we conclude that belongs to the image of β^H as well. In <Ref> we have proved something slightly more precise, namely that the map E(H)→/⟨ξ^±_π⟩ (where π ranges over all coordinates) is surjective with nilpotent kernel. We expect that E(H) is already reduced, which would imply that (<ref>) is in fact an isomorphism of graded rings. In particular, for p=2 we expect that E(H)E(H). § APPLICATIONS AND EXAMPLES In this final section, we push our techniques further and compute more examples. For an elementary abelian group E, <Ref> allow us to think of the geometry of , beyond its mere topology, by viewing as a Dirac scheme. Consider further the `periodic' locus of , which is the open complement of the closed points (H)H≤ E; see <Ref>. This is analogous to considering the projective support variety (^(E,))≅^r-1_ by removing the `irrelevant ideal' (1)=^+(E,) from (^(E,)). To avoid confusion with the phrase `closed points', we now refer to the (H) as very closed points, allowing us to speak of closed points of ^r-1_ in the usual sense (as we did in <Ref>). Removing those finitely many `irrelevant' points allows us to draw more geometric pictures by depicting the (usual) closed points of the periodic locus, as in classical algebraic geometry. In fact, for any finite group G, we can speak of the periodic locus of  to mean the open '((G)):=(H)H∈p(G) obtained by removing the `irrelevant' very closed points. However, we do not endow these spectra with a scheme-theoretic structure beyond the elementary abelian case, since we do not have <Ref> in general. We postpone a systematic treatment of the periodic locus to later work. For now we focus on examples. Let us revisit Klein-four, with the notation of <Ref>. From the picture in (<ref>) we see that the union of the open subsets [E]1 and [E]E only misses (three) very closed points hence covers the periodic locus. We have E(1) =[ζ^+_N_0,ζ^+_N_1,ζ^+_N_∞]/⟨ζ^+_N_0+ζ^+_N_1+ζ^+_N_∞⟩ (=^*(E;)), E(E) =[ζ^-_N_0,ζ^-_N_1,ζ^-_N_∞]/⟨ζ^-_N_0ζ^-_N_1+ζ^-_N_1ζ^-_N_∞+ζ^-_N_∞ζ^-_N_0⟩ and their homogeneous spectra are both a projective line with a unique closed point added. (For E(E), the coordinate transformation for i=0,1, ζ^-_N_i↦ζ̃^-_i:=ζ^-_N_0+ζ^-_N_∞, identifies the ring with [ζ̃^-_0,ζ̃^-_1,ζ^-_N_∞]/⟨ζ̃^-_0ζ̃^-_1+(ζ^-_N_∞)^2⟩, which corresponds to the image of a degree-two Veronese embedding of ℙ^1 in ℙ^2.) Removing the very closed points (<Ref>), it is a straightforward exercise to check that the two lines are glued along the open complement of the _2-rational points, according to the rule (ζ^+_N_i)=ζ_N_i^-. 
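The coordinate change invoked in this Klein-four example can be checked mechanically. Below is a small symbolic sketch, not part of the paper: since the displayed substitution appears garbled by extraction, we read it as ζ̃^-_i := ζ^-_N_i + ζ^-_N_∞ for i = 0, 1, and the variable names z0, z1, zinf are ad-hoc stand-ins for ζ^-_N_0, ζ^-_N_1, ζ^-_N_∞.

```python
# Minimal sketch (not from the paper): a symbolic check, over F_2, of the coordinate
# change claimed for the reduced ring  k[z0, z1, zinf] / (z0*z1 + z1*zinf + zinf*z0),
# reading the substitution as  zt_i := z_i + zinf  for i = 0, 1 (our assumption).
from sympy import symbols, expand, Poly, GF

z0, z1, zinf = symbols("z0 z1 zinf")
relation = z0*z1 + z1*zinf + zinf*z0          # defining relation of the reduced ring

zt0, zt1 = z0 + zinf, z1 + zinf               # assumed coordinate change
target = expand(zt0*zt1 + zinf**2)            # claimed relation  zt0*zt1 + zinf^2

# Their difference is 2*zinf^2, hence zero in characteristic 2:
print(Poly(target - relation, z0, z1, zinf, domain=GF(2)))   # a zero polynomial
```

Indeed, expanding ζ̃^-_0 ζ̃^-_1 gives ζ^-_0ζ^-_1 + ζ^-_0ζ^-_∞ + ζ^-_1ζ^-_∞ + (ζ^-_∞)^2, so over 𝔽_2 the claimed relation differs from the defining one only by 2(ζ^-_∞)^2 = 0, consistent with the identification with the cone xy = z^2, i.e. the degree-two Veronese image of ℙ^1.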
In other words, we obtain the following picture of '((E)): To translate between this picture and the one in (<ref>), think of the blue part as , the three green points as the _2-rational points i=0,1,∞ in [E]1=E and the brown points as the η_E(N_i) in [E]E. In view of later applications let us consider the action induced on spectra by the involution on E=C_2× C_2 that interchanges the two C_2-factors. Let us say that the two factors correspond to the subgroups N_0 and N_1. On the generators ζ^±_N_i of E(1) and E(E) in (<ref>), the effect of the involution is ζ^±_N_0↦ζ^±_N_1 ζ^±_N_1↦ζ^±_N_0 ζ^±_N_∞↦ζ^±_N_∞. The subrings of invariants in E(1) and E(E) are, respectively, [e_1^+, e_2^+,ζ^+_N_∞]/⟨e_1^++ζ^+_N_∞⟩≅[e_2^+,ζ^+_N_∞] and [e_1^-,e_2^-,ζ^-_N_∞]/⟨e_1^-ζ^-_N_∞+e_2^-⟩≅[e_1^-,ζ^-_N_∞] where e_1^±=ζ^±_N_0+ζ^±_N_1 and e_2^±=ζ^±_N_0ζ^±_N_1 are the first and second symmetric polynomials in ζ^±_N_0 and ζ^±_N_1. Thus e_i^± has degree ± i. The homogeneous spectra of these rings (with unique very closed point removed) are again two projective lines ([ More precisely, as already in <Ref>, we are dealing with weighted projective spaces which happen to be isomorphic to projective lines.]) and they are glued together along the complement of two points. In other words, the quotient of '((E)) by the involution is a ℙ^1_ with two doubled points: Alternatively, the topological space underlying this quotient may be obtained more directly at the level of <Ref>. Indeed, this involution fixes the two colored points corresponding to ∞, fixes no other points, and swaps the points corresponding to 0 with the points corresponding to 1, respecting the color. So, again, the quotient can be pictured as a ℙ^1_ with only two doubled points as in <Ref>. Let us return to general finite groups. We want to optimize the Colimit <Ref> by revisiting the category of elementary abelian p-sections . In <Ref>, we gave a `raw' version of the morphisms in the indexing category , which could be fine-tuned without changing the colimit (<ref>). As with any colimit, we can quotient-out the indexing category by identifying any two morphisms that induce the same map by the functor under consideration, here ((-)). We then still have _(H,K)∈((H/K)). The same holds for any intermediate quotient pG. For instance if Z(G) denotes the center of G, we can consider the category pG obtained from  by modding out the obvious right action of the group Z(G)· H' on each hom set _((H,K),(H',K')). Let us illustrate how such reductions can be used in practice. Let G=C_p^n be the cyclic group of order p^n. As with any abelian group, using Z(G)=G, the reduced category discussed in <Ref> just becomes a poset. Here, if we denote by 1=H_n<H_n-1<⋯<H_1<H_0=G the tower of subgroups of G then the poset looks as follows: @C=.3em@R=.7em (H_1,H_0) ⋯ (H_n,H_n-1) (H_0,H_0)[ru] (H_1,H_1)[lu][rru] ⋯ (H_n-1,H_n-1)[ru][llu] (H_n,H_n)[lu] From <Ref> we deduce that is the colimit of the diagram @C=.7em@R=.7em V V V ∗[ru] ∗[lu][ru] [lu] ⋯ [ru] ∗[lu] with ∗=((1)) and V=((C_p)) the V-shaped space in (<ref>). In the above diagram, the arrow to the right (resp. left) captures the left-most (resp. right-most) point of V. We conclude that the spectrum of (C_p^n) is equal to @R=1em@C=.7em_0-1em ∙@-@[Gray] '[rd] '[rr] '[drrr] ∙ -1em_1 _n-1-1em ∙ ∙ -1em_n _1-1em ∙ ⋯ @-@[Gray] '[ru] '[rr] '[rrru] ∙ -1em_n This example reproves <cit.>. It will provide the starting point for our upcoming work on the tt-geometry of Artin motives over finite fields. 
The category of elementary abelian p-sections  is a finite EI-category, meaning that all endomorphisms are invertible. The same is true of its reduced versions pG and in <Ref>. <Ref> then implies formally that is the quotient of the spectra for the maximal elementary abelian p-sections by the maximal relations. Let us spell this out. Let I be a finite EI-category. The (isomorphism classes of) objects in I inherit a poset structure with x≤ y if _I(x,y)≠∅. Maximal objects (I)⊆ I are by definition the maximal ones in this poset. Now, let x_1,x_2 be two objects in I, possibly equal. The category (x_1,x_2) of spans x_1← y→ x_2 (or `relations') between x_1 and x_2, with obvious morphisms (on the y part, compatible with the spans), is also a finite EI-category and we may consider its maximal objects. We denote by (G) the set of maximal objects in . A word of warning: In general, there can be more maximal elementary abelian p-sections than just the elementary abelian p-sections of maximal rank. Let G be a finite group. The components φ_(H,K) of (<ref>) induce a homeomorphism between the following coequalizer in topological spaces coeq( @C=5em∐_E_1g_1Lg_2E_2-.5cm((L))@<2pt>[r]^-((g_1))@<-5pt>[r]_-((g_2)) ∐_E∈(G)-.5cm) and , for `maximal relations' in  or any variant of <Ref>. Applying <Ref> we obtain ≃coeq( @C=5em∐_E_1g_1Lg_2E_2-.5cm((L))@<2pt>[r]^-((g_1))@<-5pt>[r]_-((g_2)) ∐_E∈) where E ranges over all elementary abelian p-sections and (g_1,g_2) over all relations. There is a canonical map from the coequalizer in the statement to this one and it is straightforward to produce an inverse, as with any finite EI-category. We can apply <Ref> to find the irreducible components of . The set of irreducible components of  is in bijection with the set (G) of maximal elementary abelian p-sections of G up to conjugation, via the following bijection with generic points: (G)_/G ∼⟷^0 (H,K) ⟼φ_(H,K)(η_H/K). In particular, ()=p-rank_(G) is the sectional p-rank of G. We use coequalizer (<ref>). Recall from <Ref> that for an elementary abelian p-group E is always irreducible. We get immediately that the map (G)_/G^0 is a surjection. Assume now that φ_E(η_E)=φ_E'(η_E') for E,E'∈(G) and let us show that E and E' are conjugate p-sections. By <Ref>, there exists a finite sequence of maximal relations responsible for the identity φ_E(η_E)=φ_E'(η_E') and we will treat one relation at a time. More precisely, assuming that the generic point in ((E_1)) is in the image of (the map on spectra induced by) some relation E_1Lg_2E_2, with E_1,E_2∈(G), we will show below that both g_i are conjugation isomorphisms (type <ref> in <Ref>). In particular, E_1,E_2 are conjugate. And as conjugation identifies the unique generic points in the spectra for E_1 and E_2 one can apply induction on the number of relations to conclude. As the map induced by g_1 is a closed immersion (<Ref>) it must be a homeomorphism once its image contains the generic point. From this, we deduce that g_1 itself must be an isomorphism. (Indeed, the map induced by restriction to a proper subgroup of E_1 is not surjective, already on the cohomological open. And similarly, the map induced by modular fixed-points with respect to a non-trivial subgroup of E_1 does not even meet the cohomological open.) Hence L≃ E_1 is maximal too and therefore g_2 is also an isomorphism. The only isomorphisms in  are conjugations (<Ref>) and we conclude. The second statement follows from this together with <Ref>. 
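As a concrete illustration of these notions on a small group, here is a brute-force sketch in Python, not taken from the paper. It enumerates the elementary abelian 2-sections (H,K) of D_8, the group treated next, and picks out those with the largest quotient |H/K|; note that this is only a proxy, since, as the warning above cautions, maximal sections need not all have maximal rank in general. For D_8 the largest-quotient sections do turn out to be the three maximal sections (K,1), (K',1), (D_8,C_2) listed in the example that follows.

```python
# Illustrative sketch (not from the paper): brute-force enumeration of the
# elementary abelian 2-sections (H, K) of D_8 and of those with largest |H/K|.
# Elements of D_8 are encoded as pairs (i, j) standing for r^i s^j with
# r^4 = s^2 = 1 and s r s = r^{-1}.
from itertools import combinations

ELEMENTS = [(i, j) for i in range(4) for j in range(2)]
E = (0, 0)  # identity

def mul(a, b):
    i1, j1 = a
    i2, j2 = b
    return ((i1 + (i2 if j1 == 0 else -i2)) % 4, (j1 + j2) % 2)

def inv(a):
    i, j = a
    return ((-i) % 4, 0) if j == 0 else a  # reflections are involutions

def is_subgroup(S):
    return E in S and all(mul(a, b) in S for a in S for b in S)

candidates = [frozenset(c) | {E}
              for n in range(8)
              for c in combinations([g for g in ELEMENTS if g != E], n)]
subgroups = sorted({S for S in candidates if is_subgroup(S)}, key=len)

def is_normal_in(K, H):
    return all(mul(mul(h, k), inv(h)) in K for h in H for k in K)

def elementary_abelian_quotient(H, K):
    # H/K is elementary abelian of exponent 2 iff all squares and commutators lie in K.
    return (all(mul(h, h) in K for h in H) and
            all(mul(mul(a, b), inv(mul(b, a))) in K for a in H for b in H))

sections = [(H, K) for H in subgroups for K in subgroups
            if K <= H and is_normal_in(K, H) and elementary_abelian_quotient(H, K)]
largest = max(len(H) // len(K) for H, K in sections)
top = [(H, K) for H, K in sections if len(H) // len(K) == largest]
print(f"{len(subgroups)} subgroups, {len(sections)} elementary abelian 2-sections,")
print(f"{len(top)} with largest quotient |H/K| = {largest}:")
for H, K in top:
    print("  |H| =", len(H), " |K| =", len(K))
```

Only the orders |H| and |K| are printed; matching the three resulting sections with (K,1), (K',1) and (D_8,C_2) is immediate from the subgroup lattice displayed in the next paragraph.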
For G not elementary abelian, we already saw with the example of G=Q_8 in <cit.> that can have larger Krull dimension than (()). And indeed, Q_8 has sectional p-rank two and p-rank one. For every maximal (H,K)∈(G), since φ_(H,K) is a closed map, it yields a surjection φ_(H,K)φ_H,K(η_H/K) from the spectrum of the elementary abelian E=H/K onto the corresponding irreducible component of . We illustrate this with G=D_8 in <Ref> below, where said surjection coincides with the folding of <Ref>. We will now explain the meaning of <Ref> and in effect compute ((D_8)) for G=D_8=⟨ r,s| r^4=s^2=1, rs=sr ⟩ the dihedral group of order 8. We label its subgroups as follows ([ The two Klein-four subgroups are called K and K'. The names L_0 and L_1 for the cyclic subgroups of K (resp. L'_0 and L'_1 in K') are chosen to evoke N_0 and N_1 in <Ref>. The third cyclic subgroup, N_∞, corresponds to C_2=Z(D_8) and is common to K and K'.]): @C=1.5em@R=1em D_8 K_=⟨ r^2,s⟩@-[ru] C_4=⟨ r⟩@-[u] K'_=⟨ r^2,r^3s⟩@-[lu] L_0=⟨ s⟩@-[ru] L_1=⟨ r^2s⟩@-[u] C_2=⟨ r^2⟩@-[lu]@-[u]@-[ru] L'_0=⟨ rs⟩@-[u] L'_1=⟨ r^3s⟩@-[lu] 1@-[llu]@-[llu]@-[lu]@-[u]@-[ru]@-[rru] Since L_0 and L_1 (resp. L'_0 and L'_1) are G-conjugate, by the element r, we have exactly eight very closed points (H) for H∈p(G)_/G. We shall focus on the open complement of these very closed points, the periodic locus '((D_8)) of <Ref>, which is of Krull dimension one. Since all maps in the coequalizer diagram (<ref>) preserve the dimension of points (<Ref>) we may first remove these very closed points and then compute the coequalizer. Let us describe (D_8) and the maximal relations. In addition to the maximal elementary abelian subgroups K and K' there is one maximal elementary abelian subquotient D_8/C_2. So we have three maximal sections: (D_8)={(K,1),(K',1),(D_8,C_2)}. We compute the relations in the category 2D_8 which is obtained from 2D_8 by quotienting each hom-set ((H,M),(H',M')) by the action of H', as in <Ref>. One then easily finds by inspection five non-degenerate ([ that is, not of the form x𝕀x𝕀x (which would not affect the coequalizer (<ref>) anyway)]) maximal relations up to isomorphism, pictured as follows: @C=.5em@R=.5em (D_8,C_2) (K_,C_2)@[OliveGreen][rru]@[Brown][ld] @[OliveGreen][llu](K'_,C_2)@[Brown][rd] (K_,1)@(l,lu)^r @[OliveGreen][lll](C_2,1)@[OliveGreen][rrr] (K'_,1)@(ru,r)^r Here, the loops labeled r represent the relations (K_,1)1(K_,1)r(K_,1), and similarly for K'. All unlabeled arrows are given by 1∈ D_8, as in <Ref> <ref>-<ref>. We explain below the brown/green color-coding in the other three relations. Hence the space '((D_8)) is a quotient of three copies of the space '((E)) for E the Klein-four group, equal to ^1_ with three doubled points as in <Ref>. Let us discuss the relations. We start with the self-relation corresponding to the loop r on (K_,1). As the conjugation by r on K_ simply swaps the subgroups L_0 and L_1, we deduce from <Ref> that the quotient of '((K_)) by this relation is a ℙ^1_ with two doubled points, as in <Ref>. The same is true for K'. At this stage we have identified the three irreducible components (see <Ref>) and the three remaining relations will tell us how to glue these components. The three sides of the `triangle' (<ref>) display maximal relations that identify a single point of one irreducible component with a single point of another. Indeed, each of the middle sections K/C_2, K'/C_2 and C_2/1 is a C_2, whose periodic locus is a single point η_C_2 (<Ref>). 
Each edge in (<ref>) identifies the image of that single point η_C_2 in the two corresponding irreducible components in <Ref>. The color in (<ref>) records the color of that image: A brown point or a green point in the ^1_ with doubled points. Let us do all three. First, the relation between the two Klein-fours, K and K', at the bottom of (<ref>), identifies the two green points corresponding to C_2, as we are used to with projective support varieties. Then, the last two relations in (<ref>), on the sides, identify a brown point in the K- or K'-component with the green point in the D_8/C_2-component corresponding to K_/C_2 and K'_/C_2, respectively. This is a direct verification, for instance using that (ψ^C_2)((ρ^D_8_K))=(ψ^C_2)(_D_8((D_8/K)))=_D_8/C_2(Ψ^C_2((D_8/K)))=_D_8/C_2((D_8/K))=(ρ^D_8/C_2_K/C_2) in ((D_8/C_2)). Thus we obtain '((D_8)) from these three identifications on the space of <Ref>. The result was depicted in <Ref>.
http://arxiv.org/abs/2307.06291v1
20230712163737
Lattice dynamics related properties of Nickel: A comparative DFT and DFT+U study
[ "Shivani Bhardwaj", "Sudhir K. Pandey" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
Electronic mail: [email protected] Electronic mail: [email protected] ^1School of Basic Sciences, Indian Institute of Technology Mandi, Kamand - 175075, India ^2School of Mechanical and Materials Engineering, Indian Institute of Technology Mandi, Kamand - 175075, India The simultaneous influence of electronic correlations and magnetic ordering on the theoretical estimation of phonons and related properties of Ni is investigated. The work is a comparative DFT and DFT+U study, in which the on-site Coulomb interaction parameter for the 3d electrons, U (U_full) = 0.516 eV, obtained from constrained random phase approximation (cRPA) calculations, is used in the DFT+U calculations. The analysis of the phonon frequencies estimated along high-symmetry k-directions and over the sampled full Brillouin zone (BZ) using the frozen-phonon (finite displacement) method suggests that both on-site Coulomb correlations and magnetism are needed to account for the experimental frequencies. Further, a prominent role of both aspects is observed in the derived thermodynamic properties (free energy, specific heat and entropy) computed within the quasi-harmonic approximation (QHA), especially at high temperatures. The temperature-dependent thermal expansion coefficient (α) and phonon density of states are evaluated together with the equilibrium elastic constants. The results obtained for Ni suggest that correcting the electronic energy for both on-site Coulomb correlations and the magnetic phase is essential for a realistic description of the experimental findings. This study thus establishes the unavoidable role of correlation effects in the phononic properties of a correlated transition metal, and points a way towards exploring the lattice dynamics of other correlated electron systems. Lattice dynamics related properties of Nickel: A comparative DFT and DFT+U study Sudhir K. Pandey^2, August 12, 2023 ================================================================================= § INTRODUCTION The lattice dynamics of a solid is mainly dictated by its ionic degrees of freedom. On theoretical grounds, however, the electronic degrees of freedom, along with the spin degrees of freedom, are found to have a significant impact on the estimation of lattice-dynamical properties, especially in systems with magnetic ordering <cit.>. Such considerations have so far been investigated only to a limited extent in correlated electron systems, where a profound interplay between correlation effects and magnetism can be expected to directly affect the lattice dynamics. Numerically, the phononic and related thermal properties are derived from the electronic total energy, and can therefore be expected to depend primarily on how that energy is modified by the correlation correction <cit.>. One such work, by Corso et al. <cit.> on Fe and Ni, studies the effect of magnetization on the phonon frequencies and concludes that this effect is quite small, suggesting that the combined use of ultrasoft pseudopotentials, spin-polarized generalized gradient approximations (GGA) and non-linear core corrections within the density functional perturbation theory (DFPT) technique accounts for the experimental frequencies. On the contrary, Lee et al. <cit.> show an explicit dependence of the phonon frequencies of Ni on its magnetic moment within a density-functional-based linear-response framework.
The effect of magnetization on thermal properties has also been reported by Hatt et al. <cit.>, who studied the thermal expansion coefficient and bulk modulus of Ni and Fe. In a similar line, Łażewski et al. <cit.> studied the effect of the local Coulomb interaction U on the lattice dynamics of Fe, including the phonon density of states, equilibrium lattice constant and phonon frequencies, using the GGA+U method, and reported an upper bound for the value of the effective U. The literature thus lacks consensus on the extent to which electronic correlations and magnetization must be taken into account in lattice-dynamics studies of correlated materials. Individual attempts to study either aspect within different theoretical frameworks (DFT, DFPT) have added to the arbitrariness in interpreting the results, as some of the aforementioned studies attribute their findings to a strong exchange-correlation functional dependence rather than to an inadequate account of correlation effects, and suggest combining several approximations. Notably, the available studies do not account simultaneously for correlation effects and magnetism, and therefore do not give a comprehensive picture of the phononic and related thermal properties. Conclusions drawn from a few selected properties should be viewed as property-specific rather than as generalizations to the material as a whole. For instance, some of the above studies remark on the effect of magnetism only for phonon frequencies along certain high-symmetry k-directions, which may not suffice to accept or dismiss its overall effect on other lattice-dynamical properties. Furthermore, the lack of consistent efforts towards establishing a reliable theoretical approach in this direction, even for correlated systems as "simple" as elemental transition metals, motivates the present course of research. We note that not much has been said about Ni metal in this regard (the collective effect of correlations and magnetization), and, being a correlated magnetic system, it is well suited to such a study. This work therefore addresses the arbitrariness by studying the collective, explicit dependence on correlation effects, through the on-site Coulomb interaction parameter U, and on magnetism, by comparing the estimates obtained in the NM and FM phases of Ni. In this way, a suitable theoretical approach to account for the experimental lattice-dynamical properties of Ni is sought through a comparative DFT and DFT+U study. Benchmarking attempts for the U parameter by Sihi et al. <cit.> report the cRPA-calculated U_full (wherein the overall, effective screening includes the 3d transitions along with all other transitions in the solid), commonly referred to as the fully screened Coulomb interaction parameter, as the relevant choice of U_eff for a DFT+U study. In this work, U_full = 0.516 eV obtained from cRPA <cit.> is used in carrying out the DFT+U calculations. The FM phase of Ni, corresponding to the 0 K magnetization available in both the DFT and DFT+U formalisms, is used to understand the results pertaining to magnetic effects, as opposed to the NM phase.
Here, our results indicate a prominent role of both correlations and magnetism in the estimation of the phonon frequencies of Ni, clearly visible in the phonon dispersion spectra and phonon density of states calculations. The interplay of the two effects is likewise reflected in the derived thermal properties, i.e. the phononic free energy, specific heat and entropy. In addition, the temperature-dependent phonon density of states (at 410 & 994K) and the thermal expansion coefficient are calculated for comparison, and the equilibrium elastic constants and compressibility factor are evaluated using both techniques, i.e. DFT and DFT+U. § COMPUTATIONAL DETAILS In this work, the electronic structure calculations for Ni are carried out with the full-potential linearized-augmented plane-wave method in both the non-magnetic (NM) and ferromagnetic (FM) phases. The volume-optimized lattice parameter of 3.513 Å is used with space group 225 (Fm-3m). DFT and DFT+U calculations with the PBE exchange-correlation functional<cit.> are carried out using the WIEN2k code <cit.>. Phonon properties are calculated using the PHONOPY code <cit.>, based on the finite displacement method (FDM) and the supercell approach <cit.>. A supercell of size 2 × 2 × 2 is used for calculating the total forces on each atom in the WIEN2k code. A k-mesh of size 5 × 5 × 5 is used in the full BZ for the force calculations. The convergence criterion for the force calculations is set to 0.01 mRy/bohr. The forces are then used to extract second-order force constants with the PHONOPY code, from which the phonon frequencies are calculated. The thermal expansion coefficient and related phononic thermal properties of Ni are also calculated under the QHA as implemented in PHONOPY. Briefly, the calculation of the phonon frequencies and finite-temperature phononic properties proceeds as follows. The force per atom is computed using the finite displacement method, also referred to as the "frozen phonon" approximation, on a supercell generated from the optimized lattice structure of Ni. An atom is displaced from its symmetry-allowed position in the supercell, and the forces on all other atoms are obtained numerically from the converged self-consistent WIEN2k calculation. The forces thus obtained are used to build the force constants and the resulting dynamical matrix, whose diagonalization yields the phonon modes, or phonon frequencies, in PHONOPY. The equations below give the expressions for the force constants Φ and the dynamical matrix D as functions of the atomic positions. F_α(jl) = -δ V/δ r_α(jl) Φ_αβ(jl, j^'l^') = δ^2V/δ r_α(jl)δ r_β(j^'l^') = -δ F_β(j^'l^')/δ r_α(jl) where V stands for V(r(j_1l_1), …, r(j_nl_N)), the general potential energy of the phonon system, and r(jl) is the position of the j^th atom in the l^th unit cell, with n atoms per unit cell and N unit cells in total. The dynamical matrix is then evaluated as D_αβ(jj^', q) = 1/√(m_j m_j^') Σ_l^' Φ_αβ(j0, j^'l^') exp(iq·[r(j^'l^')-r(j0)]) which reduces to the eigenvalue problem Σ_j^'β D_αβ(jj^', q) e_β(j^', qν) = [ω(qν)]^2 e_α(j, qν) where m is the atomic mass, q the wave vector, ν the band index, and e the eigenvectors for the corresponding band index and wave vector q, obtained by diagonalization of D(q).
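To make this workflow concrete, here is a small self-contained sketch, not taken from the paper and not the actual WIEN2k/PHONOPY calculation for FCC Ni: it builds the dynamical matrix of an assumed monatomic one-dimensional chain from real-space force constants, diagonalizes it to obtain phonon frequencies, and then evaluates the harmonic sums for F, S and C_V written out just below. The spring constant, spacing and mesh size are illustrative assumptions, not values fitted to Ni.

```python
# Toy illustration of the frozen-phonon workflow described above:
# real-space force constants -> dynamical matrix D(q) -> frequencies omega(q),
# followed by the harmonic thermodynamic sums for F, S and C_V.
import numpy as np

hbar = 1.054571817e-34      # J s
kB   = 1.380649e-23         # J/K
m    = 58.69 * 1.66054e-27  # kg, one Ni-like atom per cell
k_nn = 30.0                 # N/m, assumed nearest-neighbour force constant (toy value)
a    = 2.49e-10             # m, assumed interatomic spacing (toy value)

# Real-space force constants Phi(0, l) of the chain (self term + two neighbours);
# the acoustic sum rule fixes Phi(0, 0) = 2*k_nn.
neighbours = {0: 2.0 * k_nn, +1: -k_nn, -1: -k_nn}

def dynamical_matrix(q):
    """1x1 version of D(q) = (1/m) * sum_l Phi(0, l) exp(i q r_l) given above."""
    D = sum(phi * np.exp(1j * q * l * a) for l, phi in neighbours.items()) / m
    return np.array([[D]])

def frequencies(q):
    """Angular frequencies omega(q) from the eigenvalue problem for D(q)."""
    w2 = np.linalg.eigvalsh(dynamical_matrix(q))
    return np.sqrt(np.clip(w2, 0.0, None))

# Sample the 1D Brillouin zone q in (-pi/a, pi/a] on a uniform mesh.
qmesh = np.pi / a * (2.0 * (np.arange(48) + 0.5) / 48 - 1.0)
omegas = np.concatenate([frequencies(q) for q in qmesh])

def thermal_properties(T):
    """Harmonic F (J/mode), S (J/K/mode), C_V (J/K/mode) from the sampled omegas."""
    x = hbar * omegas / (kB * T)
    F  = np.mean(0.5 * hbar * omegas + kB * T * np.log1p(-np.exp(-x)))
    S  = np.mean(0.5 / T * hbar * omegas / np.tanh(0.5 * x)
                 - kB * np.log(2.0 * np.sinh(0.5 * x)))
    Cv = np.mean(kB * x**2 * np.exp(x) / np.expm1(x)**2)
    return F, S, Cv

for T in (100.0, 300.0, 600.0):
    F, S, Cv = thermal_properties(T)
    print(f"T = {T:5.0f} K  F = {F:.3e} J/mode  S/kB = {S / kB:.3f}  C_V/kB = {Cv / kB:.3f}")
    # C_V/kB approaches 1 per mode at high T for this 1D toy (Dulong-Petit limit).
```

In the actual calculation, PHONOPY plays the role of these few lines: it builds the full dynamical matrix of the Ni supercell from the second-order force constants extracted from the WIEN2k forces and carries out the QHA post-processing.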
The related thermodynamic properties are computed using the theoretical expressions provided below. Harmonic phonon energy: E = Σ_qν ħω(qν) [1/2 + 1/(exp(ħω(qν)/k_BT) - 1)] Specific heat: C_V = (δ E/δ T)_V = Σ_qν k_B [ħω(qν)/k_BT]^2 exp(ħω(qν)/k_BT)/[exp(ħω(qν)/k_BT) - 1]^2 Partition function: Z = exp(-ϕ/k_BT) Π_qν exp(-ħω(qν)/2k_BT)/[1 - exp(-ħω(qν)/k_BT)] Helmholtz free energy: F = -k_BT ln Z = 1/2 Σ_qν ħω(qν) + k_BT Σ_qν ln[1 - exp(-ħω(qν)/k_BT)] Entropy: S = -δ F/δ T = 1/2T Σ_qν ħω(qν) coth(ħω(qν)/2k_BT) - k_B Σ_qν ln[2 sinh(ħω(qν)/2k_BT)] The elastic constants are calculated using the long-wave method as given by Barker et al. <cit.> Wave velocities v are calculated for the longitudinal and transverse waves in the <100> and <110> directions using v = δν/δ q where ν is the phonon frequency and q the phonon wave vector, evaluated at q = 0. The elastic constants c_ij are then obtained as c_11 = ρ v_1^2, c_44 = ρ v_2^2, c_12 = c_11 - 2ρ v_3^2, where v_1, v_2, v_3 are the wave velocities of the longitudinal <100>, transverse <100> and slow transverse <110> waves. The compressibility factor is found as K = 3/(c_11 + 2c_12). § RESULTS AND DISCUSSION To study the effect of electronic correlations and magnetization on the lattice dynamics of Ni, calculations have been carried out at the DFT and DFT+U levels in both the NM and FM phases. To see the effect, the phonon dispersion and phonon density of states (DOS) have been computed and compared with the available experimental results. Fig.1 shows the calculated phonon dispersion curves along the Γ to X and Γ to L high-symmetry k-directions, together with the room temperature experimental curve<cit.>. The dispersion contains three acoustic branches (one longitudinal and two degenerate transverse). The evident difference between the NM and FM curves, at both the DFT and DFT+U levels, indicates the influence of magnetization on the phonon frequencies along these directions. The frequencies obtained in the two phases differ more for the longitudinal branch than for the transverse ones. At the DFT (DFT+U) level, a maximum frequency deviation between the two phases of about 4meV (2meV) is found in the midway region, towards the X point in the Γ to X direction and towards the L point in the Γ to L direction. The calculated phonon dispersion curves thus indicate the effect of both electronic correlations and magnetization on the estimated phonon frequencies. It is important to note that, within the experimental error bar (around 0.5meV), both DFT+U-FM and DFT+U-NM can be regarded as in good agreement with the room temperature experimental data for both branches. Further, to obtain a more complete picture over the full BZ, the phonon DOS has been calculated. Fig.2 shows the calculated phonon DOS obtained from the same set of formulations, subjected to a broadening of around 1.13meV, along with the available experimental data at 10K <cit.>. The characteristic features visible in the experimental data include peaks A and B at frequencies 33meV and 24meV, respectively, a hump-like feature C at around 22meV, and a dip D at 30meV. The data show a highest phonon frequency cut-off at around 40meV, which is appreciably accounted for by the DFT-NM curve. The DFT curves of both phases account for peak A but fail to account for the experimental features B, C and D, whereas DFT+U in the NM phase improves matters by also capturing the dip D along with peak A, while becoming worse in the frequency region of features B and C.
The DFT+U-FM curve turns out to be in overall good agreement with the experimental curve, accounting for the visible features as well as the qualitative behavior in both the low- and high-frequency limits. As mentioned before, the study <cit.> of Fe and Ni within the DFPT framework, carried out along high-symmetry k-directions in the magnetic phases, found only a weak role of magnetization on the phonon frequencies. In contrast to that study, we find a significant effect of magnetization on the frequencies along with that of electronic correlations, which also becomes evident from the phonon DOS plot. Another study by Lee et al. <cit.>, using fixed-spin-moment (FSM) DFPT with the LDA functional, suggests the importance of including magnetization when calculating phonon frequencies, although the trend they report with increasing magnetic moment is opposite to what our study reflects for the NM and FM phases; notably, their FSM calculations still overestimate the experimental frequencies. Yet another study on Fe <cit.> points to the influence of the Coulomb interaction parameters on the calculated phonon frequencies, which is also evident from our study. Given that DFT+U-FM provides a considerably more realistic estimation of the phonon frequencies at low temperatures, it can also be expected to account for the lattice-dynamical properties that follow from the phonon frequencies, i.e. the thermal properties, especially in the low-temperature region. The next part of this work therefore involves the calculation of the thermal properties pertaining to the lattice dynamics: free energy, entropy and specific heat. The phononic free energy shown in Fig. 3 exhibits a relatively negligible difference between the calculated curves in the low-temperature region from 0 to 200 K. The curves start to deviate from each other towards the high-temperature range: at 600 K, DFT-NM and DFT+U-NM lie close to each other (at ∼ -11.4 kJ/mol), showing a less significant effect of U in the NM phase, whereas the FM-phase curves DFT-FM and DFT+U-FM lie apart by nearly 0.4 kJ/mol at 600 K. The FM-phase curves differ slightly more, by ∼1 kJ/mol, around 1200 K, while the NM-phase curves differ by ∼0.5 kJ/mol. It may also be noted that the free energy is estimated at relatively higher values in the FM phase than in the respective NM phase. The calculated entropy curves deviate from each other with a seemingly constant difference in both the NM and FM phases in the temperature range of 500 K to 1200 K. The two NM curves maintain a difference of around 0.2 J/K mol, whereas the FM curves lie slightly below the NM curves, with the entropy estimates differing by 0.4 J/K mol. Further, the specific heat calculations show no significant difference between the calculated curves in the high-temperature region from 500 to 1200 K. The nearly constant values reached by the curves around 500 K correspond roughly to the excitation of the highest phonon mode (∼40 meV), beyond which no further significant contribution to the specific heat occurs (consistent with the Dulong-Petit law). The curves differ in the low-temperature region around 200 K, with DFT-FM and DFT+U-NM showing the maximum deviation among all curves of ∼1.5 J/K mol. In line with the study of the thermal properties, the thermal expansion coefficient α has been calculated over the temperature range 0-1000 K within the above-mentioned frameworks, as shown in Fig. 4.
In the low-temperature region, roughly up to 150 K, all the curves give closely matching estimates, while a remarkable difference (by a maximum of ∼0.25×10^-6 units) is found between DFT+U-NM and DFT+U-FM in the temperature region around 200 K. In comparison to the DFT+U curves, the DFT curves in both phases are seen to consistently overestimate α in the region roughly ranging from 200 K to 400 K. In the temperature regime from 600 K to 1000 K, the DFT+U-NM curve exhibits a notable change in behaviour around 600 K, followed by a dramatic rise in values. A comparative analysis suggests agreeable correspondence of the DFT+U-FM curve, together with the DFT-NM and DFT-FM curves, with the experimental results in the temperature range extending up to 300 K. The 300 K estimates are found to be 1.2×10^-5, 1.19×10^-5 and 1.11×10^-5 for DFT-FM, DFT-NM and DFT+U-FM, respectively, while the experimental<cit.> room-temperature value lies at around 1.28×10^-5 units. The failure of DFT+U-FM above 600 K and, correspondingly, the behaviour of DFT+U-NM can be understood in qualitative terms given that Ni exists in the FM state only below 631 K. Additionally, the increased deviation between the calculated and experimental α values at high temperatures is not surprising, since the QHA has been used here, in which the electronic ground-state energies corresponding to different volumes are taken while calculating α. The further course of this work deals with investigating thermal effects on the phonon DOS. Since, as already discussed, DFT+U-FM provides a better description of the experimental phonon DOS at 10 K, and DFT+U-NM is to an extent also able to reproduce its characteristic features, the figure includes curves corresponding to two finite temperatures, 410 and 994 K, pertaining to 1% and 2% volume expansion, respectively, within the DFT+U framework and in the phase appropriate to a temperature below or above T_c, along with the 0 K curve. The figure clearly indicates the effect of temperature, showing the shift of peaks A and B towards the low-frequency region as we go from 0 to 410 to 994 K, together with a significant broadening, reflected in the reduced peak intensity and especially visible around peak B. The calculated curves depict the effect of temperature consistently with the experimental observations reported by Delaire et al.<cit.> Finally, the elastic constants along with the compressibility factor have also been calculated and are provided in the table together with the available room-temperature experimental<cit.> and theoretical<cit.> results. Evidently, DFT in the FM phase is found to estimate the elastic constants at relatively higher values than the other calculated estimates. The methods tend to overestimate the elastic constants when compared with the available experimental and theoretical studies, by almost 50% in the case of the DFT+U-FM estimate of c_11. The compressibility factor resulting from DFT+U-FM (0.558×10^-12) is in close agreement with the experimental and theoretical findings. The calculated values can be considered reasonably good to an extent, with the deviations attributed to the prominence of thermal effects taking over at finite temperature. § CONCLUSION We present a comprehensive study of the phonons and related phononic thermal properties of Ni, examining their dependence on electronic correlations and on the magnetic phase through a step-by-step, comparative DFT and DFT+U account of both aspects.
The calculations carried out for the phonon dispersion spectra and phonon density of states show a considerable difference in the obtained frequency values between the DFT and DFT+U levels for both the NM and FM phases, thus indicating the effect of both aspects. We find the DFT+U-FM estimates to be appreciably close to the experimental frequencies, suggesting the combined role of both aspects (U and magnetic ordering). The relative success of DFT+U in the FM phase in providing a realistic description of the experimental phonon frequencies over the full BZ (a broader picture of the k-dependent properties) indicates the importance of a simultaneous account of correlations and magnetic ordering in Ni. The notable deviations among all four formulations (DFT-NM & FM, DFT+U-NM & FM) in the case of the thermodynamic properties again reflect the role of both effects, especially towards the high-temperature region. Conclusively, we establish the prominence of these effects further by studying the equilibrium elastic constants and compressibility factor and the temperature-dependent phonon density of states and thermal expansion coefficient. kim D. J. Kim, New Perspectives in Magnetism of Metals, (Springer, New York, 1999). Martin R. M. Martin, L. Reining and D. Ceperley, Interacting Electrons: Theory and Computational Approaches, Cambridge University Press, Cambridge (2016). prakash P. Pandey, V. Pandey and S. K. Pandey, arXiv preprint arXiv:2211.14504 (2022). PhysRevB.62.273 A. D. Corso and S. de Gironcoli, Phys. Rev. B 62, 273 (2000). Lee Joo-Hyoung Lee, Young-Chung Hsue, and A. J. Freeman, Phys. Rev. B 73, 172405 (2006). PHYSICAL REVIEW B 82 A. J. Hatt and B. C. Melot, Phys. Rev. B 82, 134418 (2010). PHYSICAL REVIEW B 74 Jan Lażewski, P. Piekarz, A. M. Oleś and K. Parlinski, Phys. Rev. B 74, 174304 (2006). AntikV A. Sihi and S. K. Pandey, Eur. Phys. J. B 93, 9 (2020). AntikFe A. Sihi and S. K. Pandey, Physica B: Cond. Mat. 636, 413785 (2022). shivani B. Shivani, A. Sihi and S. K. Pandey, arXiv preprint arXiv:2208.12060 (2022). PBE J. P. Perdew, S. Kurth, A. Zupan, and P. Blaha, Phys. Rev. Lett. 82, 2544 (1999). wien2k P. Blaha, K. Schwarz, F. Tran, R. Laskowski, G. Madsen and L. Marks, J. Chem. Phys. 152, 074101 (2020). phonopy A. Togo and I. Tanaka, Scr. Mater. 108, 1 (2015). fdm G. Kresse, J. Furthmüller and J. Hafner, EPL 32, 729 (1995). PhysRevB.2.4176 J. A. Barker, M. L. Klein, and M. V. Bobetic, Phys. Rev. B 2, 4176 (1970). expt_band R. J. Birgeneau, J. Cordes, G. Dolling, and A. D. B. Woods, Phys. Rev. 136, A1359 (1964). expt_dos M. Kresch, O. Delaire, R. Stevens, J. Y. Y. Lin, and B. Fultz, Phys. Rev. B 75, 104301 (2007). expt_alpha F. C. Nix and D. MacNair, Phys. Rev. 60, 597 (1941). neighbours J. R. Neighbours, F. W. Bratten, C. S. Smith, J. Appl. Phys. 23, 389–393 (1952). theory_elastic T. Çain and B. M. Pettitt, Phys. Rev. B 39, 12484 (1989).
http://arxiv.org/abs/2307.06038v1
20230712093321
Pyramid Deep Fusion Network for Two-Hand Reconstruction from RGB-D Images
[ "Jinwei Ren", "Jianke Zhu" ]
cs.CV
[ "cs.CV" ]
Pyramid Deep Fusion Network for Two-Hand Reconstruction from RGB-D Images Jinwei Ren and Jianke Zhu, Senior Member, IEEE Jinwei Ren and Jianke Zhu are both with the College of Computer Science and Technology, Zhejiang University, Zheda Rd 38th, Hangzhou, China. Email: {zijinxuxu, jkzhu}@zju.edu.cn; Jianke Zhu is the Corresponding Author. August 12, 2023 Accurately recovering the dense 3D mesh of both hands from monocular images poses considerable challenges due to occlusions and projection ambiguity. Most of the existing methods extract features from color images to estimate the root-aligned hand meshes, which neglect the crucial depth and scale information in the real world. Given the noisy sensor measurements with limited resolution, depth-based methods predict 3D keypoints rather than a dense mesh. These limitations motivate us to take advantage of these two complementary inputs to acquire dense hand meshes on a real-world scale. In this work, we propose an end-to-end framework for recovering dense meshes for both hands, which employs single-view RGB-D image pairs as input. The primary challenge lies in effectively utilizing two different input modalities to mitigate the blurring effects in RGB images and the noise in depth images. Instead of directly treating depth maps as additional channels for RGB images, we encode the depth information into the unordered point cloud to preserve more geometric details. Specifically, our framework employs ResNet50 and PointNet++ to derive features from RGB and point cloud, respectively. Additionally, we introduce a novel pyramid deep fusion network (PDFNet) to aggregate features at different scales, which demonstrates superior efficacy compared to previous fusion strategies. Furthermore, we employ a GCN-based decoder to process the fused features and recover the corresponding 3D pose and dense mesh. Through comprehensive ablation experiments, we have not only demonstrated the effectiveness of our proposed fusion algorithm but also outperformed the state-of-the-art approaches on publicly available datasets. To reproduce the results, we will make our source code and models publicly available at <https://github.com/zijinxuxu/PDFNet>. RGB-D fusion, 3D reconstruction, hand pose, end-to-end network. § INTRODUCTION Recovering the 3D pose and shape of human hands from a single viewpoint plays a pivotal role in a multitude of real-world applications, such as human-computer interaction <cit.>, mixed reality <cit.>, action recognition <cit.>, and simulation. Over the past two decades, extensive research <cit.> has emerged in the field of hand reconstruction with various inputs including single color images, RGB-D images with depth maps, multi-view images, and video sequences. Due to the inherent complexity of finger joints, self-occlusions, and motion blur, an ongoing endeavor is to effectively address the challenges in 3D hand reconstruction. At present, the prevailing methods <cit.> for hand reconstruction predominantly focus on directly estimating both hands from a single RGB image.
However, these methods encounter difficulties in real-world scenarios characterized by cluttered backgrounds, lighting variations, and motion blur, which limit their performance to environments similar to the training data. Generally, a conventional framework <cit.> separates detection and reconstruction, which requires extracting the hand region from the image by an off-the-shelf detector before feeding it to the reconstruction model. Consequently, these models only predict root-aligned 3D hand meshes. Instead, depth map-based methods <cit.> often incorporate range maps as auxiliary supervisory information to compensate for inherent noise and limited resolution. Additionally, certain approaches <cit.> employ depth maps to predict sparse 3D keypoints. The absolute scale information in depth maps is not affected by background changes as well as the rich foreground features in RGB maps, which is crucial to hand reconstruction. Fig. <ref> presents a visual comparison between utilizing solely RGB input and augmenting it with depth map input. The previous fusion methods <cit.> have primarily relied on RGB-D cameras, which leverage both rich image information and depth measurements to accomplish tasks such as object detection and semantic segmentation. Despite extensive research efforts over an extended period, an effective fusion scheme for hand reconstruction remains elusive. This challenge can be attributed to the highly nonlinear nature of gestures <cit.> and the inherent variations between hands, making it arduous to achieve satisfactory results through a straightforward combination of color images and depth maps. In certain scenarios, utilizing depth maps alone can actually yield superior outcomes <cit.>. Hence, it becomes imperative to ascertain an effective fusion strategy specifically tailored for hand reconstruction tasks. The simplest and most rudimentary fusion method entails directly incorporating the depth map as an additional channel alongside the RGB image <cit.>. This approach merely requires modifying the input channel of the model from three to four channels. However, the performance enhancement achieved by this simple fusion method remains quite limited. An alternative fusion approach that has gained popularity is operating at the feature level <cit.>. It is important to note that directly concatenating the two features obtained from shallow CNNs does not yield any performance improvements <cit.>. Accordingly, researchers have made endeavors to extract multi-scale depth features <cit.> or perform cross-fusion at intermediate feature layers <cit.>. The aforementioned fusion methods are primarily employed for the cropped single-hand images, which are limited to predicting sparse 3D keypoints rather than dense meshes. Furthermore, these methods process depth maps into 2D images, disregarding their inherent 3D characteristics. Inspired by previous work in 3D object detection <cit.> and 6-DoF estimation <cit.>, we adopt a different approach by converting the depth map into an unordered point cloud, and then extract point features to fuse them with RGB features derived by CNNs. Experimental results indicate that this method yields the improved performance due to the more effective feature. Additionally, we argue that it is insufficient for learning local features by relying solely on fixed-sized point feature regression. 
Motivated by the architecture of PointNet++ <cit.>, we introduce a pyramid feature fusion module that enable the integration of point cloud features and RGB features at their corresponding positions across multiple scales. Moreover, existing frameworks based on sparse 3D keypoints or root-aligned mesh estimation may fall short when attempting to achieve two-hand reconstruction in real-world interactive scenarios. In order to address the aforementioned challenges, we present an end-to-end framework that incorporates RGB and depth information to accurately reconstruct a 3D mesh of both hands from RGB-D inputs. Unlike HandPointnet <cit.>, our approach eliminates the need for normalizing the point cloud using oriented bounding boxes, thereby avoiding misalignment between the point cloud and color image while simplifying the process. To tackle the difficulty in learning local features, we suggest a pyramid structure feature fusion module called PDFNet, which facilitates the fusion of two features at different scales in order to enable the effective integration of information. Furthermore, we introduce an adaptive weight allocation module to achieve more robust and accurate fusion, which allocates weights to different features to mitigate interference from local unreliable regions. To attain a more comprehensive representation of hand structures, as opposed to merely sparse 3D keypoints, we opt to employ a graph convolutional network (GCN) as our decoder as in <cit.>. Instead of directly using the image-wide features, we introduce a center map for precise hand localization. Additionally, we conduct experiments using the parameterized model MANO <cit.> and multiple fully connected layers as alternative decoders across various two-hand datasets. The results demonstrate the convincing performance enhancement achieved through our fusion algorithm. From above all, the main contributions of our work can be summarized as follows. * We propose an efficacious end-to-end single-stage framework that reconstructs 3D hand meshes from a solitary RGB-D input. To the best of our knowledge, this is the first RGB-D fusion framework for two-hand reconstruction. * We devise a novel fusion module named PDFNet that effectively harnesses both color information and depth maps. Empirical studies validate the substantial enhancement that this module imparts upon the baseline model. * Both quantitative and qualitative evaluations clearly demonstrate that our proposed approach achieves state-of-the-art performance on publicly available two-hand datasets <cit.>. § RELATED WORK Rapid progress has been made on hand pose estimation <cit.> and 3D hand mesh <cit.> reconstruction over recent years, giving rise to various categories such as single-handed <cit.> and multi-handed reconstruction <cit.>, fully supervised  <cit.> and weakly supervised methods <cit.>, etc. In this paper, our primary research focus lies in exploring different types of inputs. Consequently, previous studies can be classified into three distinct groups, namely color image, depth map, and RGB-D image. §.§ Hand Reconstruction from Color Image Due to the lack of depth information, it is very challenging to recover 3D hand pose from a single color image. Zimmermann et al. <cit.> trained a deep neural network to learn the 3D articulation prior of hands on a synthetic dataset. Guo et al. <cit.> proposed a feature interaction module to enhance the joint and skeleton feature. In addition to predicting 3D pose, Boukhayma et al. 
<cit.> further predicted the shape of the hand and optimized the 3D parameterized model MANO <cit.> through a re-projection module. To improve performance, the subsequent model-based methods introduced iterative optimization <cit.>, neural rendering <cit.>, spatial mesh convolution <cit.>, adaptive 2D-1D registration <cit.>, etc. In addition, novel image-to-pixel prediction networks <cit.>, graph-convolution-reinforced transformer <cit.>, and contrastive learning <cit.> have also been applied in this field. In order to address the scarcity of 3D annotations for real hands, Zimmermann et al. <cit.> proposed the first single-hand dataset containing 3D pose and shape labels. Hampali et al. <cit.> proposed a dataset with similar annotations, which focuses on hand-object interaction scenes. Considering the situation of multiple hands in a picture, multi-stage methods <cit.> <cit.> that separate hand detection and pose estimation, as well as single-stage methods <cit.> <cit.> that jointly detect and reconstruct, have been proposed. Moon et al. <cit.> proposed a large-scale real-captured interacting hand dataset using a multi-view system. Based on this dataset, several subsequent works <cit.> <cit.> <cit.> <cit.> have conducted more in-depth research on left and right hand interaction and designed exquisite network structures to better extract features. §.§ Hand Reconstruction from Depth Map Compared to using only RGB images, it is more intuitive to recover the hand pose and shape from depth maps, as partial geometric information can be directly obtained. According to the different processing methods for input data, these methods can be roughly divided into two categories, including image-based methods and point cloud-based approaches. The former mostly directly employs CNNs to process depth maps like RGB images through feedback loop <cit.>, dense per pixel compression <cit.>, forward kinematics <cit.>, adaptive weighting regression <cit.>, and auxiliary latent variable <cit.>, etc. The latter processes the depth map into a point cloud and directly extracts point features from it to regress the hand pose. Ge et al. <cit.> proposed 3D CNN for point feature extraction and regress full hand pose in volumetric representation. In order to effectively utilize information in depth images and reduce network parameters, Ge et al. <cit.> adopted the network structure of PointNet <cit.> to extract point cloud features. The point cloud regularization module is also introduced to improve the robustness of the method. Subsequent work adopted similar frameworks while introducing the intermediate supervisory information such as heatmap and unit vector field <cit.>, semantic segmentation <cit.> to enhance the performance of the model. With the deepening of research, permutation equivariant layer (PEL) <cit.>, self-organizing map (SOM) <cit.> and Transformer <cit.> have also been introduced into hand pose estimation. As for interacting hand, Taylor et al. <cit.> trained a segmentation network to construct a 3D point cloud from depth maps and designed a signed distance field to minimize model fitting errors. Muller et al. <cit.> estimated a vertex-to-pixel correspondence map first and proposed an energy minimization framework, which can optimize the pose and shape parameters by fitting the point cloud. However, optimization-based model fitting methods rely more on precise 3D point cloud inputs. 
On the other hand, the learning-based methods may obtain more prior information from other depth maps to mitigate the impact of sensor noise, which has not yet been explored. §.§ Hand Reconstruction from RGB-D Image RGB-D fusion has been extensively studied in fields such as 3D object detection <cit.>, object pose estimation <cit.>, and semantic segmentation <cit.> <cit.>. However, there has been few in-depth research on hand reconstruction tasks.  <cit.> <cit.> <cit.> requested RGB-D sensor as input while the RGB image was only used to segment the hand part in the depth map. Cai et al. <cit.> used depth maps as regularization terms during training to reduce dependence on 3D annotations, and only used RGB images during testing. Yuan et al. <cit.> pre-trained a depth-based network and froze the parameters of the network during joint RGB-D training. The gap between the RGB-based method and the depth-based approach is narrowed by minimizing the intermediate features of the two branches. Kazakos et al. <cit.> designed a double-stream architecture for RGB-D fusion, and tried input-level fusion, feature-level fusion, and score-level fusion. Unfortunately, their experiments indicated that adding RGB information did not help with performance gains. Mueller et al. <cit.> directly used the 4-channel RGB-D input and trained two CNNs to locate and regress the 3D position of the hand. They chose to project RGB pixels onto a depth map to obtain a colored depth map and then predicted the absolute coordinates of the hand center and the 3D offset of each joint separately. Lin et al. <cit.> scaled the depth map to multiple sizes to aggregated features at different resolutions, and then adopted feature attention structures <cit.> to fuse RGB features. Sun et al. <cit.> adopted a similar dual stream structure, where the depth map branch used a shallower network to avoid overfitting. The features of the two branches were first cross fused in the middle part, and then concatenated together. This resulted in better results compared to direct concatenation. Each of the aforementioned approaches exhibits certain limitations. Primarily, they solely focus on regressing hand pose without undertaking shape reconstruction. While point cloud structures offer a more accurate depiction of geometric information compared to depth maps, there is a dearth of research in this area. Additionally, simply stitching the global features may pose challenges in effectively capturing local structures. A potential solution lies in multi-scale feature fusion. By taking into account these aforementioned limitations, we introduce a novel framework for dense hand mesh reconstruction, built upon our pyramid fusion module (PDFNet). § METHODOLOGY The goal of this paper is to restore a dense 3D mesh of both hands within real-world scenes through a single RGB-D image. Our framework takes both RGB images and point cloud generated from depth maps as inputs, and extracts features using classic ResNet50 <cit.> and PointNet++ <cit.>, respectively. Subsequently, the extracted features are fed into PDFNet for deep fusion to improve the performance of our model. The fused features are then fed into the GCN-based decoder to output the dense 3D mesh of both hands. By ingeniously fusing the modalities of RGB and depth, we are able to accurately reconstruct a 3D hand mesh with real depth and scale in camera space. 
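To give a concrete sense of this dual-stream layout, the runnable PyTorch sketch below wires an RGB branch and a point-cloud branch into a single vertex regressor. Only the ResNet50 backbone corresponds directly to the text; the per-point encoder, the plain concatenation fusion, and the linear vertex head are simplified stand-ins for PointNet++, PDFNet, and the GCN decoder, and all layer sizes are illustrative assumptions rather than the actual configuration.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class TwoHandRGBDNet(nn.Module):
    # Schematic dual-stream pipeline: RGB features from ResNet50, point features from a
    # placeholder point-cloud encoder, concatenation fusion in place of PDFNet, and an
    # MLP head in place of the GCN decoder (778 MANO vertices per hand).
    def __init__(self, n_verts=778):
        super().__init__()
        backbone = resnet50(weights=None)
        self.rgb_encoder = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 2048, 1, 1)
        self.point_encoder = nn.Sequential(  # per-point MLP + max-pool (PointNet-style stand-in)
            nn.Conv1d(3, 64, 1), nn.ReLU(), nn.Conv1d(64, 256, 1), nn.ReLU(), nn.Conv1d(256, 1024, 1)
        )
        self.fuse = nn.Sequential(nn.Linear(2048 + 1024, 1024), nn.ReLU())
        self.decoder = nn.Linear(1024, 2 * n_verts * 3)  # left- and right-hand vertices
        self.n_verts = n_verts

    def forward(self, rgb, points):
        # rgb: (B, 3, H, W); points: (B, N, 3) back-projected from the masked depth map
        f_rgb = self.rgb_encoder(rgb).flatten(1)                            # (B, 2048)
        f_pts = self.point_encoder(points.transpose(1, 2)).max(-1).values   # (B, 1024)
        fused = self.fuse(torch.cat([f_rgb, f_pts], dim=1))
        return self.decoder(fused).view(-1, 2, self.n_verts, 3)             # (B, 2 hands, 778, 3)

# toy forward pass at the 384 x 384 input resolution used later in the experiments
net = TwoHandRGBDNet()
verts = net(torch.randn(2, 3, 384, 384), torch.randn(2, 1024, 3))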
For interactive scenarios in AR/VR applications, it is imperative to restore the absolute position within the camera coordinate system, surpassing the limitations of previous root-aligned outputs. Apart from the root position, the depth map also conveys the relative geometric relationship among hand joints, which yields significant contributions to the accuracy of reconstruction in local coordinate systems. §.§ Overview The overall structure of our approach is a classical encoder-decoder architecture, as depicted in Fig. <ref>. Our method can be divided into three integral components, including feature extraction, feature fusion, and feature decoding. Within the feature extraction module (Section <ref>), we extract 2D image features utilizing ResNet50, while simultaneously extracting 3D point cloud features using PointNet++. In the feature fusion phase (Section <ref>), the corresponding RGB features and point cloud features are fused at the pixel level through point cloud indexing. Finally, in the feature decoding phase (Section <ref>), we employ multi-layer Graph Convolutional Networks (GCN) and upsampling operations to decode the input global features into a finely detailed 3D dense mesh representation with two distinct hands. In the following sections, we will provide a detailed description of each module in the framework. §.§ Dual-stream Encoder In the feature extraction module, we need to fully extract the features from the RGB-D image containing both hands from the first perspective to restore accurate pose and shape. RGB Feature Extraction. Firstly, given an unprocessed monocular RGB image ℐ_c∈ℝ^H×W×3, we use the classic ResNet50 to extract 2D pyramid features as follows: F = {ℱ_1∈ℝ^H×W×3, ℱ_2∈ℝ^H/2×W/2×64,ℱ_3∈ℝ^H/4×W/4×256}. Then we adopt two simple decoder networks to regress the center P_ct = { P_l∈ℝ^2, P_r∈ℝ^2 } and mask M = { M_l∈ℝ^H×W, M_r∈ℝ^H×W} of the left and right hands. The predicted center position of each hand will be used to initialize the 3D position of the hand mesh, and the predicted mask will be used to segment the hand area in the depth map. Point Cloud Preprocessing. Given an unprocessed depth map ℐ_d∈ℝ^H×W×1 and predicted mask M for both hands, we first convert the 2D image into a 3D point set using the camera's intrinsic parameters. By calculating the mean depth of the point set, we filtered out outliers that exceed the threshold range [-0.08,+0.08] mm to reduce noise interference. Then, we randomly selected 1024 points from the remaining point set as the initial point cloud. Based on the generated initial point cloud, we can directly extract point cloud features using a specially designed network. Review of PointNet++. Compared to directly extracting features from 2D depth maps using CNNs, PointNet <cit.> pioneered the extraction of high-dimensional features directly from point cloud through a per-point multi-layer perceptron (MLP) network. However, there is a lack of mining for local structural features due to the fixed number of points in PointNet. Therefore, PointNet++ <cit.> proposed a hierarchical feature extraction architecture to address this issue. Specifically, it includes multiple point set abstraction levels by selecting a fixed number of points in each layer as the center of the local area. The K neighbors around each center point will be aggregated and high-dimensional features will be extracted through the classic PointNet network. 
The center point and high-dimensional features will be fed into the next layer and the aggregation operation will be repeated. Finally, global features are extracted from all points in the last layer through the PointNet network. It is worthy of noting that previous work often used PointNet for point cloud classification, and it is still an unexplored field to predict dense 3D meshes from sparse point cloud. Depth Feature Extraction. Given a set of point cloud data with both hands X_h = { X_l∈ℝ^N×C, X_r∈ℝ^N×C}, we refer to the structure of PointNet++ to extract pyramid point cloud features as follows: P = {𝒫_1∈ℝ^2×N×C, 𝒫_2∈ℝ^2×N_1×C_1,𝒫_3∈ℝ^2×N_2×C_2}. In our implementation, N=1024, C=3, N_1=512, C_1=131, N_2=128, C_2=259. At each level of point set abstraction, our approach involves employing ball queries to locate neighboring points within a predefined radius range. These identified points are subsequently fed into Multi-Layer Perceptrons (MLPs), enabling the extraction of high-dimensional features that correspond to the number of central points. These features are then concatenated with the features of the central point, yielding the point cloud features specific to that particular layer. Note that the pyramid point cloud features obtained at this stage exhibit a structure similar to the pyramid RGB features obtained earlier. In other words, as the scale decreases, the number of channels deepens, representing a continuous process of feature abstraction. By integrating pyramid features at different scales, we delve further into the feature characteristics of distinct modalities, thereby mutually reinforcing and complementing each other. §.§ Pyramid Deep Fusion Network At this stage, we have successfully obtained pyramid features for both modalities. Now, the crucial step is to fuse these features effectively. While the most simplistic approach involves a single layer of MLP to generate global features from the two modalities, followed by their concatenation, such a method neglects the local discrepancies present between the modalities. Factors such as motion blur, occlusion, and noise are significant local characteristics that might impair the ability of global features to complement each other. To address this issue, we have adopted a pixel-level feature fusion technique, aligning the corresponding RGB features with the point cloud features through 3D-2D projection of the point cloud. Notably, unlike the approach employed in DenseFusion <cit.>, we perform pixel-by-pixel fusion on multi-scale pyramid features. In contrast to simply concatenating two distinct features, we incorporate a feature space transformation module. This module dynamically allocates weights to avoid the influence of local biases on the overall performance. Specifically, we have designed a three-layer pyramid feature fusion structure, as shown in Fig. <ref>. With the help of PointNet++ <cit.> network, we downsample the initial point cloud 𝒳_1∈ℝ^1024×3 to more sparse point cloud 𝒳_2∈ℝ^512×3 and 𝒳_3∈ℝ^128×3 through central point aggregation. Each set of point cloud finds K neighboring points as a local point set through the ball query of the center point. Through the PointNet network, higher-dimensional features are extracted from each local point set and subsequently consolidated into a single point representation via max-pooling. The resulting aggregated high-dimensional features are concatenated with the center point features from the original point cloud to obtain the point features P of that layer. 
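A minimal sketch of one such point-set-abstraction step, as just described, is given below. Random sampling stands in for farthest-point sampling, a single random (untrained) linear map stands in for the shared PointNet MLP, and the parameter values are illustrative; with out_dim=128 the concatenated output has 131 channels, which is consistent with the C_1 = 131 quoted above under the assumption that the pooled features are stacked onto the 3-dimensional center features.

import numpy as np

def set_abstraction(points, feats, n_centers=512, radius=0.1, k=32, out_dim=128, rng=None):
    """One simplified point-set-abstraction level: sample centers, ball-query neighbors,
    apply a shared (here untrained) per-point linear map, and max-pool within each ball."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(points), n_centers, replace=False)       # stand-in for farthest-point sampling
    centers, center_feats = points[idx], feats[idx]
    W = rng.standard_normal((feats.shape[1] + 3, out_dim)) * 0.1  # untrained MLP weights (illustrative)
    new_feats = np.zeros((n_centers, out_dim))
    for i, c in enumerate(centers):
        d = np.linalg.norm(points - c, axis=1)
        nbrs = np.argsort(d)[:k]                                  # k nearest candidates
        nbrs = nbrs[d[nbrs] < radius] if np.any(d[nbrs] < radius) else nbrs[:1]  # ball query with fallback
        local = np.concatenate([points[nbrs] - c, feats[nbrs]], axis=1)          # relative coords + features
        new_feats[i] = (local @ W).max(axis=0)                                   # shared MLP + max-pool
    # concatenate pooled features with the center's own features, as in the text
    return centers, np.concatenate([center_feats, new_feats], axis=1)

pts = np.random.default_rng(1).standard_normal((1024, 3)) * 0.05
cts, f = set_abstraction(pts, pts.copy())   # use xyz as the initial per-point features; f has 131 channels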
A full pseudocode listing of this procedure is given in algorithm <ref>. To acquire the RGB features corresponding to specific positions, we retain the index vector of the point cloud with respect to the depth map. Through this index, we project each point onto the 2D feature map and gather the corresponding features, as illustrated in the projection-fetch process depicted in Fig. <ref>. The collected RGB features and point cloud features possess similar dimensions to facilitate seamless feature stitching, which is the conventional and effective approach for feature fusion. However, disparities exist in the data distribution and order of magnitude between the two feature vectors, prompting the need for adaptive allocation of feature weights in order to attain enhanced outcomes. Motivated by the Spatial Feature Transform technique introduced by <cit.>, we have tailored a shallow MLP network to learn scale and shift parameters individually for the aforementioned two features. The RGB features serve as the conditioning factor to acquire the scale and shift parameters. This enables a feature affine transformation that maps the point features into a novel feature space, as illustrated below 𝒫̂ = 𝒫⊙α + β, (α, β) = ψ(ℱ). α and β are the learned scale and shift parameters of the affine transformation, whose dimensions are the same as those of 𝒫. ⊙ refers to element-wise multiplication, while ψ refers to our feature transformation network. The transformed feature 𝒫̂ is aggregated into a sparser high-dimensional feature point cloud through the point set abstraction layer of PointNet++. The point cloud features of the last layer are fused to generate a single global feature 𝒢∈ℝ^2×1024×1 through a PointNet network. After obtaining the fused features, we aim to merge them with the center features derived from CNNs. The center features represent the global characteristics of the entire hand, while the fused features consist of sparse local features. This design combines global and local elements, which maximizes the representation of the input images and leads to improved results. Our subsequent ablation experiments further corroborate this finding. Once we have acquired the final fused features, they are fed into our GCN-based decoder to output dense 3D meshes of both hands. §.§ GCN-based Decoder To fully leverage the extracted feature information, our decoder has been primarily constructed on the foundation of a state-of-the-art (SOTA) method <cit.>. Unlike methods that primarily address interacting hands positioned at the image center, our method accommodates hands appearing in any position within the field of view. Thus, we draw inspiration from the design of CenterNet <cit.> and utilize the center point as a representation of the hand. When extracting the corresponding image features, we collect global features from the central point position within the feature map, instead of directly flattening the entire map. Subsequent comparative experiments have substantiated the advantages of our approach, as it effectively focuses the features on the hand regions rather than the background areas, ultimately yielding superior results. We employ the Chebyshev Spectral Graph Convolutional Network <cit.> to construct our 3D hand mesh, following the classic coarse-to-fine structure. As in <cit.>, we construct a three-layer submesh with designated vertex quantities, N_1=63, N_2=126, N_3=252. The final mesh is consistent with the topology of MANO <cit.>, containing 778 vertices.
Leveraging multiple upsampling layers, we successfully refine the hand mesh from the initial coarser submesh to the ultimate full MANO mesh. Similar to PointNet, GCN learns the geometric structure of 3D meshes by directly optimizing the features on each vertex. Given the fused global feature 𝒢, we map it into a more compact feature vector through a fully connected layer and concatenate it with the position encoding of vertices to obtain our initial graph features 𝒢_𝒱∈ℝ^N×C, (N=63,126,252),(C=512,256,128). Similar to  <cit.>, our graph convolution operation on each graph feature is defined as follows: G_out = ∑_k=0^K-1 C_k(L̂) G_in W_k. where C_k is Chebyshev polynomials of degree k and L̂∈ℝ^N×N is the scaled Laplacian matrix. W_k ∈ℝ^C_in×C_out is a learnable weight matrix. G_in∈ℝ^N×C_in and G_out∈ℝ^N×C_out are input and output features in graph convolution operations, respectively. Through multiple regression heads composed of fully connected layers, we map the graph features of the last layer to the corresponding optimization objectives, such as root node coordinates, root-aligned MANO mesh, GCN mesh, etc. §.§ Loss Functions To facilitate end-to-end training of the entire model, we design a series of loss functions to constrain the learning process of parameters. In contrast to the original GCN approach <cit.>, we augment our model with a localization module for both hands. This module incorporates an initialization scheme for the root node position, leveraging the hand center, and facilitates feature extraction at each central position. All our loss functions are provided in comprehensive detail below. Center Loss is used to supervise our hand center learning. In essence, it is a pixel-wise binary logistic regression problem. The center points of the left and right hands are positive samples, while the rest are negative samples. Similar to CenterNet <cit.>, we use the form of focal loss <cit.> to avoid the impact of imbalanced positive and negative sample sizes as follows: ℒ_c = ∑_h ∈{L,R} (1 - A_h)^γlog(A_h), where A_h ∈ [0,1] is the estimated confidence map for the positive class, and 1- A_h is the probability for the negative class. γ is a hyperparameter and is set to 2 in our experiment. Mask Loss is used to supervise the generation of hand masks, which is a typical semantic segmentation problem. We use smooth L_1 loss to calculate the difference between prediction and ground truth. ℒ_m = ||M - M̂||_1, where M̂ is the ground truth mask and M is our mask prediction. Root Loss represents the L_1 distance between the predicted root node and the ground truth. In this work, we select the first joint of the middle finger as our root node, which is the 9-th of the 21 joints. ℒ_root = ∑_h ∈{L,R} ||Root^h - R̂ôôt̂^̂ĥ||_1. Mesh Loss includes our GCN mesh loss and MANO mesh loss of 3D hand vertices, and we use L_1 loss for calculation. ℒ_V = ∑_h ∈{L,R} ||ℳ^h_GCN -ℳ̂^h_GCN||_1 + ||ℳ^h_MANO -ℳ̂^h_MANO||_1. Joint Loss. We use the predefined joint regressor 𝒥 in MANO to generate 3D joints. Similar to mesh loss, we use L1_1 loss. ℒ_J = ∑_h ∈{L,R} || 𝒥 (ℳ^h_MANO) - 𝒥 (ℳ̂^h_MANO)||_1. Re-projection Loss. We use projection functions to project 3D meshes and key points onto 2D images to calculate the re-projection loss, which is achieved through L_2 loss. ℒ_rep = ∑_h ∈{L,R} || (Π ( ℳ^h_MANO) - Π (ℳ̂^h_MANO))||_2 + || (Π ( 𝒥(ℳ^h_MANO)) - Π (𝒥 (ℳ̂^h_MANO)))||_2. Smooth Loss. To ensure the smoothness of the output mesh, we add normal vectors and edge length loss. 
ℒ_smooth = ∑_i=1^3|| e_i·n̂||_1 + ||e - ê||_1, where n̂ and e_i represent the ground truth normal vector and three edges on each face in the predicted mesh, respectively. e represents the length of each edge, while ê represents the corresponding ground truth. § EXPERIMENT §.§ Implementation Details Our proposed framework is implemented with PyTorch <cit.>, incorporating an asymmetric dual-stream architecture for feature encoding. Unlike IntagHand <cit.> necessitating centered interactive hands in their training process, our model accommodates hands from arbitrary positions from the first perspective. For instance, with the H2O dataset <cit.> as our exemplar, we transform the input image into a square shape using zero padding and subsequently rescale it uniformly into 384 × 384. Although larger resolutions can preserve more details, they place higher demands on training memory. To conduct the training, we utilize two RTX2080Ti GPUs and assign a batch size of 8 instances per card. The initial learning rate is set to 1×10^-4 and decreases by a factor of 10 at the 30th epoch. The entire training procedure spans 80 epochs and typically takes approximately three days to complete. Common data augmentation strategies, including scaling, rotation, translation, color jittering, and horizontal flipping, are used during training. §.§ Datasets and Evaluation Metrics H2O <cit.> is a realistic two-handed dataset that contains multi-view RGB-D images. We only used the first perspective data, including 55,742 images in the training set, 11,638 images in the validation set, and 23,391 images in the test set. The dataset provides high-resolution images, with RGB images and depth maps being 1280 × 720 and pixel aligned. In addition, it provides 62-dimensional MANO annotations for each hand, which can generate corresponding 3D meshes and keypoints. With the help of the camera intrinsic matrix, we obtained the corresponding 2D landmarks. The dataset captures different objects in different desktop backgrounds, resulting in complex and varied hand poses. H2O-3D <cit.> is a real captured dataset that focuses on the interaction scenarios between two hands and objects. There are a total of 17 multi-view sequences, with 5 experimenters participating and manipulating 10 different objects for recording. This dataset collected 3D annotations for 76,340 images, including 60,998 images from 69 single-camera sequences used in the training set and 15,342 images from 16 single-camera sequences used in the testing set. The dataset provides RGB-D image pairs with a resolution of 640 × 480, captured from a third perspective. RHD <cit.> is a large-scale synthetic dataset redered from freely available characters. It provides 3D key points annotation for both hands from a third perspective, containing 41,258 training and 2,728 testing data. The RGB-D image pairs are pixel-aligned with resolution of 320 × 320. Evaluation Metrics. To evaluate the accuracy of two-hand reconstruction, we used aligned mean per joint position error (AL-MPJPE) and aligned mean per vertex position error (AL-MPVPE) in millimeters to evaluate 3D key points and 3D mesh vertices after root node alignment, respectively. Practically, it becomes imperative to restore the accurate depth and scale of the reconstructed hands. Consequently, we additionally estimated the position of the root node and directly evaluated the MPJPE and MPVPE in the camera coordinate system. §.§ Comparisons with State-of-the-art Methods Two-hand reconstruction results on H2O dataset. 
Firstly, we compared our method with previous SOTA two-hand reconstruction methods <cit.> on H2O dataset. We also reported several single-hand pose estimation methods <cit.>, using two separate models for the left and right hand images of H2O and the results reported in the table are borrowed from H2O <cit.>. Due to being the first method to use RGB-D input for two-hand reconstruction, there is few existing reconstruction method to compare. Therefore, we added depth input to existing RGB-based methods to demonstrate the superiority of our fusion strategy. Since DenseFusion <cit.> was originally designed for the 6-DoF pose estimation of objects, we integrated it into our proposed framework for comparison. Similarly, the original PointNet++ <cit.> was designed for point cloud classification and segmentation, and we made corresponding modifications based on the authors' original implementation. Table <ref> shows the evaluation results on H2O. Our method significantly surpasses the previous SOTA methods both in terms of absolute position error MPJPE/MPVPE and relative position error AL-MPJPE/AL-MPVPE. It obtains 9.64mm MPJPE of left hands and 11.62mm MPJPE of right hands under camera space. As for root-aligned position error, it obtains 6.93mm AL-MPJPE of left hands and 8.74mm AL-MPJPE of right hands. For a fair comparison, all methods in the table use the ground truth mask provided by the dataset to segment the depth map. In our subsequent ablation experiments, we also compared the results of directly using the mask estimated by the model. The visual comparison results on the testing set of H2O can be seen in Fig. <ref>. We compared our method with DenseFusion <cit.> and IntagHand+D <cit.> on the H2O testing set. By projecting the predicted meshes onto the input image, we can visually compare the alignment results of hand-to-image. In addition, we placed the ground truth meshes and prediction results simultaneously in the camera coordinate system to compare the alignment results of hand-to-hand. We circled the parts with obvious differences in yellow in Fig. <ref>. It can be seen that our method achieved significantly better alignment in all perspectives. Two-hand reconstruction results on H2O-3D dataset. H2O3D is a challenging two-hand dataset from a third perspective. Our model has also been tested separately on it. As the dataset did not provide annotations for the testing set, we submitted and evaluated it on the official online platform[https://codalab.lisn.upsaclay.fr/competitions/4897]. Our method achieved a mean joint error of 10.7mm, which is significantly better than the baseline method's 11.7mm. The visualization results are shown in Fig. <ref>. Two-hand reconstruction results on RHD dataset. RHD is a synthetic two-hand dataset from a third perspective. It is very challenging because the position of the hand varies greatly, from a very close range of over 10mm to a distance of over 2 meters. We mainly used this dataset to test whether our PDFNet module can work in complex scenarios, and the results in Table <ref> also confirm this. As this dataset only provides annotations for sparse 3D key points, we replaced the GCN-based decoder with a simple three-layer fully connected layer to directly output the coordinates of 21 key points. This experiment also proves that our PDFNet algorithm has certain universality and benefits multiple decoders. The visualization results on this dataset can be seen in Fig. <ref>. 
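For reference, a minimal sketch of the two joint-error metrics used in these comparisons is given below; the function names are ours, and the root index follows the ninth of the 21 joints mentioned in the loss definitions, assuming zero-based array indexing.

import numpy as np

def mpjpe(pred, gt):
    # pred, gt: (J, 3) joint positions in millimetres, expressed in camera coordinates
    return np.linalg.norm(pred - gt, axis=-1).mean()

def al_mpjpe(pred, gt, root=9):
    # root-aligned error: subtract each hand's root joint before comparing
    return mpjpe(pred - pred[root:root+1], gt - gt[root:root+1])

joints_pred = np.random.rand(21, 3) * 100.0
joints_gt = joints_pred + np.random.randn(21, 3)
print(mpjpe(joints_pred, joints_gt), al_mpjpe(joints_pred, joints_gt))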
§.§ Ablation Study We conducted a series of extensive ablation experiments to confirm the contributions of different modules within our framework. Firstly, our objective is to demonstrate the performance improvement achieved by incorporating depth maps comparing to using solely RGB inputs. This aligns with the intuition of most individuals and our original intention behind the design of fusion modules. Secondly, we aim to demonstrate the advantage of our proposed PDFNet algorithm over existing fusion strategies. To accomplish this, we focus on extend only the feature fusion module within the same encoder-decoder framework. Therefore, we show the efficacy of our pyramid design and feature transformation module. In the following, we present thorough ablation experiments and provide a detailed analysis of the corresponding outcomes. Comparison of different input modalities. We conducted a comprehensive analysis of our model's performance under different input scenarios, and the experimental results are presented in Table <ref>. Row-1 in the table denotes the baseline method, where only RGB images are employed for feature extraction. In Row-2, only depth maps are utilized to extract point features, with the aid of ground truth masks to generate the initial point cloud. Row-3 illustrates the model's direct usage of 4-channel RGB-D images as input. Comparing rows 1-3 in the table, it becomes evident that the integration of depth maps leads to a significant reduction in error by 50%. This aligns with our initial expectations, as predicting depth solely from RGB input inherently presents challenges due to the ill-posed nature of the problem. Visual comparisons are showcased in Fig. <ref>. Row-3 attains lower absolute position error compared to Row-2, however, it does not possess an advantage in terms of relative position error. This indicates that RGB images contribute to improving the accuracy of the final predictions while also introducing some background interference information. Moreover, it signifies that a simplistic and coarse 4-channel input is not an ideal solution. This emphasizes our preference to fully leverage the complementary nature of the two input modalities. Comparison of different fusion strategies. By comparing the results of DenseFusion <cit.> and Ours-full in Table <ref>, it can be found that our devised pyramid feature fusion method confers significant performance advantages. The absolute position error (MPJPE) has decreased from 23.58mm to 10.63mm, while the relative position error (AL-MPJPE) has decreased from 20.81mm to 7.84mm. Furthermore, we delved further into the impact of different modules in PDFNet on the final performance, as shown in Table <ref>. Through a comparison between Row-4 and Row-7, we observe that the feature transformation network (FTN) plays a pivotal role in reducing model error, underscoring the necessity of adaptive weight allocation for the two input modalities. Otherwise, the introduction of undesired background interference features, as seen in Row-3, would adversely affect the final model performance. In Row-5, we removed the center feature and directly utilized the fusion result of the point feature and RGB feature as the final feature. It is noteworthy that this approach leads to an increase of approximately 2mm in MPJPE and 1mm in AL-MPJPE compared to the full model. 
We posit that incorporating global center features proves advantageous, as it prevents the model from falling into local optima or overfitting by effectively leveraging the informative global-local features. In Row-6, we attempted to employ the model's estimated mask for segmenting the depth map. However, this resulted in an increase of nearly 1mm in all error metrics. This phenomenon can be attributed to the inherent challenges in semantic segmentation itself, where there is inevitably a disparity between the predicted mask and the ground truth. Luckily, our model's performance only experienced a minor decrease while still surpassing the state-of-the-art (SOTA) methods significantly. Comparison of different decoders. To demonstrate the efficacy of our framework and the ease deployment of PDFNet, we replaced our GCN-based decoder with a MANO-based decoder. The experimental results are shown in Table <ref>, indicating that the GCN module used in our framework has achieved significant performance advantages. In addition, compared to the results of IntagHand and IntagHand+D in Table <ref>, our MANO version still achieved improvements of 24.62mm MPJPE and 2.49mm MPJPE, respectively. This indicates that our PDFNet is an effective feature fusion algorithm that can extract more effective features for MANO-based decoders. Limitations. The precision and generalizability of the model's mask predictions are not yet optimal. It is worthy of contemplating the utilization of expansive pre-trained models, such as SAM <cit.>, to achieve greater adaptability across a wider application scenarios. In real-world implementations, the incorporation of temporal information from consecutive frames is imperative in acquiring consistent estimations. Regrettably, our current methodology solely supports single-frame RGB-D images as input, indicating room for further improvement. § CONCLUSION This paper presents a comprehensive end-to-end framework for reconstructing both hands from a single RGB-D input. We adopt a well-designed dual-stream architecture to extract depth and RGB features, separately. Moreover, a novel pyramid feature fusion algorithm, named PDFNet, is introduced to synergistically leverage the strengths of these two complementary input modalities. The model successfully generates dense two-hand meshes in the camera coordinate system by employing our GCN-based decoder. Experiments have shown that the fusion algorithm and reconstruction framework proposed in this paper can accurately reconstruct two-hand meshes with real depth and scale. Compared to the state-of-the-art methods, our approach obtains a remarkable enhancement in performance. In future work, we aim to explore hand-object interaction and human-environment interaction to broaden the scope of application scenarios. Furthermore, both temporal and multi-perspective information can be considered to improve the usability of the model. IEEEtran [ < g r a p h i c s > ]Jinwei Ren is currently a PhD candidate in the College of Computer Science and Technology, Zhejiang University, Hangzhou, China. Before that, he received the bachelor degree from Chongqing University, China, in 2017. His research interests include SLAM and computer vision, with a focus on 3D reconstruction. [ < g r a p h i c s > ]Jianke Zhu received the master’s degree from University of Macau in Electrical and Electronics Engineering, and the PhD degree in computer science and engineering from The Chinese University of Hong Kong, Hong Kong in 2008. 
He held a post-doctoral position at the BIWI Computer Vision Laboratory, ETH Zurich, Switzerland. He is currently a Professor with the College of Computer Science, Zhejiang University, Hangzhou, China. His research interests include computer vision and multimedia information retrieval.
http://arxiv.org/abs/2307.04586v2
20230710142856
Timbre transfer using image-to-image denoising diffusion implicit models
[ "Luca Comanducci", "Fabio Antonacci", "Augusto Sarti" ]
eess.AS
[ "eess.AS" ]
Timbre transfer techniques aim at converting the sound of a musical piece generated by one instrument into the same piece as if it were played by another instrument, while maintaining as much as possible the content in terms of musical characteristics such as melody and dynamics. Following their recent breakthroughs in deep learning-based generation, we apply Denoising Diffusion Models (DDMs) to perform timbre transfer. Specifically, we apply the recently proposed Denoising Diffusion Implicit Models (DDIMs), which make it possible to accelerate the sampling procedure. Inspired by the recent application of DDMs to image translation problems, we formulate the timbre transfer task similarly, by first converting the audio tracks into log mel spectrograms and by conditioning the generation of the desired timbre spectrogram on the input timbre spectrogram. We perform both one-to-one and many-to-many timbre transfer, by converting audio waveforms containing only single instruments and multiple instruments, respectively. We compare the proposed technique with existing state-of-the-art methods both through listening tests and objective measures in order to demonstrate the effectiveness of the proposed model. § INTRODUCTION Timbre is an extremely important perceptual aspect of music, yet it is hard to both model and define. The concept of musical timbre can be defined as the perceived characteristics of a musical sound that are different from pitch and amplitude contours <cit.>. Timbre transfer concerns the task of converting a musical piece from one timbre to another while preserving the other music-related characteristics. While this operation is not trivial, it is of extreme interest for several applications, from the development of plugins to be used in Digital Audio Workstations (DAWs) to enabling the possibility of playing sounds of musical instruments that are not widely available.
In this paper, we present DiffTransfer, a technique for timbre transfer that we test both between single instruments and between mixtures of instruments, based on a continuous Denoising Diffusion Implicit Model (DDIM) with deterministic sampling <cit.>, a modified version of Denoising Diffusion Probabilistic Models (DDPMs) that is trained using the same procedure but allows for faster sampling times. Specifically, in <cit.> it was empirically shown that DDIMs allow for 10×-50× faster wall-clock sampling with respect to DDPMs. In order to convert one timbre into another, we use a procedure similar to the recently proposed image-to-image technique Palette <cit.>. Specifically, the diffusion model takes noise as input and is conditioned on the chosen input timbre spectrogram; through the denoising procedure, the model learns to reconstruct spectrograms of the desired timbre. We consider the scenario where the timbre-transfer task is paired, which means that the desired and input spectrograms have the same melodic/harmonic content but differ in terms of timbre. We experiment with converting both tracks containing only single instruments and mixtures of instruments, with no prior separation step and with no modification to the model to account for the two configurations. In order to demonstrate the effectiveness of the proposed model, we compare DiffTransfer with state-of-the-art techniques, both through objective measures and by performing a user-based listening test. The source code and audio excerpts can be found at <https://lucacoma.github.io/DiffTransfer/>. § RELATED WORK Several types of timbre transfer techniques have been proposed in the literature. In <cit.> a CycleGAN <cit.> is applied in order to perform an unpaired transfer using the Constant-Q transform, and the audio is then recovered through a WaveNet <cit.> model. In <cit.> an attention-based architecture is applied in order to convert mel spectrograms, which are then inverted through a MelGAN architecture <cit.>. Gaussian mixture-based variational autoencoders are applied in <cit.> in order to learn a latent space where pitch and timbre representations are disentangled. Another class of methods, instead, extracts musical parameters such as pitch and loudness from the input audio tracks and performs the transfer by resynthesizing sound through a network that has learned to generate tracks with the desired timbre. The best-known example of these techniques is the Differentiable Digital Signal Processing (DDSP) <cit.> model. Other similar techniques have been proposed, such as <cit.>, where a hierarchical model is used in order to reconstruct the signal at increasing resolutions. Models that work directly on the audio waveform have also been proposed recently, such as <cit.>, where music pieces are translated to specific timbre domains. To the best of our knowledge, the only model other than the one proposed in this paper that is tested on multi-instrument timbre transfer without any source separation pre-processing is the Music-STAR network, presented in <cit.>. In Music-STAR a WaveNet autoencoder <cit.> is trained by applying teacher forcing <cit.> to the decoders in order to recover the desired timbre.
Denoising Diffusion Probabilistic Models (DDPMs) <cit.> have recently become the state of the art in deep learning-based generation, quickly replacing Generative Adversarial Networks (GANs) <cit.> and Variational Autoencoders <cit.> due to their easier training procedure and the increased quality of the produced results. DDPMs have been successfully applied to a wide variety of image-related tasks such as generation <cit.> and translation <cit.>. More recently, DDPMs have also been used for audio-related tasks. In <cit.> a diffusion model is applied in order to convert MIDI tracks to spectrograms, while in <cit.> a text-to-music diffusion model is proposed. DDPMs have also been applied to symbolic music generation <cit.>, speech synthesis <cit.> and singing voice extraction <cit.>. While DDPMs have extremely powerful generation capabilities, they suffer from slow sampling times. To ameliorate this issue, Denoising Diffusion Implicit Models (DDIMs) <cit.> were proposed; they allow for faster sampling times and have recently been applied to image inpainting <cit.>. § PROPOSED MODEL In this section, we describe the proposed DiffTransfer technique for timbre transfer. Instead of working directly with raw audio signals, we convert them into log mel-scaled spectrograms, which are easier for deep learning models to handle. We then propose a model that, given as input the spectrogram corresponding to the conditioning instrument, generates the corresponding target spectrogram that would have been obtained by playing the same piece of music with the target instrument. In practice, we achieve this through a conditional continuous-time DDIM, which learns to denoise the target instrument spectrogram while being conditioned on the input instrument spectrogram, as depicted in Fig. <ref>. At inference time, the model is fed with the input conditioning instrument concatenated with Gaussian noise and generates the corresponding target spectrogram. We retrieve the audio signal by applying to the log mel spectrograms the SoundStream[<https://tfhub.dev/google/soundstream/mel/decoder/music/1>] model <cit.>, provided by <cit.>, where it was trained on a custom music dataset. In the following, we provide a brief overview of the DDIM framework and of the notation used in this paper, keeping the treatment as compact as possible; for additional and more thorough formulations, we refer the reader to <cit.> and <cit.>. We aim at giving a general overview of the process and use a slight abuse of notation to describe the diffusion process in the continuous-time framework, in order to stay close to the more common literature regarding DDPMs and DDIMs. §.§ Diffusion Decoder We adopt a procedure similar to the Palette <cit.> image-to-image translation technique in order to train the timbre transfer decoder as a Denoising Diffusion Implicit Model (DDIM) <cit.>. Broadly speaking, DDIMs work by learning how to generate data from noise in a two-part procedure. The first part is the forward process, where Gaussian noise γ∼𝒩(0,1) is successively added to the input until the input is indistinguishable from noise. The second part is the reverse process, where a decoder learns how to invert the forward process, effectively reconstructing data from the noise. DDIMs can be seen as a generalization of DDPMs that shares the same training procedure; however, they differ in the modeling of the reverse process, using a non-Markovian diffusion process, which allows for faster generation times.
§.§.§ Forward Process Let us define 𝐗 and 𝐘 as the log mel spectrograms corresponding to the conditioning and target instruments, respectively. We choose a continuous diffusion time <cit.> in order to be able to change the number of desired sampling steps. If we consider T steps, then the diffusion time can be defined as t ∈ [0,1], where consecutive times are separated by Δ_t=1/T. Then, the forward process is defined similarly to the case of DDPMs, by successively adding noise to the target spectrogram for T steps q(𝐘_t|𝐘_t-Δ_t) = 𝒩(𝐘_t; √(α_t)𝐘_t-Δ_t, β_t 𝐈), q(𝐘_1:T|𝐘_0) = ∏_t=1^T q(𝐘_t|𝐘_t-Δ_t), where α and β are parameters defined by a simplified cosine schedule <cit.>. §.§.§ Reverse Process In the case of DDIMs, the reverse diffusion process is operated by introducing an additional distribution p_θ, where a sample 𝐘_t-Δ_t can be generated from a sample 𝐘_t as 𝐘_t-Δ_t = √(α_t-Δ_t)( (𝐘_t-√(β_t)γ_θ^(t)(𝐘_t,𝐗))/√(α_t)) + √(1-α_t-Δ_t)·γ_θ^(t)(𝐘_t,𝐗), where γ_θ^(t) is the noise estimated by a network with parameters θ. The noise at time t, γ_θ^(t), is estimated by a network that is also conditioned on the input timbre spectrogram 𝐗, similarly to the formulation proposed in Palette <cit.>. §.§.§ Training Procedure The denoising process is operated through a U-Net architecture, which is conditioned on 𝐗 and trained to predict the added noise by minimizing the L1 loss 𝔼||γ_θ^(t)(𝐘_t,𝐗)-γ||_1^1, where γ is the true perturbation, while γ_θ^(t)(𝐘_t,𝐗) is the estimate of the noise added to the target spectrogram at time t, conditioned on the input spectrogram 𝐗. §.§ Architecture The decoder architecture is based on a U-Net model. The building element is the residual block: in each of these, the input is processed by (i) a 2D convolutional layer with swish activation, followed by batch normalization, and (ii) a convolutional layer with no activation. Both convolutional layers have kernel size 3. The output of this procedure is then summed with the residual, which is obtained by processing the input with a convolutional layer with kernel size 1. The encoder part of the network consists of 3 downsampling blocks, each consisting of 4 residual blocks, with filter sizes 64, 128 and 256, respectively. The output of each downsampling block is followed by average pooling with pool size 2, in order to compress the dimension of the spectrograms. The last block of the encoder is followed by a self-attention block. The bottleneck obtained through the encoder is processed by a residual block with 512 filters and is then processed by the decoder, which is a mirrored version of the encoder. The only difference lies in the use of transposed convolutions to create the upsampling layers needed to increase the dimension of the features. The last downsampling layer of the encoder, the bottleneck, and the first upsampling layer of the decoder are followed by self-attention. §.§ Deployment The proposed model takes as input spectrograms of a fixed size; therefore, audio tracks longer than the ones used for training need to be sliced accordingly. The decoder takes as input the conditioning spectrogram 𝐗 and the diffusion noise and retrieves an estimate of the noise, which can then be subtracted in order to obtain an estimate of the desired output timbre spectrogram 𝐘̂. The output waveform y can then be obtained by feeding the pre-trained SoundStream model with 𝐘̂.
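To make the reverse process above concrete, the following is a minimal numerical sketch of conditional deterministic DDIM sampling, written in Python. It is not the authors' implementation: the trained conditional U-Net is replaced by a placeholder noise predictor, and the simplified cosine schedule, the 0.02/0.95 rates, and all names (cosine_schedule, noise_model, ddim_sample, n_steps) are illustrative assumptions; the arrays stand for log mel spectrogram slices.

import numpy as np

def cosine_schedule(t, min_rate=0.02, max_rate=0.95):
    # Signal/noise rates for a continuous time t in [0, 1]; a simplified
    # cosine schedule in the spirit of the one cited above (illustrative).
    start, end = np.arccos(max_rate), np.arccos(min_rate)
    angle = start + t * (end - start)
    return np.cos(angle), np.sin(angle)  # (signal rate, noise rate)

def noise_model(y_t, x_cond, t):
    # Placeholder for the trained conditional noise predictor gamma_theta(Y_t, X);
    # in the real model the conditioning spectrogram is concatenated with Y_t.
    return np.zeros_like(y_t)

def ddim_sample(x_cond, n_steps=50, seed=0):
    # x_cond: conditioning (input-timbre) log mel spectrogram slice.
    rng = np.random.default_rng(seed)
    y_t = rng.standard_normal(x_cond.shape)       # start from pure Gaussian noise
    times = np.linspace(1.0, 0.0, n_steps + 1)
    for t_now, t_next in zip(times[:-1], times[1:]):
        sig, noi = cosine_schedule(t_now)
        eps = noise_model(y_t, x_cond, t_now)     # predicted noise at time t
        y0_hat = (y_t - noi * eps) / sig          # predicted clean target spectrogram
        sig_next, noi_next = cosine_schedule(t_next)
        y_t = sig_next * y0_hat + noi_next * eps  # deterministic DDIM update
    return y_t                                    # estimated target-timbre spectrogram

y_hat = ddim_sample(np.zeros((128, 128)))         # convert one 128x128 slice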
§ EXPERIMENTS In this section, we describe experiments performed with the aim of demonstrating the capabilities of the proposed DiffTransfer technique in both the single-instrument and multi-instrument application scenarios. In Fig. <ref> we show an example of input, generated and ground-truth spectrograms obtained via the DiffTransfer model when converting from clarinet to strings. §.§ Dataset In order to train the model we considered the StarNet dataset <cit.>, which contains a set of tracks played in two timbre domains, namely strings-piano and vibraphone-clarinet. The dataset consists of roughly 22 hours of audio. We used the reduced version of the dataset, where tracks are resampled to 16000 Hz, and converted them to mono. In order to perform the evaluation, we use the same ten tracks considered in <cit.>, so as to ease the comparison with their model. §.§ Techniques Under Comparison We consider two baselines against which to compare the performance of the proposed DiffTransfer architecture. For the single-instrument timbre transfer task, we consider the Universal Network <cit.> fine-tuned on the StarNet dataset as done in <cit.>. For the multi-timbre task, we consider the mixture-supervised version of the Music-STAR network proposed in <cit.>. We perform three different types of timbre transfer tasks: single, where only single instruments are converted; single/mixed, where the separate conversions of single instruments are mixed in order to create the desired mixture track; and mixture, where the mixture is directly converted. This nomenclature is used just to ease the presentation of the results; we would like to point out that the DiffTransfer architecture requires no specific changes for the various types of applications, except for the choice of the desired input data. §.§ Experiment Setup The Universal Network and Music-STAR architectures are trained with the procedure described in <cit.>. The DiffTransfer network is trained for 5000 epochs using a batch size of 16, with the AdamW optimizer <cit.> with learning rate 2e-5 and weight decay 1e-4. The model used to compute the results is the one saved at the epoch that minimizes the L1 noise prediction loss. We train a total of six models, performing the following timbre transfer conversions: vibraphone to piano, piano to vibraphone, clarinet to strings, strings to clarinet, vibraphone/clarinet to piano/strings, and piano/strings to vibraphone/clarinet. The network input features are computed by first applying the Short-Time Fourier Transform (STFT) with a Hann window of size 0.020 s and 50% overlap to normalized audio tracks. Then the log mel spectrogram is computed over 128 bins corresponding to the range of 0-16000 Hz. We do not feed the entire audio tracks as input to the network; instead, during each epoch we extract 128 frames from the log mel spectrogram, corresponding to ≈ 2 s. Each spectrogram slice is normalized between -1 and 1 before being given as input to the network, and the output spectrograms are denormalized before being fed to the SoundStream model in order to recover the audio waveform. Since the tracks considered for the test are 10 s long and the model takes as input a fixed 128-frame spectrogram, we slice the conditioning spectrogram before feeding it into the model and keep the input noise fixed for all slices, in order to ensure consistency in the generation. A short sketch of this feature-extraction pipeline is given below.
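As referenced above, here is a short Python sketch of the described feature-extraction pipeline (20 ms Hann window with 50% overlap, 128 mel bins, 128-frame slices normalized to [-1, 1]). It is an illustration only: the use of librosa, the log offset, the per-track min-max normalization, and the 8 kHz upper mel frequency are our assumptions rather than details taken from the paper.

import numpy as np
import librosa

def log_mel_slices(audio_path, sr=16000, win_s=0.020, n_mels=128, frames=128):
    # Load mono audio at 16 kHz, matching the reduced StarNet setup.
    y, _ = librosa.load(audio_path, sr=sr, mono=True)
    n_fft = int(win_s * sr)          # 20 ms Hann window (320 samples)
    hop = n_fft // 2                 # 50% overlap
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop,
        window="hann", n_mels=n_mels, fmax=sr // 2)
    logmel = np.log(mel + 1e-6)      # log mel spectrogram
    # Normalize to [-1, 1] (per-track min-max; the paper's exact scheme may differ).
    logmel = 2.0 * (logmel - logmel.min()) / (logmel.max() - logmel.min() + 1e-9) - 1.0
    # Slice into fixed chunks of 128 frames (roughly 2 s) for the network.
    n_slices = logmel.shape[1] // frames
    return [logmel[:, i * frames:(i + 1) * frames] for i in range(n_slices)]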
As noted above, all spectrogram slices are normalized in the range [-1, 1] and denormalized before being fed to the SoundStream decoder. §.§ Objective Evaluation We evaluate the model objectively in order to analyze the perceptual similarity and content-preservation capabilities of the generated tracks with respect to the ground truth audio. In order to evaluate the perceptual similarity, we compute the Fréchet Audio Distance (FAD) <cit.> using the VGGish embeddings <cit.>, through a PyTorch implementation[<https://pypi.org/project/frechet-audio-distance/>]. FAD is a reference-free metric for music enhancement algorithms, which models the embeddings as continuous multivariate Gaussians and is computed between the real and generated data as FAD = ||μ_r -μ_g||^2 + tr(Σ_r + Σ_g -2√(Σ_r Σ_g)), where (μ_r, Σ_r) and (μ_g, Σ_g) are the means and covariances of the embeddings corresponding to the real and generated data, respectively. Similarly to <cit.>, we compute FAD in order to analyze the perceptual similarity of the generated audio with respect to the ground truth audio from the original StarNet dataset. To understand the content-preservation capabilities of the model, following <cit.>, we compute how dissimilar the pitch contours of the generated and ground truth audio tracks are, by calculating the mismatch between two sets of pitches A and B through the Jaccard distance JD(A,B) = 1 - |A ∩ B|/|A ∪ B|, where a lower value corresponds to a lower mismatch and thus to a higher degree of similarity between the pitch contours. Pitch contours are computed using a multi-pitch version of the MELODIA algorithm <cit.>, as implemented in the Essentia library <cit.>, rounding pitches to the nearest semitone. We report the values obtained by computing the metrics on the test dataset in Table <ref>; a small computational sketch of both metrics is given at the end of this section. §.§ Subjective Evaluation In order to subjectively evaluate the timbre transfer capabilities, we performed a listening test with 18 human participants. The web page of the test is available at <https://listening-test-ismir-ttd.000webhostapp.com/>. The test was split into two parts, corresponding to the single- and multiple-instrument application scenarios, respectively. During the single-instrument part of the test, the users listened to four tracks, corresponding to the four types of conversions performed, namely: clarinet to strings, strings to clarinet, piano to vibraphone, and vibraphone to piano. Each example consisted of two conditions, one obtained via the DiffTransfer model and the other through the Universal Network. In the second part of the test, concerning multiple-instrument timbre transfer, a total of four tracks were considered, two for the conversion from vibraphone/clarinet to piano/strings waveforms and two for the reverse conversion. Each example consisted of four conditions, namely DiffTransfer (single/mixed), Universal Network (single/mixed), DiffTransfer (mixture) and Music-STAR (mixture). Both the order of conditions and the order of examples in each separate part of the test were randomized. The participants were asked to rate the conditions in terms of similarity with respect to the reference track on a 5-point Likert scale, where 1 corresponds to bad and 5 to excellent. We report the results obtained through the listening test in Table <ref>.
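As referenced in the objective evaluation above, the following Python sketch shows how the two objective metrics can be computed directly from their definitions. The VGGish embedding extraction is abstracted away, treating pitch contours as flat sets of rounded semitones is a simplification, and the function names are ours, not the paper's or any library's.

import numpy as np
from scipy.linalg import sqrtm

def frechet_audio_distance(emb_real, emb_gen):
    # emb_*: (n_examples, d) embedding matrices (e.g., VGGish embeddings).
    mu_r, mu_g = emb_real.mean(axis=0), emb_gen.mean(axis=0)
    cov_r = np.cov(emb_real, rowvar=False)
    cov_g = np.cov(emb_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g).real           # discard tiny imaginary parts
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(cov_r + cov_g - 2.0 * covmean))

def jaccard_distance(pitches_a, pitches_b):
    # pitches_*: iterables of pitches rounded to the nearest semitone.
    a, b = set(pitches_a), set(pitches_b)
    if not (a | b):
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

rng = np.random.default_rng(0)                    # toy usage example
print(frechet_audio_distance(rng.normal(size=(100, 128)), rng.normal(size=(100, 128))))
print(jaccard_distance([60, 62, 64], [60, 64, 67]))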
§.§ Discussion By inspecting the objective and subjective results, reported in Table <ref> and Table <ref>, respectively, it is clear that the proposed DiffTransfer model outperforms the Universal Network and Music-STAR baselines on both the single and multiple timbre transfer tasks. When considering single-timbre results, DiffTransfer achieves significantly better performance in terms of FAD, Jaccard distance and perceived similarity than the Universal Network. The gap between the two methods becomes even more evident when considering the single/mixed case, i.e. when single timbre transfer tracks are mixed in order to form the desired mixture audio. For the Music-STAR method, the gap with respect to DiffTransfer remains large in terms of FAD, but becomes less noticeable when considering JD and the perceived subjective similarity. § CONCLUSION In this paper, we have presented DiffTransfer, a technique for both single- and multi-instrument timbre transfer using Denoising Diffusion Implicit Models. The novelty of the proposed approach lies in the fact that, in addition to being, to the best of our knowledge, the first application of diffusion models to timbre transfer, it is the first model tested on both single- and multi-instrument timbre transfer without varying the architecture depending on which application is chosen. We compared the proposed model with the state-of-the-art Universal Network and Music-STAR baselines through both objective evaluation measures and a listening test, demonstrating the superior capabilities of the proposed DiffTransfer approach. Future work will involve increasing the quality of the generated audio by taking into account the consistency of consecutive generated spectrograms. Furthermore, we plan to modify the model so that it can perform unpaired timbre transfer, which would greatly ease the dataset requirements and broaden the applicability of the technique.
http://arxiv.org/abs/2307.05864v1
20230712012552
Stable-Limit Non-symmetric Macdonald Functions
[ "Milo James Bechtloff Weising" ]
math.RT
[ "math.RT", "math.CO" ]
Stable-Limit Non-symmetric Macdonald Functions Milo James Bechtloff Weising August 12, 2023 We construct and study an explicit simultaneous 𝒴-eigenbasis of Ion and Wu's standard representation of the ^+stable-limit double affine Hecke algebra for the limit Cherednik operators 𝒴_i. This basis arises as a generalization of Cherednik's non-symmetric Macdonald polynomials of type GL. We utilize links between the ^+stable-limit double affine Hecke algebra theory of Ion-Wu and the double Dyck path algebra of Carlsson-Mellit that arose in their proof of the Shuffle Conjecture. As a consequence, the spectral theory for the limit Cherednik operators is understood. The symmetric functions comprise the zero weight space. We introduce one extra operator that commutes with the 𝒴_i action and dramatically refines the weight spaces to now be one-dimensional. This operator, up to a change of variables, gives an extension of Haiman's operator Δ' from Λ to 𝒫_as^+. Additionally, we develop another method to build this weight basis using limits of trivial idempotents. § INTRODUCTION The Shuffle Conjecture <cit.>, now the Shuffle Theorem <cit.>, is a combinatorial statement regarding the Frobenius character, ℱ_R_n, of the diagonal coinvariant algebra R_n, which generalizes the coinvariant algebra arising from the geometry of flag varieties. The conjecture built on the work of many people during the 1990s, including but not limited to Bergeron, Garsia, Haiman, and Tesler <cit.> <cit.> <cit.>. The following explicit formula is due to Haiman <cit.>: ℱ_R_n(X;q,t) = (-1)^n ∇ e_n[X], where ∇ is a diagonalizable operator on symmetric functions prescribed by its action on the modified Macdonald symmetric functions H_μ as ∇H_μ = H_μ[-1]·H_μ. The original conjecture of Haglund, Haiman, Loehr, Remmel, and Ulyanov <cit.> states the following: <cit.> (-1)^n ∇ e_n[X] = ∑_π∑_w ∈𝒲𝒫_π t^area(π) q^dinv(π,w) x_w. In the above, π ranges over the set of Dyck paths of length n and 𝒲𝒫_π is the set of word parking functions corresponding to π. The values area(π) and dinv(π,w) are certain statistics corresponding to π and w ∈𝒲𝒫_π. In <cit.>, Carlsson and Mellit prove the Compositional Shuffle Conjecture of Haglund, Morse, and Zabrocki <cit.>, a generalization of the original Shuffle Conjecture. Carlsson and Mellit construct and investigate a quiver path algebra called the Double Dyck Path algebra 𝔸_q,t. They construct a representation of 𝔸_q,t, called the standard representation, built on certain mixed symmetric and non-symmetric polynomial algebras with actions from Demazure-Lusztig operators, Hall-Littlewood creation operators, and plethysms. The Compositional Shuffle Theorem falls out after a rich understanding of the standard representation is developed. Later analysis done by Carlsson, Gorsky, and Mellit <cit.> showed that in fact 𝔸_q,t occurs naturally in the context of equivariant cohomology of Hilbert schemes. Recent work by Ion and Wu <cit.> has solidified the links between the work of Carlsson and Mellit on 𝔸_q,t and the representation theory of double affine Hecke algebras. Ion and Wu introduce the ^+stable-limit double affine Hecke algebra ℋ^+ along with a representation of ℋ^+ on the space of almost-symmetric functions, 𝒫_as^+, from which one can recover the standard 𝔸_q,t representation. The main obstruction in making a stable-limit theory for the double affine Hecke algebras is the lack of an inverse/directed-limit system of the double affine Hecke algebras in the traditional sense.
Ion and Wu get around this obstruction by introducing a new notion of convergence (Defn. <ref>) for sequences of polynomials with increasing numbers of variables along with limit versions of the standard Cherednik operators defined by this convergence. Central to the study of the standard Cherednik operators are the non-symmetric Macdonald polynomials. The non-symmetric Macdonald polynomials in full generality were introduced first by Cherednik <cit.> in the context of proving the Macdonald constant-term conjecture. The introduction of the double affine Hecke algebra, along with the non-symmetric Macdonald polynomials by Cherednik, constituted a significant development in representation theory. They serve as a non-symmetric counterpart to the symmetric Macdonald polynomials introduced by Macdonald as a q,t-analog of Schur functions. Further, they give an orthogonal basis of the polynomial representation consisting of weight vectors for the Cherednik operators. The spectral theory of non-symmetric Macdonald polynomials is well understood using the combinatorics of affine Weyl groups. The correct choice of symmetrization applied to a non-symmetric Macdonald polynomial will yield their symmetric counterpart. The type A symmetric Macdonald polynomials are a remarkable basis for symmetric polynomials simultaneously generalizing many other well studied bases which can be recovered by appropriate specializations of values for q and t. The aforementioned modified Macdonald functions H_μ can be obtained via a plethystic transformation from the symmetric Macdonald polynomials in sufficiently many variables. It is natural to seek a stable-limit extension for the non-symmetric Macdonald polynomials following the methods of Ion and Wu. In particular, does the standard ℋ^+ representation 𝒫_as^+ have a basis of weight vectors for the limit Cherednik operators _i? The first main theorem of this paper (Theorem <ref>) answers this question in the affirmative. In the second main theorem of this paper (Theorem <ref>) we use a new operator Ψ_p_1, which commutes with the limit Cherednik operators, to distinguish between -weight vectors with the same -weight. The operator Ψ_p_1 is up to a change of variables an extension of Haiman's operator Δ' <cit.> from Λ to _as^+ (Remark <ref>). The operator Ψ_p_1 is a limit of operators from finite variable DAHAs. We conjecture (Conjecture <ref>) that for any symmetric function F ∈Λ there is an analogous sequence of operators from finite variable DAHAs giving an analogous operator Ψ_F on _as^+. If true, this conjecture would yield an action of the elliptic Hall algebra <cit.> <cit.> on _as^+ (Remark <ref>). This paper is the full version of the author's accepted submission to FPSAC2023 <cit.>. §.§.§ Structure of the paper Section <ref> introduces many of the definitions and notations needed throughout this paper. In Sections <ref> and <ref> we construct a basis of weight vectors for the limit Cherednik operators 𝒴_i. Our strategy for this is the following. First, in Section <ref> we show that the non-symmetric Macdonald polynomials have stable-limits in the sense that if we start with a composition μ and consider the compositions μ * 0^m for m ≥ 0 then the corresponding sequence of non-symmetric Macdonald polynomials E_μ * 0^m converges to an element E_μ of 𝒫_as^+. Next, in Section <ref> we show that these limits of non-symmetric Macdonald polynomials are 𝒴-weight vectors. Importantly, the newly constructed set of E_μ do not span 𝒫_as^+. 
To fill in these gaps, the lowering operators d_- from 𝔸_q,t are used to create enough -weight vectors to span 𝒫_as^+. Finally, a symmetrization operator is used to show that the spanning set obtained from this process is actually a basis in Theorem <ref>. Lemma <ref>, Corollary <ref>, and Lemma <ref> together give a description of the weights for the above weight basis of 𝒫_as^+; in other words we describe the -spectrum. In the last two sections of this paper we investigate some applications of Theorem <ref>. In Section <ref> we derive some recurrence relations for the stable-limit Macdonald function basis similar to the classical Knop-Sahi relations. In Section <ref> we construct an operator Ψ_p_1 on _as^+ which is diagonal on the stable-limit Macdonald function basis, and thus commutes with the limit Cherednik operators . The action of Ψ_p_1 distinguishes between our basis elements with identical -weight. This leads to the second main theorem of this paper, Theorem <ref>, where we prove that after adding this new operator to the algebra of limit Cherednik operators the resulting algebra is commutative and has one dimensional weight spaces in _as^+. §.§.§ Acknowledgments The author would first like to thank their advisor Monica Vazirani for their tremendous help with the proof checking and editing of this paper and for their continued guidance of the author through the hurdles of the academic world. The author would like to thank the FPSAC2023 referees who alerted the author to an unpublished work of Ion and Wu <cit.> which independently determines the -spectrum on 𝒫_as^+. The author would also like to thank Daniel Orr and Ben Goodberry for their insights regarding the symmetrization operators occurring in subsection <ref>. § DEFINITIONS AND NOTATION §.§ Double Affine Hecke Algebras in Type GL We present here the conventions that will be used in this paper for the double affine Hecke algebra of type GL. Take note of the quadratic relation (T_i-1)(T_i+t) = 0 which has been chosen to match with the conventions in <cit.> but may differ from other authors. Define the double affine Hecke algebra ℋ_n to be the ℚ(q,t)-algebra generated by T_1,…,T_n-1, X_1^± 1,…,X_n^± 1, and Y_1^± 1,…,Y_n^± 1 with the following relations: 2 * (T_i -1)(T_i +t) = 0, T_iT_i+1T_i = T_i+1T_iT_i+1, T_iT_j = T_jT_i, |i-j|>1, * T_i^-1X_iT_i^-1 = t^-1X_i+1, T_iX_j = X_jT_i, j ∉{i,i+1}, X_iX_j = X_jX_i, * T_iY_iT_i = tY_i+1, T_iY_j = Y_jT_i, j∉{i,i+1}, Y_iY_j = Y_jY_i, * Y_1T_1X_1 = X_2Y_1T_1, * Y_1X_1⋯ X_n = qX_1⋯ X_nY_1 Further, define the special element ω_n by ω_n := T_n-1^-1⋯ T_1^-1Y_1^-1. This conveniently allows us to write Y_1 = ω_n^-1T_n-1^-1⋯ T_1^-1. When required we will write Y_i^(n) for the element Y_i in ℋ_n to differentiate between the element Y_i^(m) in a different ℋ_m for n ≠ m. We will often use the following basic fact about ℋ_n the proof of which we will omit. Let f(X_1,…,X_n), g(Y_1,…, Y_n) ∈ℋ_n be symmetric Laurent polynomials in X's and Y's respectively. Then for all 1 ≤ i ≤ n-1, [T_i,f(X_1,…,X_n)] = [T_i,g(Y_1,…, Y_n)] = 0. §.§.§ Standard DAHA Representation Let 𝒫_n = ℚ(q,t)[x_1^± 1,…,x_n^± 1]. The standard representation of ℋ_n is given by the following action on 𝒫_n: * T_if(x_1,…,x_n) = s_i f(x_1,…,x_n) +(1-t)x_i 1-s_i/x_i-x_i+1f(x_1,…,x_n) * X_if(x_1,..,x_n)= x_if(x_1,…,x_n) * ω_nf(x_1,…,x_n) = f(q^-1x_n,x_1,…,x_n-1) Here s_i denotes the operator that swaps the variables x_i and x_i+1. Under this action the T_i operators are known as the Demazure-Lusztig operators. 
For q,t generic 𝒫_n is known to be a faithful representation of ℋ_n. The action of the elements Y_1,…,Y_n ∈ℋ_n are called Cherednik operators. Set ℋ_n^+ to be the positive part of ℋ_n i.e. the subalgebra generated by T_1,…,T_n-1, X_1,…,X_n, and Y_1,…,Y_n without allowing for inverses in the X and Y elements and set 𝒫_n^+ = ℚ(q,t)[x_1,…,x_n]. Importantly, 𝒫_n^+ is a ℋ_n^+-submodule of 𝒫_n. For 1≤ i ≤ n-1 define the intertwiners, φ_i^(n)∈ℋ_n, as φ_i^(n):= [T_i,Y_i^(n)] = T_iY_i^(n) - Y_i^(n)T_i. The intertwiner elements have the following properties which are readily verified from the relations of ℋ_n: * φ_i^(n) = T_i(Y_i^(n)-Y_i+1^(n)) + (1-t)Y_i+1^(n) * φ_i^(n)Y_j^(n) = Y_s_i(j)^(n)φ_i^(n) * (φ_i^(n))^2 = (Y_i^(n)-tY_i+1^(n))(Y_i+1^(n)-tY_i^(n)). §.§.§ Non-symmetric Macdonald Polynomials Before discussing non-symmetric Macdonald polynomials we must first review some basic combinatorial definitions. In this paper, a composition will refer to a finite tuple μ = (μ_1,…,μ_n) of non-negative integers. We allow for the empty composition ∅ with no parts. We will let denote the set of all compositions. The length of a composition μ = (μ_1,…,μ_n) is ℓ(μ) = n and the size of the composition is | μ | = μ_1+…+μ_n. As a convention we will set ℓ(∅) = 0 and |∅| = 0. We say that a composition μ is reduced if μ = ∅ or μ_ℓ(μ)≠ 0. We will let denote the set of all reduced compositions. Given two compositions μ = (μ_1,…,μ_n) and β = (β_1,…,β_m), define μ * β = (μ_1,…,μ_n,β_1,…,β_m). A partition is a composition λ = (λ_1,…,λ_n) with λ_1≥…≥λ_n ≥ 1. Note that vacuously we allow for the empty partition ∅. We denote the set of all partitions by . We denote (μ) to be the partition obtained by ordering the nonzero elements of μ in weakly decreasing order. The dominance ordering for partitions is defined by λ⊴ν if for all i≥ 1, λ_1 + … +λ_i ≤ν_1 + … +ν_i where we set λ_i = 0 whenever i > ℓ(λ) and similarly for ν. If λ⊴ν and λ≠ν. we will write λ◃ν. We will in a few instances use the notation 1(p) to denote the value 1 if the statement p is true and 0 otherwise. In this paper we will write 𝔖_n for the permutation group on the set [n]:= {1,…,n}. In line with the conventions in <cit.> we define the Bruhat order on the type GL_n weight lattice ℤ^n as follows. Let e_1,...,e_n be the standard basis of ℤ^n and let α∈ℤ^n. We define the Bruhat ordering on ℤ^n, written simply by <, by first defining cover relations for the ordering and then taking their transitive closure. If i<j such that α_i < α_j then we say α > (ij)(α) and additionally if α_j - α_i > 1 then (ij)(α) > α + e_i - e_j where (ij) denotes the transposition swapping i and j. As an equivalent definition we say α < β if (α) ◃(β) and in the case that λ = (α) = (β) then we have α < β when σ < γ in the Bruhat order for minimal length permutations σ, γ with σ(α) = λ, γ(β) = λ. It is important to note that with respect to the Bruhat order any weakly decreasing vector v ∈ℤ^n is the minimal element in its permutation orbit 𝔖_n.v. The non-symmetric Macdonald polynomials (for GL_n) are a family of Laurent polynomials E_μ∈𝒫_n for μ∈ℤ^n uniquely determined by the following: * Triangularity: Each E_μ has a monomial expansion of the form E_μ = x^μ + ∑_λ < μ a_λx^λ * Weight Vector: Each E_μ is a weight vector for the operators Y_1^(n),…,Y_n^(n)∈ℋ_n. The non-symmetric Macdonald polynomials are a Y^(n)-weight basis for the ℋ_n standard representation 𝒫_n. For μ∈ℤ^n, E_μ is homogeneous with degree μ_1+… +μ_n. 
Further, the set of E_μ corresponding to μ∈ℤ_≥ 0^n gives a basis for 𝒫_n^+. §.§.§ Combinatorial Formula for Non-symmetric Macdonald Polynomials Note that the q,t conventions in <cit.> differ from those appearing in this paper. In the below theorem the appropriate translation q → q^-1 has been made. In <cit.>, Haglund, Haiman, and Loehr give an explicit monomial expansion formula for the non-symmetric Macdonald polynomials in terms of the combinatorics of non-attacking labellings of certain box diagrams corresponding to compositions, which we will now review. <cit.> For a composition μ = (μ_1,…,μ_n) define the column diagram of μ as dg'(μ):= {(i,j)∈ℕ^2 : 1≤ i≤ n, 1≤ j ≤μ_i }. This is represented by a collection of boxes in positions given by dg'(μ). The augmented diagram of μ is given by dg(μ):= dg'(μ)∪{(i,0): 1≤ i≤ n}. Visually, to get dg(μ) we are adding a bottom row of boxes of length n below the diagram dg'(μ). Given u = (i,j) ∈ dg'(μ) define the following: * Leg(u) := {(i,j') ∈ dg'(μ): j' > j} * Arm^left(u) := {(i',j) ∈ dg'(μ): i'<i, μ_i'≤μ_i} * Arm^right(u):= {(i',j-1) ∈dg(μ): i'>i, μ_i'<μ_i} * Arm(u) := Arm^left(u) ∪ Arm^right(u) * leg(u):= |Leg(u)| = μ_i -j * a(u) := |Arm(u)|. A filling of μ is a function σ: dg'(μ) →{1,...,n} and given a filling there is an associated augmented filling σ̂: dg(μ) →{1,...,n} extending σ with the additional bottom row boxes filled according to σ̂((j,0)) = j for j = 1,…,n. Distinct lattice squares u,v ∈ℕ^2 are said to attack each other if one of the following is true: * u and v are in the same row * u and v are in consecutive rows and the box in the lower row is to the right of the box in the upper row. A filling σ: dg'(μ) →{1,…,n} is non-attacking if σ̂(u) ≠σ̂(v) for every pair of attacking boxes u,v ∈dg(μ). For a box u= (i,j) let d(u) = (i,j-1) denote the box just below u. Given a filling σ:dg'(μ)→{1,…,n}, a descent of σ is a box u ∈ dg'(μ) such that σ̂(u) > σ̂(d(u)). Set Des(σ) to be the set of descents of σ and define maj(σ):= ∑_u ∈Des(σ) (leg(u)+1). The reading order on the diagram dg(μ) is the total ordering on the boxes of dg(μ) row by row, from top to bottom, and from right to left within each row. If σ: dg'(μ) →{1,…,n} is a filling, an inversion of σ is a pair of attacking boxes u,v ∈dg(μ) such that u < v in reading order and σ̂(u) > σ̂(v). Set Inv(σ) to be the set of inversions of σ. Define the statistics * inv(σ):= |Inv(σ)| -|{i<j: μ_i ≤μ_j}| - ∑_u ∈Des(σ) a(u) * coinv(σ):= ( ∑_u ∈ dg'(μ) a(u) ) - inv(σ). Lastly, for a filling σ:dg'(μ) →{1,…,n} set x^σ:= x_1^|σ^-1(1)|⋯ x_n^|σ^-1(n)|. The combinatorial formula for non-symmetric Macdonald polynomials can now be stated. <cit.> For a composition μ with ℓ(μ) = n the following holds: E_μ = ∑_σ: μ→ [n] non-attacking x^σ q^-maj(σ) t^coinv(σ)∏_u ∈ dg'(μ) σ̂(u) ≠σ̂(d(u))( 1-t/1-q^-(leg(u)+1)t^(a(u)+1)). We finish this subsection with a visual example of a non-attacking filling and its associated statistics. Below is the augmented filling σ̂ of a non-attacking filling σ: (3,2,0,1,0,0) → [6], pictured as labels inside the boxes of dg(3,2,0,1,0,0), with rows listed from top to bottom and the basement row last: row 3: 6; row 2: 4 1; row 1: 1 2 3; row 0: 1 2 3 4 5 6. Let u be the column 1 box of dg(3,2,0,1,0,0) filled with a 4 in the above diagram. Notice that u is a descent box of σ as 4 is larger than the label 1 of the box d(u) just below u. Further, we see that a(u) = 2 and leg(u) = 1. Considering the diagram as a whole now we see that x^σ = x_1^2x_2x_3x_4x_6, maj(σ) = 3, |Inv(σ)| = 21, inv(σ) = 14, and coinv(σ) = 1.
The contribution of this non-attacking labelling to the HHL formula for E_(3,2,0,1,0,0)∈_6^+ is x_1^2x_2x_3x_4x_6 q^-3t^1( 1-t/1-q^-1t^3)( 1-t/1-q^-1t^2)( 1-t/1-q^-2t^3)( 1-t/1-q^-1t^2). §.§ Symmetric Functions Define the ring of symmetric functions Λ to be the subalgebra of the inverse limit of the symmetric polynomial rings ℚ(q,t)[x_1,…,x_n]^𝔖_n with respect to the quotient maps sending x_n → 0 consisting of those elements with bounded x-degree. For i ≥ 0 define the i-th power sum symmetric function by p_i = x_1^i + x_2^i + … . It is a classical result that Λ is isomorphic to ℚ(q,t)[p_1,p_2,…]. For any expression G = a_1g^μ_1 + a_2g^μ_2 +… with rational scalars a_i ∈ℚ and distinct monomials g^μ_i in a set of algebraically independent commuting free variables {g_1,g_2,…} the plethsytic evaluation of p_i at the expression G is defined to be p_i[G] := a_1g^iμ_1 +a_2g^i μ_2+ … . Note that g_i are allowed to be q or t. Here we are using the convention that iμ = (iμ_1,…, i μ_r) for μ = (μ_1,⋯, μ_r). The definition of plethystic evaluation on power sum symmetric functions extends to all symmetric functions F ∈Λ by requiring F → F[G] be a ℚ(q,t)-algebra homomorphism. Note that for F ∈Λ, F = F[x_1+x_2+…] and so we will often write F = F[X] where X:= x_1+ x_2 + …. For a partition λ define the monomial symmetric function m_λ by m_λ := ∑_μ x^μ where we range over all distinct monomials x^μ such that σ(μ) = λ for some permutation σ. For n ≥ 0 define the complete homogeneous symmetric function h_n by h_n:= ∑_|λ|= n m_λ . We can extend plethysm to ℚ(q,t)[[p_1,p_2,…]]. The plethystic exponential is defined to be the element of ℚ(q,t)[[p_1,p_2,…]] given by Exp[X]:= ∑_n ≥ 0 h_n[X]. Here we list some notable properties of the plethystic exponential which will be used later in this paper. * Exp[0] = 1 * Exp[X+Y]= Exp[X]Exp[Y] * Exp[x_1+x_2+…] = ∏_i=1^∞( 1/1-x_i) * Exp[(1-t)(x_1+x_2+…)] = ∏_i=1^∞( 1-tx_i/1-x_i) Here we give a few examples of plethystic evaluation. * p_3[1+5t+qt^2] = 1+5t^3 + q^3t^6 * s_2[(1-t)X] = (p_2+p_1,1/2)[(1-t)X] = (1-t^2)p_2[X]+(1-t)^2p_1,1[X]/2 * Exp[t/1-t]= ∏_n=1^∞(1/1-t^n) §.§.§ Hall-Littlewood Symmetric Functions For the purposes of this paper we need the following explicit collection of symmetric functions. For n ≥ 0 define the Jing vertex operator ℬ_n ∈ End_ℚ(q,t)(Λ) by ℬ_n[F] : = ⟨ z^n ⟩ F[X-z^-1]Exp[(1-t)zX]. Here ⟨ z^n ⟩ denotes the operator which extracts the coefficient of z^n of any formal series in z. For a partition λ = (λ_1,...,λ_r) define the Hall-Littlewood symmetric function, 𝒫_λ, by 𝒫_λ:= ℬ_λ_1⋯ℬ_λ_r(1). Note that the operator ℬ_n is graded with degree n. The definition of the Hall-Littlewood symmetric functions in this paper matches with <cit.> and <cit.> but differs from that of other authors. As we will see later in Proposition <ref> the 𝒫_λ[X] are the same as the dual Hall-Littlewood symmetric functions Q_λ[X;t] defined by Macdonald <cit.>. These symmetric functions have the following useful properties. * 𝒫_λ is homogeneous with degree |λ| * 𝒫_(n)[X] = h_n[(1-t)X] * If n ≥λ_1 then ℬ_n(𝒫_λ) = 𝒫_n*λ * ℬ_0(𝒫_λ) = t^ℓ(λ)𝒫_λ Lastly, it is a classical result that the collection {𝒫_λ|λ∈} is a basis of Λ. §.§ Stable-Limit DAHA of Ion and Wu As the index n varies, the standard ℋ_n representations, _n, fail to form a direct/inverse system of compatible ℋ_n representations. 
However, as the authors Ion and Wu investigate in <cit.>, this sequence of representations is compatible enough to allow for the construction of a limiting representation for a new algebra resembling a direct limit of the double affine Hecke algebras of type GL. We will start by giving the definition of this algebra. <cit.> The ^+stable-limit double affine Hecke algebra of Ion and Wu, ℋ^+, is the algebra generated over ℚ(q,t) by the elements T_i,X_i,Y_i for i ≥ 1 satisfying the following relations: * The generators T_i,X_i for i ∈ℕ satisfy (<ref>) and (<ref>) of Defn. <ref>. * The generators T_i,Y_i for i ∈ℕ satisfy (<ref>) and (<ref>) of Defn. <ref>. * Y_1T_1X_1 = X_2Y_1T_1. Importantly, there is no relation of the form Y_1X_1⋯ X_n = q X_1⋯ X_n Y_1 in ℋ^+. As such there is no invertible 'ω' element in ℋ^+ which in ℋ_n normally realizes the cyclic symmetry of the affine type A root systems. <cit.> Let 𝒫_∞^+ denote the inverse limit of the rings 𝒫_k^+ with respect to the homomorphisms π_k: _k+1^+→_k^+ which send x_k+1 to 0 at each step. We can naturally extend π_k to a map 𝒫_∞^+→𝒫_k which will be given the same name. Let 𝒫(k)^+ := ℚ(q,t)[x_1,…,x_k]⊗Λ[x_k+1+x_k+2+…]. Define the ring of almost symmetric functions by 𝒫_as^+ := ⋃_k≥ 0𝒫(k)^+. Note 𝒫_as^+⊂𝒫_∞^+. Define ρ: 𝒫_as^+→ x_1𝒫_as^+ to be the linear map defined by ρ(x_1^a_1⋯ x_n^a_nF[x_m+x_m+1+…]) = 1(a_1 > 0) x_1^a_1⋯ x_n^a_nF[x_m+x_m+1+…] for F ∈Λ. Note that ρ restricts to maps _n → x_1_n which are compatible with the quotient maps π_n. The ring _as^+ is a free graded Λ-module with homogeneous basis given simply by the set of monomials x^μ with μ reduced. Therefore, _as^+ has the homogeneous ℚ(q,t) basis given by all x^μm_λ[X] ranging over all reduced compositions μ and partitions λ. Further, the dimension of the homogeneous degree d part of 𝒫(k)^+ is equal to the number of pairs (μ,λ) of reduced compositions μ and partitions λ with |μ|+|λ| = d and ℓ(μ) ≤ k. In order to define the operators required for Ion and Wu's main construction we must first review the new definition of convergence introduced in <cit.>. <cit.> Let (f_m)_m ≥ 1 be a sequence of polynomials with f_m ∈𝒫_m^+. Then the sequence (f_m)_m ≥ 1 is convergent if there exist some N and auxiliary sequences (h_m)_m≥1, (g^(i)_m)_m≥ 1, and (a^(i)_m)_m≥ 1 for 1≤ i ≤ N with h_m, g^(i)_m ∈𝒫_m^+, a^(i)_m ∈ℚ(q,t) with the following properties: * For all m, f_m = h_m + ∑_i=1^N a^(i)_m g^(i)_m. * The sequences (h_m)_m≥1, (g^(i)_m)_m≥1 for 1≤ i ≤ N converge in 𝒫_∞^+ with limits h,g^(i) respectively. That is to say, π_m(h_m+1) = h_m and π_m(g_m+1^(i)) = g_m^(i) for all 1≤ i ≤ N and m ≥ 1. Further, we require g^(i)∈𝒫_as^+. * The sequences a^(i)_m for 1≤ i ≤ N converge with respect to the t-adic topology on ℚ(q,t) with limits a^(i) which are required to be in ℚ(q,t). The sequence is said to have a limit given by lim_m f_m = h + ∑_i=1^Na^(i)g^(i). This definition of convergence is a mix of both the stronger topology arising from the inverse system given by the maps π_m and the t-adic topology arising from the ring ℚ(q,t). It is important to note that part of the above definition requires convergent sequences to always be written as a finite sum of fixed length with terms that converge independently. Here we list a few instructive examples of convergent sequences and their limits: * lim_m t^m = 0 * lim_m 1+… +t^m = 1/1-t * lim_m1/q^2-t^m(x_3^2 +…+ x_m^2 ) = q^-2 p_2[x_3+…]. 
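As a quick sanity check of this notion of convergence, the last example above can be unpacked as follows (our own verification, written in LaTeX for readability; it is not an excerpt from the paper):

\begin{align*}
\frac{1}{q^{2}-t^{m}} &= q^{-2}\cdot\frac{1}{1-t^{m}q^{-2}}
  = q^{-2}\bigl(1+t^{m}q^{-2}+t^{2m}q^{-4}+\cdots\bigr)
  \xrightarrow{\;m\to\infty\;} q^{-2} \quad\text{($t$-adically)},\\
x_{3}^{2}+\cdots+x_{m}^{2} &\xrightarrow{\;m\to\infty\;} p_{2}[x_{3}+x_{4}+\cdots]
  \quad\text{(with respect to the maps } \pi_{m}\text{)},
\end{align*}

so the sequence converges, in the sense of the definition above (with N = 1), to q^-2 p_2[x_3+x_4+⋯].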
In this paper we will be entirely concerned with convergent sequences (f_m)_m ≥ 1 with almost symmetric limits lim_m f_m ∈𝒫_as^+. In this case it follows readily from the definition that each of these convergent sequences necessarily has the form f_m(x_1,…,x_m) = ∑_i=1^N c_i^(m)x^μ^(i)F_i[x_1+… + x_m] where N ≥ 1 is fixed, c_i^(m) are convergent sequences of scalars with lim_m c_i^(m)∈ℚ(q,t), F_i are symmetric functions, and μ^(i) are compositions. Here we will consider x^μ^(i) = 0 in 𝒫_m whenever ℓ(μ^(i)) > m. <cit.> For m ≥ 1 suppose A_m is an operator on 𝒫_m^+. The sequence (A_m)_m ≥ 1 of operators is said to converge if for every f ∈𝒫_as^+ the sequence (A_m(π_m(f)))_m ≥ 1 converges to an element of 𝒫_as^+. From <cit.> the corresponding operator on 𝒫_as^+ given by A(f):= lim_m A_m(π_m(f)) is well defined and said to be the limit of the sequence (A_m)_m ≥ 1. In this case we will simply write A = lim_m A_m. There are two important examples of convergent operator sequences which will be relevant for the rest of this paper. For all i ≥ 1 and m ≥ 1 let X_i^(m) denote the operator on 𝒫_m^+ given by 0 if m < i and by X_i^(m)f = x_i f if i ≤ m. Similarly for i ≥ 1 and m ≥ 1 let T_i^(m) denote the operator on 𝒫_m^+ given by 0 if m-1 < i and by T_i^(m)f = s_if + (1-t)x_i (f-s_if)/(x_i-x_i+1) if i ≤ m-1. Then for all i ≥ 1 it is immediate from the definition that the sequences (X_i^(m))_m ≥ 1 and (T_i^(m))_m ≥ 1 converge to operators X_i and T_i respectively on 𝒫_as^+. Further, their corresponding actions are given for f ∈𝒫_as^+ simply by * X_i(f) = x_if * T_i(f) = s_if + (1-t)x_i (f-s_if)/(x_i-x_i+1). The following important technical proposition of Ion and Wu will be used repeatedly in this paper. If A = lim_m A_m and f = lim_m f_m are limit operators and limit functions respectively then A(f) = lim_m A_m(f_m). This is a sort of continuity statement for convergent sequences of operators. The utility of the above proposition is that for an operator arising as the limit of finite variable operators, A = lim_m A_m say, we can use any sequence (f_m)_m ≥ 1 converging to f ∈𝒫_as^+ in order to calculate A(f). §.§.§ The Standard +Stable-Limit DAHA Representation Ion and Wu begin their construction of the standard representation of ℋ^+ by noting the following key fact. <cit.> For n ≥ 1, π_n-1 t^n Y_1^(n)X_1 = t^n-1 Y_1^(n-1)X_1π_n-1. In other words, the actions of the operators t^nY_1^(n) and t^n-1Y_1^(n-1) are compatible on x_1𝒫_n. As such there exists a limit operator Y_1^(∞): x_1𝒫^+_∞→ x_1𝒫^+_∞ such that π_nY_1^(∞) = t^nY_1^(n)π_n. A crucial idea of Ion and Wu is to extend the action of the operators t^n Y_1^(n) on x_1𝒫_n to all of 𝒫_n using the previously defined projection ρ:𝒫_n→ x_1𝒫_n. <cit.> Define the operator Ỹ_1^(n) := ρ∘ t^n Y_1^(n). For 2 ≤ i ≤ n define Ỹ_i^(n) by requiring Ỹ_i^(n) = t^-1T_i-1Ỹ_i-1^(n)T_i-1. A direct check shows that Ỹ_1^(n)X_1 = t^n Y_1^(n)X_1, so that Ỹ_1^(n) extends the action of t^n Y_1^(n) on x_1𝒫_n as desired. The main utility of this specific choice of definition is the following theorem. <cit.> The sequence (Ỹ_1^(m))_m ≥ 1 converges to an operator 𝒴_1 on 𝒫_as^+. Define the operators 𝒴_i for i ≥ 2 by 𝒴_i:= t^-1T_i-1𝒴_i-1T_i-1. The operators 𝒴_i, along with the Demazure-Lusztig action of the T_i's and multiplication by the X_i's, generate an ℋ^+ action on 𝒫_as^+. In particular, the authors Ion and Wu show that, despite the fact that for 1 ≤ i ≠ j ≤ n the deformed operators satisfy Ỹ_i^(n)Ỹ_j^(n)≠Ỹ_j^(n)Ỹ_i^(n) in general, the limit Cherednik operators commute: 𝒴_i𝒴_j = 𝒴_j𝒴_i.
The action of the _i operators respect the canonical filtration of _as^+ = ⋃_k≥ 0(k)^+. For all n≥ 0, the operators {_1,...,_n} restrict to operators on the space (n)^+ whereas the operators {_n+1,_n+2,...} annihilate (n)^+. Note that for n = 0, (0)^+ = Λ so all of the operators _i annihilate Λ. §.§ Double Dyck Path Algebra The Double Dyck Path Algebra 𝔸_q,t, introduced by Carlsson and Mellit <cit.>, is a quiver path algebra with vertices indexed by non-negative integers with the following edge operators: * d_+,d_+^*: k → k+1 * T_1,...,T_k-1: k → k * d_-: k+1 → k. The full set of relations for 𝔸_q,t are omitted here but can be found in <cit.>. In order to match the parameter conventions in Ion and Wu's work <cit.> we will consider 𝔸_t,q as opposed to 𝔸_q,t formed by simply swapping q and t in the defining relations of 𝔸_q,t. Here we highlight a few notable relations of 𝔸_t,q which will be required later: * The loops T_1,...,T_k-1 at vertex k≥ 2 generate a type A finite Hecke algebra * d_-^2T_k-1 = d_-^2 starting at vertex k ≥ 2 * T_id_- = d_-T_i at vertex k for 1≤ i≤ k-2 * z_id_-= d_-z_i at vertex k for 1≤ i ≤ k-1 where z_1 := t^k/1-t[d_+^*,d_-]T_k-1^-1⋯ T_1^-1 and z_i+1 = t^-1T_iz_iT_i. §.§.§ The Standard 𝔸_t,q Representation and the +Stable-Limit DAHA Vital to the proof of the Compositional Shuffle Conjecture by Carlsson and Mellit <cit.> is their construction of a particular representation of 𝔸_t,q. <cit.> For k ≥ 0 let V_k = ℚ(q,t)[y_1,…,y_k]⊗Λ be associated to the vertex k and denote by V_∙ be the system of spaces V_k. Let ζ_k denote the algebra homomorphism ζ_kf(y_1,…,.y_k-1,y_k) = f(y_2,…,y_k,qy_1). If f is a formal series with respect to the variable y with coefficients in some ring R denote by 𝔠_y(f) ∈ R the constant term of f i.e. the coefficient of y^0 in f. Note that each 𝔖_k acts on V_k by permuting the variables y_1,...,y_k. Define the following operators: * T_iF = s_iF + (1-t)y_i F-s_iF/y_i-y_i+1 * d_-F = 𝔠_y_k(F[X-(t-1)y_k]Exp[-y_k^-1X]) * d_+F = -T_1⋯ T_k (y_k+1F[X+(t-1)y_k+1]) * d_+^*F = ζ_kF[X+(t-1)y_k+1]. <cit.> The above operators define a representation of 𝔸_t,q on V_∙. Ion and Wu use their construction of the standard ℋ^+ representation _as^+ to recover the standard 𝔸_t,q representation V_∙. <cit.> There exists an 𝔸_t,q representation structure on _∙ = ((k)^+)_k≥ 0 isomorphic to the standard representation V_∙ such that at each vertex k, z_i acts by _i and y_i acts by X_i. Further, according to this isomorphism (k)^+ is identified with V_k via the map x_1^a_1⋯ x_k^a_kF[x_k+1+…] → y_1^a_1⋯ y_k^a_kF[X/t-1]. § STABLE-LIMITS OF NON-SYMMETRIC MACDONALD POLYNOMIALS We start by investigating the properties of certain sequences of non-symmetric Macdonald polynomials. We will find that if we fix any composition μ and consider the sequence of compositions (μ*0^m)_m≥ 0 the corresponding sequence of non-symmetric Macdonald polynomials (E_μ*0^m)_m≥ 0 will converge in the sense of Definition <ref>. It is important to note that in most cases the sequence (E_μ*0^m)_m≥ 0 will not converge with respect to the inverse system (π_k: _k+1→_k)_k≥ 1. This should be expected because the spectra of the Cherednik operators acting on _k+1 are incompatible with the spectra from the Cherednik operators acting on _k. However, by using the HHL explicit combinatorial formula for the non-symmetric Macdonald polynomials we show that the combinatorics of non-attacking labellings underlying the sequence (E_μ*0^m)_m≥ 0 converge in a certain sense. 
The weaker convergence notion introduced by Ion and Wu is consistent with these combinatorics. For our purposes later in this paper we will heavily rely on the convergence of these sequences as a bridge between the limit Cherednik operators _i and their classical counterparts. We now show the convergence of the sequence (E_μ*0^m)_m ≥ 0. First, we describe a convenient rearrangement of the monomials in each E_μ*0^m. Let μ be a composition with ℓ(μ) = n and m ≥ 0. Then E_μ * 0^m has the explicit expression given by E_μ * 0^m = ∑_λ partition |λ| ≤ |μ| m_λ[x_n+1+… + x_n+m] ∑_σ:μ * 0^ℓ(λ)→ [n+ℓ(λ)] non-attacking ∀ i = 1,...,ℓ(λ) λ_i = |σ^-1(n+i)| x_1 ^|σ^-1(1)|⋯ x_n ^|σ^-1(n)|Γ^(m) (σ) where Γ^(m)(σ) := q^-(σ)t^(σ)∏_ u ∈ dg'(μ * 0^ℓ(λ)) σ(u) ≠σ(d(u)) u  not in row 1 ( 1-t/1-q^-((u)+1)t^(a(u)+1)) ∏_ u ∈ dg'(μ * 0^ℓ(λ)) σ(u) ≠σ(d(u)) u  in row 1 ( 1-t/1-q^-((u) +1)t^(a(u) + m + 1)). First, start with directly applying the HHL formula (<ref>): E_μ * 0^m = ∑_σ: μ * 0^m → [n + m] non-attacking x^σq^-(σ)t^(σ)∏_u ∈ dg'(μ * 0^m) σ(u) ≠σ(d(u))( 1-t/1-q^-((u)+1)t^(a(u)+1)) . We know that E_μ * 0^m is symmetric in the variables x_n+1,...,x_n+m <cit.> so it follows that the ℚ(q,t)[x_1,...,x_n]-coefficient of each monomial in x_n+1,...,x_n+m is independent of the ordering of the latter variables. Hence, we find that by grouping these monomials by symmetry E_μ * 0^m = ∑_λ m_λ[x_n+1+...+x_n+m] ∑_σ: μ * 0^m → [n + m] non-attacking ∀ i  λ_i = |σ^-1(n+i)| x_1 ^|σ^-1(1)|⋯ x_n ^|σ^-1(n)| q^-(σ)t^(σ) × ∏_u ∈ dg'(μ) σ(u) ≠σ(d(u))( 1-t/1-q^-((u)+1)t^(a(u)+1)). Note that by degree considerations the only possible partitions λ that have a nonzero contribution to the above sum have |λ| ≤ |μ| and hence we can rewrite the above sums as ∑_λ∑_σ: μ * 0^m → [n + m] non-attacking ∀ i  λ_i = |σ^-1(n+i)| = ∑_λ partition |λ| ≤ |μ|∑_σ:μ* 0^m → [n+ℓ(λ)] non-attacking ∀ i  λ_i = |σ^-1(n+i)| . In the latter sum above we have written each σ as a non-attacking labelling σ: μ*0^m → [n+ℓ(λ)] to emphasize that the numbers occurring in this labelling are contained in the set [n+ℓ(λ)] which is independent of m. However, these are still considered labellings of the diagram corresponding to μ*0^m and hence we calculate the corresponding q,t coefficients in the HHL formula accordingly. We must now understand the dependence on m of the statistics , , , and a in each of the non-attacking labellings σ: μ*0^m → [n+ℓ(λ)] as m varies. Fix a non-attacking labelling σ: μ * 0^k → [n+k] for some k ≤ m and let σ_m be the associated labelling of μ * 0^m. Recall that (σ) = ∑_u ∈(σ) ((u) + 1) and similarly for (σ_m). The only descent boxes of σ_m occur in the diagram dg'(μ) itself and (u) for these boxes will not depend on m. Therefore, (σ_m) = (σ). For u ∈ dg'(μ * 0^m) clearly u ∈ dg'(μ) and by direct computation we see that when u is not in row 1 then a(u) does not depend on m. However, for u in row 1 a(u) when calculated in the diagram dg(μ) increases to a(u)+m when calculated in the diagram dg(μ*0^m). This comes from counting the extra row 0 boxes for each box in row 1. Also note that in any non-attacking labelling there cannot be descent boxes in row 1. Now from careful counting we get the following: * |(σ_m)| = |(σ)| + (n+k)(m-k) + m-k2 * |{i<j : (μ * 0^m)_i ≤ (μ * 0^m)_j } | = | {i<j : (μ*0^k)_i ≤ (μ*0^k)_j } | + (#{i:μ_i = 0} +k)(m-k) + m-k2 * ∑_u ∈(σ_m) a(u) = ∑_u ∈(σ) a(u). 
By using the above calculations and cancelling out terms we get (σ_m) = |(σ_m)| - | {i<j: (μ * 0^m)_i ≤ (μ * 0^m)_j } | - ∑_u ∈(σ_m) a(u) = |(σ)| - | {i<j: (μ*0^k)_i ≤ (μ*0^k)_j } | -∑_u ∈(σ) a(u) + (n- #{i:μ_i = 0})(m-k) = (σ) +#{i:μ_i ≠ 0}(m-k). Further, from the prior observation about how arm, a(u), changes with m we see that ∑_u ∈ dg'(μ * 0^m) a(u) = #{i:μ_i ≠ 0}(m-k) + ∑_u ∈ dg'(μ*0^k) a(u) where arm has been calculated in the corresponding diagrams. We then have (σ_m) = ( ∑_u ∈ dg'(μ * 0^m) a(u) ) - (σ_m) = (#{i:μ_i ≠ 0}(m-k) + ∑_u ∈ dg'(μ*0^k) a(u)) - ( (σ) + #{i:μ_i ≠ 0}(m-k)) = ( ∑_u ∈ dg'(μ*0^k) a(u)) - (σ) = (σ). Thus (σ_m) = (σ) and (σ_m)= (σ). Lastly, we return to the expansion of E_μ*0^m we found above. For each partition λ with |λ| ≤ |μ| we now see that ∑_σ:μ* 0^m → [n+ℓ(λ)] non-attacking ∀ i  λ_i = |σ^-1(n+i)| x_1 ^|σ^-1(1)|⋯ x_n ^|σ^-1(n)| q^-(σ)t^(σ)∏_u ∈ dg'(μ) σ(u) ≠σ(d(u))( 1-t/1-q^-((u)+1)t^(a(u)+1)) = ∑_σ:μ* 0^ℓ(λ)→ [n+ℓ(λ)] non-attacking ∀ i  λ_i = |σ^-1(n+i)| x_1 ^|σ^-1(1)|⋯ x_n ^|σ^-1(n)|Γ^(m)(σ). where Γ^(m)(σ) := q^-(σ)t^(σ)∏_ u ∈ dg'(μ * 0^ℓ(λ)) σ(u) ≠σ(d(u)) u  not in row 1 ( 1-t/1-q^-((u)+1)t^(a(u)+1)) ∏_ u ∈ dg'(μ * 0^ℓ(λ)) σ(u) ≠σ(d(u)) u  in row 1 ( 1-t/1-q^-((u) +1)t^(a(u) + m + 1)). and we calculate all of the associated statistics in their respective diagrams. Now that we have conveniently rearranged the monomial terms of each E_μ*0^m and identified the dependence of the coefficients on the parameter m we can give a simple proof that the sequence (E_μ*0^m)_m ≥ 0 converges. \begincor/defn Let μ be a composition with ℓ(μ) = n. The sequence (E_μ * 0^m)_m ≥ 1 converges to an almost-symmetric function E_μ:= lim_m E_μ * 0^m∈_as^+ given explicitly by E_μ = ∑_λ partition |λ| ≤ |μ| m_λ[x_n+1+…] ∑_σ:μ * 0^ℓ(λ)→ [n+ℓ(λ)] non-attacking ∀ i = 1,...,ℓ(λ) λ_i = |σ^-1(n+i)| x_1 ^|σ^-1(1)|⋯ x_n ^|σ^-1(n)|Γ(σ) where Γ(σ) := lim_mΓ^(m)(σ) = q^-(σ)t^(σ)∏_ u ∈ dg'(μ * 0^ℓ(λ)) σ(u) ≠σ(d(u)) u  not in row 1 ( 1-t/1-q^-((u)+1)t^(a(u)+1)) ∏_ u ∈ dg'(μ * 0^ℓ(λ)) σ(u) ≠σ(d(u)) u  in row 1 ( 1-t ). \endcor/defn Note that the formula in Theorem <ref> is a fixed size finite sum where the only dependence on m is in the m_λ symmetric function terms and the t^m occurring in the Γ^(m) terms. Thus in the sense of Ion and Wu, see Definition <ref>, this sequence converges to a well defined element of _as^+. In particular, each m_λ[x_n+1+… +x_n+m] converges to m_λ[x_n+1+…] and t^m converges to 0 in the Γ-term. Simplifying gives the formula above. It follows from Corollary <ref> that the almost symmetric functions E_μ are homogeneous of degree |μ| and E_μ∈(ℓ(μ))^+. Note importantly, that for any composition μ (not necessarily reduced) and any n ≥ 0, by shifting the terms of the sequence (E_μ*0^m)_m≥ 0 we see that E_μ *0^n = E_μ. Let λ be a partition with ℓ(λ) = n and |λ| = N. Then E_λ is determined by E_λ*0^N∈_n+N^+. That is to say, if E_λ*0^N(x_1,...,x_n+N) = c_1 x^μ^(1)m_ν^(1)[x_n+1+...+x_n+N] +...+ c_k x^μ^(k)m_ν^(k)[x_n+1+...+x_n+N] then E_λ = c_1 x^μ^(1)m_ν^(1)[x_n+1+...] +...+ c_k x^μ^(k)m_ν^(k)[x_n+1+...] . As λ is a partition, row 1 of any non-attacking labelling of λ must be 1,2,,...,ℓ(λ). Thus no boxes of dg'(λ) in row 1 will have σ(u) ≠σ(d(u)) and so there will be no contributions from any of the terms of the form ∏_ u ∈ dg'(λ) σ(u) ≠σ(d(u)) u  row 1 ( 1-t/1-q^-((u) +1)t^(a(u) + m + 1)). Further, from Corollary <ref> it is clear that these are the only coefficients that depend on m in the limit. 
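As a small illustration of this corollary, consider the two compositions of size one with at most two parts. The following computation is ours, included only as an example (it follows from the Bruhat-triangularity of the E_{μ*0^m} and from the combinatorial formula above, and should be read with this paper's conventions in mind):

\begin{align*}
E_{(1)\ast 0^{m}} &= x_{1} \quad\text{for every } m\geq 0, &
&\text{hence } E_{(1)} = \lim_{m} E_{(1)\ast 0^{m}} = x_{1},\\
E_{(0,1)\ast 0^{m}} &= x_{2} + \frac{1-t}{1-q^{-1}t^{m+1}}\,x_{1}, &
&\text{hence } E_{(0,1)} = \lim_{m} E_{(0,1)\ast 0^{m}} = x_{2} + (1-t)\,x_{1}.
\end{align*}

In the first line no other monomial can occur since (1,0^m) is minimal in Bruhat order among the degree-one compositions; in the second line the coefficient of x_1 comes from the single row-1 box labelled 1, whose factor tends to 1-t in the limit, exactly as in the formula for Γ(σ).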
Also it follows that each term of the form x^μm_ν[x_n+1+…] that occurs in the expansion of E_λ appears at least by the m = N step of the limit. From these two facts it follows that the expansion of E_λ will match that of E_λ*0^N(x_1,...,x_n+N) up to truncating each m_ν[x_n+1+…] to m_ν[x_n+1+… + x_n+N] using π_n+N. § 𝒴-WEIGHT BASIS OF 𝒫_AS^+ Given a family of commuting operators { y_i : i ∈ I } and a weight vector v we denote its weight by the function α: I →ℚ(q,t) such that y_iv = α(i)v. We sometimes denote α as (α_1,α_2,…). §.§ The E_μ are 𝒴-Weight Vectors In what follows, the classical spectral theory for non-symmetric Macdonald polynomials is used to demonstrate that the limit functions E_μ are 𝒴-weight vectors. The below lemma is a simple application of this classical theory and basic properties of the t-adic topology on ℚ(q,t). For a composition μ with ℓ(μ) = n define α_μ^(m) to be the Y^(n+m)-weight of E_μ *0^m. Then in the t-adic topology on ℚ(q,t) the sequence (t^n+mα_μ^(m)(i))_m≥ 0 converges in m to some α_μ(i) ∈ℚ(q,t). In particular, α_μ(i) = 0 for i > n and for 1≤ i≤ n we have that α_μ(i) = 0 exactly when μ_i = 0. Take μ = (μ_1,…,μ_n). From classical double affine Hecke algebra theory <cit.> we have α_μ^(0)(i) = q^μ_it^1-β_μ(i) where β_μ(i) := #{j: 1≤ j ≤ i  , μ_j ≤μ_i} + #{j: i < j ≤ n  , μ_i > μ_j}. If we calculate β_μ*0^m(i) directly it follows then that t^n+mα_μ^(m)(i) = q^μ_it^n+m+1-(β_μ(i) + m 1(μ_i ≠ 0)) = t^nα_μ^(0)(i) i≤ n, μ_i ≠ 0 q^μ_it^n+m+1-(β_μ(i) + m 1(μ_i ≠ 0)) = t^n+mα_μ^(0)(i) i≤ n, μ_i = 0 t^n+m+1-(#{j:μ_j =  0} + i-n) = t^#{j:μ_j ≠ 0}t^m+1-(i-n) i > n . Lastly, by taking the limit m→∞ we get the result. For a composition μ define the weight α_μ using the formula in Lemma <ref> for the list of scalars α_μ(i) for i ∈ℕ. For a composition μ = (μ_1,…,μ_n) with μ_i ≠ 0 for 1≤ i ≤ n, E_μ is a -weight vector with weight α_μ. Fix any r ∈ℕ. We start by rewriting the operator _r explicitly in terms of the limit definition of _1. _r = t^-(r-1)T_r-1⋯ T_1 _1 T_1 ⋯ T_r-1 = t^-(r-1)T_r-1⋯ T_1 lim_k t^kρω_k ^-1 T_k-1^-1⋯ T_1^-1T_1⋯ T_r-1π_k = lim_k t^k T_r-1⋯ T_1 ρ t^-(r-1)ω_k ^-1 T_k-1^-1⋯ T_r^-1π_k = lim_k t^k T_r-1⋯ T_1 ρ T_1^-1⋯ T_r-1^-1 t^-(r-1) T_r-1⋯ T_1ω_k ^-1 T_k-1^-1⋯ T_r^-1π_k = lim_k t^k T_r-1⋯ T_1 ρ T_1^-1⋯ T_r-1^-1 Y_r^(k)π_k. Applying _r to E_μ we see by taking k = n+m ≥ n and shifting the indices that _r(E_μ) = lim_m t^n+m T_r-1⋯ T_1 ρ T_1^-1⋯ T_r-1^-1 Y_r^(n+m)(E_μ*0^m) = lim_m T_r-1⋯ T_1 ρ T_1^-1⋯ T_r-1^-1 t^n+mα_μ^(m)(r) E_μ*0^m and by Lemma <ref> this converges to _r( E_μ) = α_μ(r) (T_r-1⋯ T_1 ρ T_1^-1⋯ T_r-1^-1) E_μ . Importantly, we have implicitly used the fact that both of the sequences (E_μ*0^m)_m and (α_μ^(m)(r))_m converge, that the operator T_r-1⋯ T_1 ρ T_1^-1⋯ T_r-1^-1 commutes with the quotient maps π_k: _k+1→_k for k> r, and Proposition 6.21 in <cit.>. We will show that the right side is α_μ(r)E_μ. As α_μ(r) = 0 for r > n by Lemma <ref> we reduce to the sub case r ≤ n. Fix r ≤ n. If we could show that x_1 divides T_1^-1⋯ T_r-1^-1E_μ then we would have ρ(T_1^-1⋯ T_r-1^-1E_μ) = T_1^-1⋯ T_r-1^-1E_μ implying that _r(E_μ) = α_μ(r) (T_r-1⋯ T_1 ρ T_1^-1⋯ T_r-1^-1) E_μ = α_μ(r) T_r-1⋯ T_1 T_1^-1⋯ T_r-1^-1) E_μ = α_μ(r)E_μ as desired. To show that x_1 | T_1^-1⋯ T_r-1^-1E_μ it suffices to show that for all m ≥ 0, x_1 | T_1^-1⋯ T_r-1^-1 E_μ *0^m. To this end fix m ≥ 0. We have that α_μ^(m)(r)E_μ*0^m = Y_r^(n+m)(E_μ*0^m) = t^n+m-r+1T_r-1⋯ T_1 ω_n+m^-1 T_n+m-1^-1⋯ T_r^-1 E_μ*0^m. 
Since α_μ^(m)(r) ≠ 0 we can have 1/α_μ^(m)(r) T_1 ^-1⋯ T_r-1^-1 act on both sides of the above to get T_1^-1⋯ T_r-1^-1 E_μ*0^m = t^n+m-r+1/α_μ^(m)(r)ω_n+m^-1 T_n+m-1^-1⋯ T_r^-1 E_μ*0^m. By HHL any non-attacking labelling of μ* 0^m will have row 1 diagram labels given by {1,2,…,n} so in particular x_r divides E_μ*0^m for all m > 0. Lastly, ω_n+m^-1 T_n+m-1^-1⋯ T_r^-1X_r = ω_n+m^-1 t^-(n+m-r)X_n+mT_n+m-1⋯ T_r = qt^-(n+m-r) X_1 ω_n+m^-1 T_n+m-1⋯ T_r. Thus x_1 divides T_1^-1⋯ T_r-1^-1 E_μ*0^m for all m ≥ 0 showing the result. Now we consider the general situation where the composition μ can have some parts which are 0. We can extend the above result, Lemma <ref>, by a straight-forward argument using intertwiner theory from the study of affine Hecke algebras. For all compositions μ, E_μ is a -weight vector with weight α_μ. Lemma <ref> shows that this statement holds for any composition with all parts nonzero. Fix a composition μ with length n. We know that by sorting in decreasing order that μ can be written as a permutation of a composition of the form ν * 0^m for a partition ν and some m ≥ 0. From the definition of Bruhat order it follows that ν*0^m will be the minimal element out of all of its distinct permutations, including μ. Necessarily, this finite subposet generated by the permutations of ν*0^m is isomorphic to the Bruhat ordering on the coset space 𝔖_n/𝔖_κ where 𝔖_κ is the Young subgroup of 𝔖_n corresponding to the stabilizer of ν*0^m. Hence, it suffices to show inductively that for any composition β with ν*0^m ≤β < s_i(β) ≤μ, if E_β satisfies the theorem then so will E_s_i(β). As μ is finitely many covering elements away in Bruhat from ν*0^m this induction will indeed terminate after finitely many steps. Using the intertwiner operators from affine Hecke algebra theory, given by φ_i = T_i_i-_iT_i in this context, we only need to show that for any composition β with ν*0^m ≤β < s_i(β) ≤μ, φ_i E_β = (α_β(i) - α_β(i+1))E_s_i(β). Suppose the theorem holds for some β with ν*0^m ≤β < s_i(β) ≤μ. Then we have the following: φ_iE_β = (T_i(_i - _i+1) + (1-t)_i+1)E_β = (α_β(i)- α_β(i+1))T_iE_β + (1-t)α_β(i+1)E_β = lim_m (t^n+mα_β^(m)(i)- t^n+mα_β^(m)(i+1))T_iE_β *0^m + (1-t)t^n+mα_β^(m)(i+1)E_β *0^m = lim_m(t^n+mα_β^(m)(i)- t^n+mα_β^(m)(i+1))E_s_i(β) *0^m = (α_β(i) - α_β(i+1))E_s_i(β). As an immediate consequence of the proof of Theorem <ref> we have the following. Let μ be a composition and i ≥ 1 such that s_i(μ) > μ. Then E_s_i(μ) = (T_i + (1-t)α_μ(i+1)/α_μ(i) - α_μ(i+1)) E_μ. We have shown in Theorem <ref> there is an explicit collection of 𝒴-weight vectors E_μ in 𝒫_as^+ arising as the limits of non-symmetric Macdonald polynomials E_μ * 0^m. Unfortunately, these E_μ do not span 𝒫_as^+. To see this note that one cannot write a non-constant symmetric function as a linear combination of the E_μ. However, in the below work we build a full -weight basis of _as^+. §.§ Constructing a Full -Weight Basis §.§.§ Defining the Stable-Limit Non-symmetric Macdonald Functions To complete our construction of a full weight basis of 𝒫_as^+ we will need the ∂_-^(k) operators from Ion and Wu. These operators are, up to a change of variables and plethsym, the d_- operators from Carlsson and Mellit's standard 𝔸_t,q representation. <cit.> Define the operator ∂_-^(k): 𝒫(k)^+→𝒫(k-1)^+ to be the 𝒫_k-1^+-linear map which acts on elements of the form x_k^nF[x_k+1+x_k+2+…] for F ∈Λ and n ≥ 0 as ∂_-^(k)(x_k^nF[x_k+1+x_k+2+…]) = ℬ_n(F)[x_k + x_k+1 +…]. 
Here the ℬ_n are the Jing operators which serve as creation operators for Hall-Littlewood symmetric functions 𝒫_λ given explicitly by the following plethystic formula: ℬ_n(F)[X] = ⟨ z^n ⟩ F[X-z^-1]Exp[(1-t)zX] . Importantly, the ∂_-^(k) operators do not come from the ℋ^+ action itself. Note that the ∂_-^(k) operators are homogeneous by construction. We will require the useful alternative expression for the ∂_-^(k) operators which can be found in <cit.>. Recall the notation 𝔠_y from Definition <ref>. Let τ_k denote the alphabet shift 𝔛_k →𝔛_k-1 acting on symmetric functions where 𝔛_i : = x_i+1+x_i+2+.... Then for f ∈_k and F ∈Λ ∂_-^(k)(f(x_1,… x_k) F[𝔛_k]) = τ_k 𝔠_x_k f(x_1,...,x_k)F[𝔛_k -x_k]Exp[-(t-1)x_k^-1𝔛_k]. <cit.>. As an immediate consequence of this explicit description of the action of the ∂_-^(k) operator we get the following required lemmas. <cit.> The map ∂_-^(k): (k)^+→(k-1)^+ is a projection onto (k-1)^+ i.e. for f ∈(k-1)^+⊂(k)^+ we have that ∂_-^(k)(f) = f. Fix F ∈Λ. It suffices to show that ∂_-^(k)(F[𝔛_k-1]) = F[𝔛_k-1]. By using the coproduct on Λ we can expand F[𝔛_k-1] = F[x_k+𝔛_k] in powers of x_k^i with some coefficients F_i ∈Λ as F[x_k+𝔛_k] = ∑_i ≥ 0 x_k^iF_i[𝔛_k]. From Lemma <ref> we have ∂_-^(k)(F[𝔛_k-1]) = ∂_-^(k)(F[x_k+𝔛_k]) = ∂_-^(k)(∑_i ≥ 0 x_k^iF_i[𝔛_k]) = τ_k 𝔠_x_k( ∑_i ≥ 0 x_k^iF_i[𝔛_k -x_k]Exp[-(t-1)x_k^-1𝔛_k] ) = τ_k 𝔠_x_k F[𝔛_k -x_k +x_k]Exp[-(t-1)x_k^-1𝔛_k] = τ_k 𝔠_x_k F[𝔛_k] Exp[-(t-1)x_k^-1𝔛_k] = τ_k F[𝔛_k] 𝔠_x_k Exp[-(t-1)x_k^-1𝔛_k] = τ_k F[𝔛_k] = F[𝔛_k-1]. For all G ∈Λ and g(x) ∈(k)^+ ∂_-^(k)(G[x_k + x_k+1+…]g(x)) = G[x_k + x_k+1+…] ∂_-^(k)(g(x)). It suffices to take g(x) ∈(k)^+ to be of the form g(x) = f(x_1,…,x_k)F[𝔛_k] with f∈_k^+ and F ∈Λ. From Lemma <ref> we get the following: ∂_-^(k)(G[x_k + x_k+1+…]g(x)) = ∂_-^(k)(G[𝔛_k-1]g(x)) = τ_k 𝔠_x_k G[𝔛_k-1-x_k] f(x_1,…,x_k)F[𝔛_k - x_k]Exp[-(t-1)x_k^-1𝔛_k] = τ_k 𝔠_x_k G[𝔛_k] f(x_1,…,x_k)F[𝔛_k - x_k]Exp[-(t-1)x_k^-1𝔛_k] = τ_kG[𝔛_k]𝔠_x_kf(x_1,…,x_k)F[𝔛_k - x_k]Exp[-(t-1)x_k^-1𝔛_k] = G[𝔛_k-1]τ_k𝔠_x_kf(x_1,…,x_k)F[𝔛_k - x_k]Exp[-(t-1)x_k^-1𝔛_k] = G[𝔛_k-1]∂_-^(k)(f(x_1,…,x_k)F[𝔛_k]) = G[𝔛_k-1]∂_-^(k)(g(x)). For G ∈Λ and g(x) ∈(k)^+ ∂_-^(k)(G[X]g(x)) = G[X] ∂_-^(k)(g(x)). Take G ∈Λ and g(x) ∈(k)^+. Expand G[X] as a finite sum of terms of the form f_i(x_1,…,x_k-1)F_i[x_k +…], where f_i ∈_k-1 and F_i ∈Λ so G[X] = ∑_i f_i(x_1,…,x_k-1)F_i[x_k +…]. By Lemma <ref> and the fact that ∂_-^(k) is a _k-1^+-linear map from Definition <ref> we now see that ∂_-^(k)(G[X]g(x)) = ∑_i∂_-^(k)( f_i(x_1,…,x_k-1)F_i[x_k +…]g(x)) = ∑_i f_i(x_1,…,x_k-1)F_i[x_k +…]∂_-^(k)(g(x)) = G[X]∂_-^(k)(g(x)). We can now construct a full -weight basis of 𝒫_as^+. We parameterize this basis by pairs (μ | λ) for μ a reduced composition and λ a partition. Combinatorially, this is reasonable because, as already mentioned, the monomial basis for _as^+, {x^μm_λ|μ∈, λ∈}, is indexed by pairs of reduced compositions and partitions. For μ a reduced composition and λ a partition define the stable-limit non-symmetric Macdonald function corresponding to (μ|λ) as E_(μ|λ) := ∂_-^(ℓ(μ)+1)⋯∂_-^(ℓ(μ)+ℓ( λ))E_μ*λ. For a partition λ define 𝒜_λ := E_(∅|λ)∈Λ . Later in Theorem <ref>, we will show that the collection {E_(μ|λ)|μ∈, λ∈} is a -weight basis for _as^+. Note importantly that E_(μ|λ)∈𝒫(ℓ(μ))^+ and E_(μ|λ) is homogeneous of degree |μ| + |λ|. Further, we have E_(μ|∅) = E_μ and E_(∅ |λ) = 𝒜_λ. Notice that in Definition <ref> it makes sense to consider E_(μ|λ) when μ is not necessarily reduced. 
However, it is a nontrivial consequence of Theorem <ref> that an analogously defined E_(μ*0|λ) is a nonzero scalar multiple of E_(μ|λ). Thus there is no need to consider the case of μ non-reduced when building a basis of _as^+. There is another basis of _as^+ given by Ion and Wu in their unpublished work <cit.> which is equipped with a natural ordering with respect to which the limit Cherednik operators are triangular. It follows then that after we show in Corollary <ref> that the E_(μ|λ) are -weight vectors that each E_(μ|λ) has a triangular expansion in Ion and Wu's basis. The stable-limit non-symmetric Macdonald functions E_(μ|λ) as defined in this paper are distinct from the stable-limits of non-symmetric Macdonald polynomials occurring in <cit.>. In their paper Haglund, Haiman, and Loehr investigate stable-limits of the form (E_0^m* μ)_m ≥ 0 where μ is a composition. Their analysis does not require the convergence definition of Ion and Wu as the sequences (E_0^m* μ)_m ≥ 0 have stable limits in the traditional sense. Further, the limits of the (E_0^m* μ)_m ≥ 0 sequences are symmetric functions whereas, as we will see soon, the E_(μ|λ) are not fully symmetric in general. The following simple lemma will be used to show that since the E_μ*λ are -weight vectors the stable-limit non-symmetric Macdonald functions E_(μ|λ) are -weight vectors as well. We describe their weights in Corollary <ref>. Suppose f ∈𝒫(k)^+ is a -weight vector with weight (α_1,…,α_k,0,0,…). Then ∂_-^(k) f ∈𝒫(k-1)^+ is a -weight vector with weight (α_1,…,α_k-1,0,0,…). We know that from <cit.> for g ∈𝒫(k)^+ and 1≤ i ≤ k-1, _i ∂_-^(k) g = ∂_-^(k)_i g so _i∂_-^(k) f = ∂_-^(k)_i f = α_i ∂_-^(k) f. From <cit.> we have that if i≥ k then _i annihilates 𝒫(k-1). Since ∂_-^(k)f ∈𝒫(k-1)^+ for all i ≥ k, _i ∂_-^(k)f = 0. Here we give a few basic examples of stable-limit non-symmetric Macdonald functions expanded in the Hall-Littlewood basis 𝒫_λ and their corresponding weights. ∙E_(∅|2) = 𝒫_2[x_1+…] +q^-1/1-q^-1t𝒫_1,1[x_1+…]; weight α_(∅|2) = (0,0,…) ∙E_(2|∅) = x_1^2 + q^-1/1-q^-1tx_1 𝒫_1[x_2+…]; weight α_(2|∅) =(q^2t,0,…) ∙E_(1,1,1|∅) = x_1x_2x_3; weight α_(1,1,1|∅) =(qt^3,qt^2,qt,0,…) ∙E_(1,1|1) = x_1x_2𝒫_1[x_3+…]; weight α_(1,1|1)= (qt^3,qt^2,0,…) ∙E_(1|1,1) = x_1𝒫_1,1[x_2+⋯]; weight α_(1|1,1)= (qt^3,0,…) As an immediate result of Lemma <ref> we have the following: For μ∈ and λ∈, E_(μ|λ)∈_as^+ is a -weight vector with weight α_(μ|λ) given explicitly by α_(μ|λ)(i) = α_μ*λ(i) = q^μ_it^ℓ(μ)+ℓ(λ)+1-β_μ*λ(i) i≤ℓ(μ) , μ_i ≠ 0 0 otherwise. By Definition <ref> we have that E_(μ|λ) := ∂_-^(ℓ(μ)+1)⋯∂_-^(ℓ(μ)+ℓ( λ))E_μ*λ. From Theorem <ref> we know that E_μ*λ is a -weight vector with weight α_μ*λ. Recall that from Lemma <ref> that α_μ*λ(i) = q^(μ*λ)_it^ℓ(μ*λ)+1-β_μ*λ(i) for i ≤ℓ(μ*λ) and equals 0 for i > ℓ(μ*λ). Using Lemma <ref> inductively now shows that E_(μ|λ) is a -weight vector with weight α_(μ|λ) given by the expression given in the statement of this corollary. By using the HHL-type formula we proved for the functions E_μ in Corollary <ref>, we readily find a similar formula for the full set of stable-limit non-symmetric Macdonald functions. 
For a reduced composition μ and partition λ we have that E_(μ|λ) = ∑_ν partition |ν| ≤ |μ|+|λ|∑_σ:μ*λ * 0^ℓ(ν)→ [ℓ(μ)+ℓ(λ)+ℓ(ν)] non-attacking ∀ i = 1,...,ℓ(ν) ν_i = |σ^-1(ℓ(μ)+ℓ(λ)+i)|Γ(σ) x_1 ^|σ^-1(1)|⋯ x_ℓ(μ) ^|σ^-1(ℓ(μ))| × ℬ_|σ^-1(ℓ(μ)+1)|⋯ℬ_|σ^-1(ℓ(μ)+ℓ(λ))|( m_ν)[𝔛_ℓ(μ)+ ℓ(λ)] where Γ(σ) := q^-(σ)t^(σ)∏_ u ∈ dg'(μ*λ * 0^ℓ(ν)) σ(u) ≠σ(d(u)) u  not in row 1 ( 1-t/1-q^-((u)+1)t^(a(u)+1)) ∏_ u ∈ dg'(μ * λ * 0^ℓ(ν)) σ(u) ≠σ(d(u)) u  in row 1 ( 1-t ) . Unfortunately, this formula is not nearly as elegant or useful as the HHL formula (<ref>). The main obstruction comes from not having a full understanding of the action of the Jing operators ℬ_a on the monomial symmetric functions. If one were to find an explicit expansion of elements like ℬ_a_1⋯ℬ_a_r(m_λ) into another suitable basis of Λ (possibly the 𝒫_ν basis) one would be able to give a much more elegant description of these functions. Likely there is a nice way to do this that has eluded this author. §.§ 𝒜_λ Basis for Λ and Symmetrization via the Trivial Hecke Idempotent Lemma <ref> shows that the following operator is well defined on _as^+ i.e. independent of k. For f∈𝒫(k)^+⊂𝒫_as^+ define σ(f) := ∂_-^(1)⋯∂_-^(k) f . Then σ defines an operator 𝒫_as^+→Λ which we call the stable-limit symmetrization operator. Note that σ( E_λ) = 𝒜_λ and σ(E_(μ|λ)) = σ(E_μ*λ). For all 0 ≤ k < n define the operator ϵ^(n)_k: _n^+→_n^+ as ϵ^(n)_k(f) : = 1/[n-k]_t!∑_σ∈𝔖_(1^k,n-k) t^n-k 2 - ℓ(σ) T_σ(f). Here 𝔖_(1^k,n-k) is the Young subgroup of 𝔖_n corresponding to the composition (1^k,n-k), T_σ = T_s_i_1⋯ T_s_i_r whenever σ = s_i_1⋯ s_i_r is a reduced word representing σ, and [m]_t! : = ∏_i=1^m(1-t^i/1-t) is the t-factorial. We will simply write ϵ^(n) for ϵ^(n)_0. For n ≥ 1 define the rational function Ω_n(x) = Ω_n(x_1,…,x_n;t): = ∏_1≤ i<j≤ n(x_i - tx_j/x_i - x_j). We will need the following technical result relating the action of ϵ^(n) on polynomials to a Weyl character type sum involving Ω_n. For f(x) ∈_n^+ ϵ^(n)(f(x)) = 1/[n]_t!∑_σ∈𝔖_nσ( f(x)Ω_n(x) ). See Remark 4.17 in <cit.>. After translating the finite Hecke algebra quadratic relations in <cit.> to match those occurring in this paper the formula matches. From the formula above in Proposition <ref> we can show that the sequence of trivial idempotents (ϵ^(n))_n≥ 1 converges in the sense of <cit.>. The sequence of operators (ϵ^(n))_n≥ 1 converges to an idempotent operator ϵ: _as^+→Λ such that for all i ≥ 1, ϵ T_i = ϵ. From <cit.> in Chapter 3 and Proposition <ref> we see that for all partitions λ with ℓ(λ) = k and n ≥ k that ϵ^(n)(x^λ) = [n-k]_t!/[n]_t! v_λ(t) P_λ[x_1+… + x_n;t] where P_λ[X;t] is the Hall-Littlewood symmetric function defined by Macdonald (not to be confused with 𝒫_λ[X] seen previously in this paper) and v_λ(t) := ∏_i ≥ 1([m_i(λ)]_t!) where m_i(λ) is the number of i 's in λ = 1^m_1(λ)2^m_2(λ)⋯. Now we note that with respect to the t-adic topology, lim_n →∞[n-k]_t!/[n]_t! = (1-t)^k so that lim_nϵ^(n)(x^λ) = v_λ(t)(1-t)^ℓ(λ)P_λ[X;t] and hence (ϵ^(n)(x^λ))_n ≥ 1 converges. Note that following Macdonald's definitions, v_λ(t)(1-t)^ℓ(λ)P_λ[X;t] = Q_λ[X;t]. Since ϵ^(n)T_i = ϵ^(n) for 1≤ i ≤ n-1 it follows that for all compositions μ, the sequence (ϵ^(n)(x^μ))_n ≥ 1 is convergent. Clearly from definition we have that for all symmetric functions F ∈Λ and f(x) ∈_n^+ ϵ^(n)(F[x_1+ … + x_n]f(x)) = F[x_1+ … + x_n] ϵ^(n)(f(x)). It follows now from a straightforward convergence argument using Remark <ref> that for all g ∈_as^+ the sequence (ϵ^(n)(π_n(g)))_n≥ 1 converges. 
The resulting operator ϵ:= lim_nϵ^(n)∘π_n is evidently idempotent as its codomain is Λ and certainly ϵ acts as the identity on symmetric functions. Further, for all i ∈ℕ we have ϵ T_i = lim_nϵ^(n)∘π_n T_i and since π_n commutes with T_i for n > i + 1 we see that lim_nϵ^(n)∘π_n T_i = lim_nϵ^(n) T_i ∘π_n = lim_nϵ^(n)∘π_n = ϵ. For all k ≥ 0 the sequence (ϵ^(n)_k)_n>k converges to an idempotent operator ϵ_k: _as^+→(k)^+ such that for all i ≥ k+1, ϵ_k T_i = ϵ_k. This follows immediately from Proposition <ref> after shifting indices and noting that the operators ϵ^(n)_k commute with multiplication by x_1,…, x_k. Now we will extend our definition of the stable-limit symmetrization operator σ to partial symmetrization operators in the natural way. For k ≥ 0 let σ_k: _as^+→(k)^+ be defined on g ∈(n)^+ for n ≥ k by σ_k(g):= ∂_-^(k+1)⋯∂_-^(n)(g). The operators σ_k are well defined by Lemma <ref>. In particular, if g ∈(ℓ)^+ for 0 ≤ℓ≤ k then (ℓ)^+⊂(k)^+ and there is no ambiguity in defining σ_k(g) = ∂_-^(k+1)⋯∂_-^(n)(g) as above. Note that σ_0 = σ. Further, for all μ∈ and λ∈ we see that in this new terminology E_(μ|λ) = σ_ℓ(μ)(E_μ*λ). Further, if k ≤ℓ then σ_kσ_ℓ = σ_k. We will now show that as operators on _as^+, ϵ_ℓ = σ_ℓ for all ℓ≥ 0. For all ℓ≥ 0, ϵ_ℓ = σ_ℓ . By shifting indices it suffices to just prove that ϵ = σ, i.e., the ℓ = 0 case. Further, since both maps are T_i-equivariant Λ-module maps (see Corollary <ref>) it suffices to show that for all partitions λ, ϵ(x^λ) = σ(x^λ). From the proof of Proposition <ref> we saw that ϵ(x^λ) = Q_λ[X;t] whereas it follows from the definition of the Jing vertex operators that σ(x^λ) = 𝒫_λ[X]. Therefore, it suffices to argue that Q_λ[X;t] = 𝒫_λ[X]. To this end we will prove that 𝒫_λ[X] = ⟨ z_1^λ_1⋯ z_r^λ_r⟩ Exp[(1-t)(z_1+…+z_r)X]Exp[(t-1)∑_1≤ i < j ≤ rz_j/z_i] which by 2.15 in Macdonald Chapter 3 <cit.> is an alternative definition for Q_λ[X;t]. Suppose λ = (λ_1,…, λ_r) is a partition. Note first that by definition 𝒫_λ[X] = ℬ_λ_1⋯ℬ_λ_r(1). We will now induct on the number of operators ℬ acting on 1 in the expression ℬ_λ_1⋯ℬ_λ_r(1). As a base case ℬ_λ_r(1) = ⟨ z_r ^λ_r⟩ 1[X-z_r^-1]Exp[(1-t)z_rX] = ⟨ z_r ^λ_r⟩ Exp[(1-t)z_rX]. We claim that for all 1≤ k ≤ r ℬ_λ_k⋯ℬ_λ_r(1) = ⟨ z_k^λ_k⋯ z_r^λ_r⟩ Exp[(1-t)(z_k+… +z_r)X]Exp[(t-1)∑_k≤ i<j≤ rz_j/z_i]. Suppose the above is true for some 1< k ≤ r. Then ℬ_λ_k-1ℬ_λ_k⋯ℬ_λ_r(1) = ℬ_λ_k-1( ⟨ z_k^λ_k⋯ z_r^λ_r⟩ Exp[(1-t)(z_k+… +z_r)X]Exp[(t-1)∑_k≤ i<j≤ rz_j/z_i] ) = ⟨ z_k-1^λ_k-1⟩⟨ z_k^λ_k⋯ z_r^λ_r⟩ Exp[(1-t)(z_k+… +z_r)(X-z_k-1^-1)]Exp[(t-1)∑_k≤ i<j≤ rz_j/z_i] Exp[(1-t)z_k-1X]. Now we use the additive property of the plethystic exponential namely, Exp[A+B] = Exp[A]Exp[B], to rearrange terms and get ⟨ z_k-1^λ_k-1⋯ z_r^λ_r⟩ Exp[(1-t)(z_k+… +z_r)X] Exp[(1-t)z_k-1X]Exp[(t-1)∑_k≤ i<j≤ rz_j/z_i]Exp[(t-1)(z_k/z_k-1+… +z_r/z_k-1)] which simplifies to ⟨ z_k-1^λ_k-1⋯ z_r^λ_r⟩ Exp[(1-t)(z_k-1+z_k+… +z_r)X] Exp[(t-1)∑_k-1≤ i<j≤ rz_j/z_i] showing that the formula (<ref>) holds for all 1≤ k≤ r. Taking k=1 shows equation (<ref>) holds. As an immediate consequence of Proposition <ref> we find the following enlightening description for the E_(μ|λ) functions. For all (μ|λ) with μ a reduced composition and λ a partition, E_(μ|λ) = lim_nϵ_ℓ(μ)^(n)(E_μ*λ*0^n-(ℓ(μ)+ℓ(λ))). In particular, for partitions λ, 𝒜_λ[X] = (1-t)^ℓ(λ) P_λ[X;q^-1,t] where P_λ[X;q^-1,t] is the symmetric Macdonald function. As a consequence the set {𝒜_λ : λ∈} is a basis of Λ. 
The P_λ[X;q,t] are the symmetric Macdonald functions as defined by Macdonald in <cit.> and seen in Cherednik's work <cit.> not to be confused with the modified symmetric Macdonald functions H_μ seen in many places but in particular in the work of Haiman <cit.>. Further, Corollary <ref> gives an interpretation of the E_(μ|λ) as limits of partially symmetrized non-symmetric Macdonald polynomials. Goodberry in <cit.> and Lapointe in <cit.> have investigated similar families of partially symmetric Macdonald polynomials. Up to a change of variables and limiting these different notions are likely directly related. In order to prove the first main theorem in this paper, Theorem <ref>, we will require the following straightforward lemma. For any composition μ there is some nonzero scalar γ_μ∈ℚ(q,t) such that σ( E_μ ) = γ_μ𝒜_(μ) where γ_μ = 1 when μ is a partition. We know that for all partitions λ, σ( E_λ ) = 𝒜_λ so this lemma holds trivially for partitions. Now we proceed by induction on Bruhat order similarly to the argument in the proof of Theorem <ref>. To show the lemma holds it suffices to show that if μ is a composition and k such that s_k(μ) > μ in Bruhat order and σ( E_μ ) = γ_μ𝒜_(μ) for γ_μ≠ 0 then σ( E_s_k(μ) ) = γ_s_k(μ)𝒜_(μ) for γ_s_k(μ)≠ 0. To this end fix such μ and k. Then by Corollary <ref> E_s_k(μ) = ( T_k + (1-t)α_μ(k+1)/α_μ(k)- α_μ(k+1)) E_μ. From Proposition <ref> σ = lim_mϵ^(m) so that σT_k = σ. Therefore, σ(E_s_k(μ)) = σ( ( T_k + (1-t)α_μ(k+1)/α_μ(k)- α_μ(k+1)) E_μ) = (1+ (1-t)α_μ(k+1)/α_μ(k)- α_μ(k+1)) σ(E_μ) = ( α_μ(k) - t α_μ(k+1)/α_μ(k) - α_μ(k+1)) γ_μ𝒜_(μ). By Lemma <ref> we see that since s_k(μ) > μ it follows that α_μ(k) ≠ t α_μ(k+1). Hence, γ_s_k(μ) := ( α_μ(k) - t α_μ(k+1)/α_μ(k) - α_μ(k+1)) γ_μ≠ 0 so the result follows. Note that using the recursive formula γ_s_k(μ) = ( α_μ(k) - t α_μ(k+1)/α_μ(k) - α_μ(k+1)) γ_μ in the proof of Lemma <ref>, the formula for the eigenvalues α_μ(k) in Lemma <ref>, and the base condition γ_μ = 1 for μ a partition, it is possible to give an explicit expression for γ_μ for any composition μ. However, all we need for the purposes of this paper is that γ_μ≠ 0 so we will not find such an explicit expression for γ_μ. §.§.§ First Main Theorem and a Full -Weight Basis of _as^+ Finally, we prove that the stable-limit non-symmetric Macdonald functions are a basis for 𝒫_as^+. To do this we will use the stable-limit symmetrization operator to help distinguish between stable-limit non-symmetric Macdonald functions with the same -weight. (First Main Theorem) The E_(μ|λ) are a -weight basis for 𝒫_as^+. As there are sufficiently many E_(μ|λ) in each graded component of every 𝒫(k)^+ it suffices to show that these functions are linearly independent. Certainly, weight vectors in distinct weight spaces are linearly independent. Using Lemmas <ref> and <ref>, we deduce that if E_(μ^(1)|λ^(1)) and E_(μ^(2)|λ^(2)) have the same weight then necessarily μ^(1) = μ^(2). Hence, we can restrict to the case where we have a dependence relation c_1E_(μ|λ^(1)) +… + c_N E_(μ|λ^(N)) = 0 for λ^(1),…,λ^(N) distinct partitions. By applying the stable-limit symmetrization operator we see that σ(c_1E_(μ|λ^(1)) +… + c_N E_(μ|λ^(N)) ) = σ(c_1E_μ*λ^(1) +…+c_N E_μ*λ^(N)) = 0 . Now by Lemma <ref>, σ(E_μ*λ^(i)) = γ_μ*λ^(i)𝒜_(μ*λ^(i)) with nonzero scalars γ_μ*λ^(i) yielding 0 = c'_1𝒜_(μ*λ^(1)) +…+ c'_n 𝒜_(μ*λ^(N)). The partitions λ^(i) are distinct so we know that the partitions (μ*λ^(i)) are distinct as well. 
By Corollary <ref> the symmetric functions 𝒜_(μ*λ^(i)) are linearly independent. Thus c'_i = 0 implying c_i = 0 for all 1 ≤ i ≤ N as desired. § SOME RECURRENCE RELATIONS FOR THE E_(Μ|Λ) In this section we will discuss a few recurrence relations for the stable-limit non-symmetric Macdonald functions. We start by looking at the action of the Demazure-Lusztig operators T_i and the lowering operators ∂_-. For a reduced composition μ = (μ_1,…,μ_r) and partition λ =(λ_1,…,λ_k) if μ_r ≥λ_1 and μ_r-1≠ 0 then ∂_-^(r)( E_(μ_1,…,μ_r|λ_1,…,λ_k)) = E_(μ_1,…,μ_r-1|μ_r,λ_1,…,λ_k). This follows immediately from the definitions of E_(μ|λ) and ∂_-^(r). Take μ∈ and λ∈ and suppose 1≤ i ≤ℓ(μ) -1 such that s_i(μ) > μ and s_i(μ) ∈. Then E_(s_i(μ)|λ) = ( T_i + (1-t)α_μ*λ(i+1)/α_μ*λ(i)- α_μ*λ(i+1)) E_(μ|λ). Since s_i(μ) > μ we know that s_i(μ*λ) > μ*λ so by Corollary <ref> E_s_i(μ*λ) = ( T_i + (1-t)α_μ*λ(i+1)/α_μ*λ(i)- α_μ*λ(i+1)) E_μ*λ. Now we know T_i commutes with the operators ∂_-^(ℓ(μ)+1),… , ∂_-^(ℓ(μ)+ℓ( λ)) and thus we see that E_(s_i(μ)|λ) = ∂_-^(ℓ(μ)+1)⋯∂_-^(ℓ(μ)+ℓ( λ)) (E_s_i(μ*λ)) = ∂_-^(ℓ(μ)+1)⋯∂_-^(ℓ(μ)+ℓ( λ))( ( T_i + (1-t)α_μ*λ(i+1)/α_μ*λ(i)- α_μ*λ(i+1)) E_μ*λ) = ( T_i + (1-t)α_μ*λ(i+1)/α_μ*λ(i)- α_μ*λ(i+1)) ∂_-^(ℓ(μ)+1)⋯∂_-^(ℓ(μ)+ℓ( λ))(E_μ*λ) = ( T_i + (1-t)α_μ*λ(i+1)/α_μ*λ(i)- α_μ*λ(i+1)) E_(μ|λ). For μ = (μ_1,…, μ_r) ∈ and λ∈ we have that T_rE_(μ|λ) = γ_μ*λ/γ_(μ_1,…,μ_r-1,0,μ_r)*λE_(μ_1,…,μ_r-1,0,μ_r|λ). First note that by Corollary <ref> φ_r(E_(μ|λ)) = (T_r(_r-_r+1)+(1-t)_r+1)E_(μ|λ) = (α_μ*λ(r)-0)T_rE_(μ|λ) + (1-t)(0)E_(μ|λ) = α_μ*λ(r)T_rE_(μ|λ) and by Lemma <ref> α_μ*λ(r) ≠ 0 since μ_r ≠ 0. Therefore, φ_r(E_(μ|λ)) is nonzero and therefore must be a -weight vector with weight (α_μ*λ(1),…,α_μ*λ(r-1),0,α_μ*λ(r), 0,…). By using the explicit formula for the eigenvalues α_μ*λ(i) from Lemma <ref> we see that for 1 ≤ i ≤ r, α_μ*λ(i) = 0 exactly when μ_i = 0 and further, for all 1 ≤ i ≤ r with μ_i ≠ 0, α_μ*λ(i) = q^μ_it^b_i for some b_i. Hence by Theorem <ref> and Corollary <ref>, φ_r(E_(μ|λ)) is of the form φ_r(E_(μ|λ)) = ∑_ν a_νE_(μ_1,…,μ_r-1,0,μ_r|ν) ν ranges over a finite set of partitions ν and a_ν are some scalars. Note that we have σ(φ_r(E_(μ|λ))) = σ(α_μ*λ(r)T_rE_(μ|λ)) and since σT_r = σ σ(φ_r(E_(μ|λ)))= α_μ*λ(r)σ(E_(μ|λ)) = α_μ*λ(r) γ_μ*λ𝒜_(μ*λ) using Lemma <ref>. Similarly, we see that σ( ∑_ν a_νE_(μ_1,…,μ_r-1,0,μ_r|ν)) = ∑_ν a_νγ_(μ_1,…,μ_r-1,0,μ_r)*ν𝒜_(μ*ν) since ((μ_1,…,μ_r-1,0,μ_r)*ν) = (μ*ν) for all ν. Thus 𝒜_(μ*λ) = ∑_ν a_ν' 𝒜_(μ*ν) where a'_ν:= a_νγ_(μ_1,…,μ_r-1,0,μ_r)*ν/α_μ*λ(r)γ_μ*λ. By Corollary <ref> we know that the 𝒜_β are a basis for Λ and so we see that the only possible partition ν that can contribute a nonzero term in the above expansion is ν = λ. Further, a'_λ = 1 and thus a_λ = α_μ*λ(r)γ_μ*λ/γ_(μ_1,…,μ_r-1,0,μ_r)*λ. Therefore, φ_r(E_(μ|λ)) = α_μ*λ(r) T_r E_(μ|λ) = α_μ*λ(r)γ_μ*λ/γ_(μ_1,…,μ_r-1,0,μ_r)*λE_(μ_1,…,μ_r-1,0,μ_r|λ) which yields T_rE_(μ|λ) = γ_μ*λ/γ_(μ_1,…,μ_r-1,0,μ_r)*λE_(μ_1,…,μ_r-1,0,μ_r|λ). Define ω_m := X_1T_1^-1⋯ T_m-1^-1 considered as an operator on _m^+. These operators are the same as the corresponding operators of the same name defined by Ion and Wu up to inversion and some scalars. We have defined the operators as above for convenience. The operators ω_m and ω_m are used by Ion and Wu <cit.> to give operators analogous to the d_+,d_+^* operators in 𝔸_t,q. 
The sequences of operators (ω_m)_m≥ 1 and (ω_m^-1)_m ≥ 1 converge to operators ω, ω^*: _as^+→_as^+ respectively with actions given by * ω(x_1^a_1⋯ x_k^a_kF[X]) = x_1T_1^-1⋯ T_k^-1x_1^a_1⋯ x_k^a_kF[X] * ω^*(x_1^a_1⋯ x_k^a_kF[X]) = x_2^a_1⋯ x_k+1^a_kF[X+(q-1)x_1]. Let (f_m)_m≥ 1 be a convergent sequence with limit f ∈(k)^+. We start by showing the sequence (ω_m(f_m))_m ≥ 1 converges to an element of _as^+. It follows directly by the definition of convergence that there exists some M >k such that for all i and m with m ≥ M and k+1 ≤ i ≤ m-1, T_if_m = f_m. Therefore, for all m ≥ M ω_m(f_m) = x_1T_1^-1⋯ T_k^-1f_m which clearly converges to x_1T_1^-1⋯ T_k^-1f. It follows then that the sequence of operators (ω_m)_m≥ 1 converges to an operator which we call ω. By considering f = x_1^a_1⋯ x_k^a_kF[X] with F ∈Λ we get the first formula in the lemma statement above. Next we will show the sequence (ω^-1_m(π_m(f)))_m ≥ 1 converges. Expand f as f = ∑_i=1^N c_ix^μ^(i)F_i[X] for c_i ∈ℚ(q,t), compositions μ^(i), and F_i ∈Λ where we may assume each composition μ^(i) has length k so that for all m ≥ k π_m(f) = ∑_i=1^N c_i x^μ^(i) F_i[x_1+… + x_m]. Applying ω_m^-1 to π_m(f) gives for m ≥ k ω_m^-1(π_m(f)) = ∑_i=1^N c_i x_2^μ^(i)_1⋯ x_k+1^μ^(i)_kF[qx_1+x_2+… + x_m] so therefore we get lim_mω_m^-1(π_m(f)) = ∑_i=1^N c_i x_2^μ^(i)_1⋯ x_k+1^μ^(i)_kF[X+(q-1)x_1]. Thus the sequence of operators (ω_m^-1)_m ≥ 1 converges to an operator which we call ω^*. Lastly, by applying this formula to f = x_1^a_1⋯ x_k^a_kF[X] with F ∈Λ to see the second formula given in the lemma statement. In line with the above results in this section we will now give a partial generalization of the classical Knop-Sahi relation regarding the action of the ω operators on Macdonald polynomials. For all compositions μ t^#{j:μ_j ≠ 0}ω( E_μ) = x_1 ω^*(E_μ) = E_1*μ. Suppose ℓ(μ) = n. Recall that for all m ≥ 1 (Y^(n+m)_n+m)^-1 = t^n+m-1ω_n+mT_1^-1⋯ T_n+m-1^-1. Therefore, by recalling the eigenvalue notation in Lemma <ref> we have t^n+m-1ω_n+mT_1^-1⋯ T_n+m-1^-1E_μ*0^m = (Y^(n+m)_n+m)^-1 E_μ*0^m = α_μ^(m)(n+m)^-1E_μ*0^m so that t^n+m-1α_μ^(m)(n+m) x_1 T_1^-1⋯ T_n+m-1^-1E_μ*0^m = x_1 ω_n+m^-1E_μ*0^m. From Lemma <ref> we see that t^n+m-1α_μ^(m)(n+m) = t^#{j:μ_j ≠ 0} which gives t^#{j:μ_j ≠ 0} x_1 T_1^-1⋯ T_n+m-1^-1E_μ*0^m = t^#{j:μ_j ≠ 0}ω_n+m(E_μ*0^m) = x_1 ω_n+m^-1E_μ*0^m. From the classical Knop-Sahi relations (see <cit.>) applied to E_μ*0^m we get x_1 ω_n+m^-1E_μ*0^m = E_1*μ*0^m-1. Applying Corollary <ref> and Lemma <ref> as m →∞ now gives t^#{j:μ_j ≠ 0}ω(E_μ) = x_1 ω^*(E_μ) = E_1*μ. § CONSTRUCTING E_(Μ|Λ)-DIAGONAL OPERATORS FROM SYMMETRIC FUNCTIONS The main goal of the following section of this paper is to construct an operator on _as^+ which is diagonal in the stable-limit Macdonald function basis, commutes with the limit Cherednik operators _i, but does not annihilate Λ. This operator will be constructed from a limit of operators arising from the action of t^mY^(m)_1+ … +t^m Y^(m)_m on _m^+. After finding the eigenvalues of this new operator we will show that the addition of this operator to the algebra generated by the limit Cherednik operators has simple spectrum on _as^+. We begin with the following natural definition. For F ∈Λ define the operator Ψ_F^(m): _m^+→_m^+ by Ψ_F^(m) := F[t^m Y_1^(m)+ … + t^m Y_m^(m)]. Further, for a composition μ with ℓ(μ) = n and m ≥ 0 define the scalar κ_μ^(m)(q,t) as κ_μ^(m)(q,t):= ∑_i=1^n+m t^n+mα_μ^(m)(i). Recall from Lemma <ref> that α_μ^(m)(i) is given by Y_i^(n+m)E_μ*0^m = α_μ^(m)(i)E_μ*0^m. 
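As an illustrative aside (the code and the names beta and kappa_m below are ours and are not needed for the arguments that follow), the scalars κ_μ^(m)(q,t) can be computed directly from the explicit eigenvalues recorded in the proof of Lemma <ref>: the terms coming from the nonzero parts of μ are independent of m, while the remaining terms build up a geometric tail; the next lemma makes this precise. A small sympy sketch:

from sympy import symbols

q, t = symbols('q t')

def beta(mu, i):
    # beta_mu(i) = #{ j <= i : mu_j <= mu_i } + #{ j > i : mu_i > mu_j }  (1-indexed)
    n = len(mu)
    return sum(1 for j in range(1, i + 1) if mu[j - 1] <= mu[i - 1]) \
         + sum(1 for j in range(i + 1, n + 1) if mu[i - 1] > mu[j - 1])

def kappa_m(mu, m):
    # kappa_mu^(m)(q,t) = sum_{i=1}^{n+m} t^{n+m} alpha_mu^(m)(i), using the explicit values
    # t^{n+m} alpha_mu^(m)(i) = q^{mu_i} t^{n+m+1-(beta_mu(i)+m*[mu_i != 0])} for i <= n and
    # t^{n+m+1-(#{j : mu_j = 0}+(i-n))} for i > n, as in the proof of the lemma recalled above.
    n = len(mu)
    zeros = sum(1 for a in mu if a == 0)
    total = 0
    for i in range(1, n + 1):
        total += q**mu[i - 1] * t**(n + m + 1 - beta(mu, i) - m * (mu[i - 1] != 0))
    for i in range(n + 1, n + m + 1):
        total += t**(n + m + 1 - zeros - (i - n))
    return total

for m in (0, 1, 2, 3):
    print(m, kappa_m((2, 0, 1), m))
# The terms q^2*t + q*t^2 are independent of m, while the rest is the tail
# t^3 + ... + t^(m+2) + t^(m+3), converging t-adically to t^3/(1-t).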
For all compositions μ the sequence (κ_μ^(m)(q,t))_m≥ 0 converges to some κ_μ(q,t) ∈ℚ(q,t). Further, κ_μ(q,t) = κ_μ*0^k(q,t) for all k ≥ 0 and κ_μ(q,t) = κ_s_i(μ)(q,t) for all 1≤ i ≤ℓ(μ) -1. Using Lemma <ref> we get the following: κ_μ^(m)(q,t) = ∑_i=1^n+mt^n+mα_μ^(m)(i) = ∑_i=1^nt^n+mα_μ^(m)(i) + ∑_i=n+1^n+m t^#{j:μ_j ≠ 0}t^m+1-(i-n) = ∑_i=1^nt^n α_μ^(0)(i) t^m 1(μ_i=0) + t^#{j:μ_j ≠ 0}∑_i=1^mt^m-i+1 = ∑_μ_i ≠ 0t^n α_μ^(0)(i) + t^m ∑_μ_i =0 t^n α_μ^(0)(i) + t^#{j:μ_j ≠ 0}∑_i=1^mt^i. Therefore, κ_μ(q,t) := lim_mκ_μ^(m)(q,t) = ( ∑_i:μ_i ≠ 0t^n α_μ^(0)(i) ) + t^1+#{j:μ_j ≠ 0}/1-t∈ℚ(q,t). The last statement regarding κ_μ*0^k(q,t) and κ_s_i(μ)(q,t) follows now directly from Lemma <ref> and classical DAHA intertwiner theory. Recall from the proof of Lemma <ref> that t^nα_μ^(0) = q^μ_it^n+1-β_μ(i). Applying this to the Lemma <ref> gives the combinatorial formula κ_μ(q,t) = t^1+#{j:μ_j ≠ 0}/1-t+ ∑_μ_i ≠ 0 q^μ_it^n+1- β_μ(i). If we consider the partition λ to have an infinite string of 0's attached to its tail then κ_λ(q,t) = ∑_i=1^∞ q^λ_it^i. Notice that this is exactly equal to t/1-t(1 - (1-t)(1-q)B_λ(q,t) ) where B_λ(q,t) is the diagram generator of λ in <cit.>. Let λ and ν be partitions. Then κ_λ(q,t) = κ_ν(q,t) if and only if λ = ν. This follows readily from the identity κ_λ(q,t) = ∑_i=1^∞ q^λ_it^i given in the prior remark. In this next result we will show that the sequence of operators (Ψ_p_1^(m))_m≥ 1 converges to a well defined map on _as^+. As expected these operators are well-behaved on sequences of the form ϵ_ℓ(μ)^(m)(E_μ*λ*0^m-(ℓ(μ)+ℓ(λ))). In fact it is not hard to show that (Ψ_p_1^(m))_m≥ 1 converges on the former sequences. However, this is not a sufficient argument to show the convergence of the (Ψ_p_1^(m))_m≥ 1. In order to obtain a well-defined operator on _as^+ from the sequence of operators (Ψ_p_1^(m))_m≥ 1 one needs to show that given an arbitrary convergent sequence (f_m)_m≥ 1 the corresponding sequence (Ψ_p_1^(m)(f_m))_m≥ 1 converges. Therefore, the difficulty in the following proof is to show that the Ψ_p_1^(m) are well behaved in general. The sequence of operators (Ψ_p_1^(m))_m≥ 1 converges to an operator Ψ_p_1: _as^+→_as^+ which is diagonal in the E_(μ|λ) basis with Ψ_p_1(E_(μ|λ)) = κ_μ*λ(q,t) E_(μ|λ). Notice that every element of _as^+ is a finite ℚ(q,t)-linear combination of terms of the form T_σx^λF[X] where σ is a permutation, λ is a partition, and F∈Λ. Therefore, to show that the sequence of operators (Ψ_p_1^(m))_m≥ 1 converges it suffices using Remark <ref> to show that sequences of the form (Ψ_p_1^(m)(T_σx^λF[x_1+ … +x_m]))_m ≥ 1 converge. For m sufficiently large, T_σ commutes with Ψ_p_1^(m) = t^m(Y_1^(m)+ … + Y_m^(m)) so it suffices to consider only sequences of the form (Ψ_p_1^(m)(x^λF[x_1+ … +x_m])) _m ≥ 1 . Let λ be a partition, k := ℓ(λ), F∈Λ, and take m > k. Recall that Y_1^(m)X_1 = t^mY_1^(m)X_1 (<ref>) from which it follows directly that Y_i^(m)X_i = t^mY_i^(m)X_i for all 1 ≤ i ≤ m. Then for all 1≤ i ≤ k we have that since λ_i ≠ 0, t^mY_i^(m)(x^λF[x_1+…+x_m]) = Y_i^(m)(x^λF[x_1+…+x_m]). Therefore, t^m(Y_1^(m)+ … +Y_k^(m))(x^λF[x_1+ … +x_m]) = (Y_1^(m)+ … + Y_k^(m))(x^λF[x_1+ … +x_m]). 
Now since x^λF[x_1+…+x_m] is symmetric in the variables {k+1,…,m} we see that t^m(Y_k+1^(m)+ … + Y_m^(m))(x^λF[x_1+…+x_m]) = (t^m-kT_k⋯ T_1 ω_m^-1T_m-1^-1⋯ T_k+1^-1 + t^m-k-1T_k+1⋯ T_1 ω^-1_mT_m-1^-1⋯ T_k+2^-1 + …+ tT_m-1⋯ T_1 ω_m^-1)(x^λF[x_1+…+x_m]) = (t^m-kT_k⋯ T_1 + t^m-k-1T_k+1⋯ T_1 +…+ tT_m-1⋯ T_1)ω_m^-1(x^λF[x_1+…+x_m]) = (t^m-kT_k⋯ T_1 + t^m-k-1T_k+1⋯ T_1 +…+ tT_m-1⋯ T_1)(x_2^λ_1⋯ x_k+1^λ_kF[qx_1+x_2+ … + x_m]) = (t^m-k+t^m-k-1T_k+1+…+ tT_m-1⋯ T_k+1) ( T_k⋯ T_1x_2^λ_1⋯ x_k+1^λ_kF[qx_1+x_2+ … + x_m] ). Notice that since T_k⋯ T_1x_2^λ_1⋯ x_k+1^λ_kF[qx_1+x_2+ … + x_m] is symmetric in the variables {k+2,…,m} ϵ_k+1^(m)(T_k⋯ T_1x_2^λ_1⋯ x_k+1^λ_kF[qx_1+x_2+ … + x_m]) = T_k⋯ T_1x_2^λ_1⋯ x_k+1^λ_kF[qx_1+x_2+ … + x_m]. Therefore, t^m(Y_k+1^(m)+ … + Y_m^(m))(x^λF[x_1+…+x_m]) = (t^m-k+…+ tT_m-1⋯ T_k+1)ϵ_k+1^(m)(T_k⋯ T_1x_2^λ_1⋯ x_k+1^λ_kF[qx_1+x_2+ … + x_m]) = t(t^m-k-1+ … + 1)ϵ_k^(m)(T_k⋯ T_1x_2^λ_1⋯ x_k+1^λ_kF[qx_1+x_2+ … + x_m]) where the last equality follows from ( t^m-k-1+ t^m-k-2T_k+1+…+ T_m-1⋯ T_k+1/t^m-k-1+ t^m-k-2+… + 1) ϵ_k+1^(m) = ϵ_k^(m). Putting it all together we see that Ψ_p_1^(m)(x^λF[x_1+ … +x_m]) = t^m(Y_1^(m)+…+Y_m^(m))(x^λF[x_1+ … +x_m]) = t^m(Y_1^(m)+…+Y_k^(m))(x^λF[x_1+ … +x_m]) + t^m(Y_k+1^(m)+…+Y_m^(m))(x^λF[x_1+ … +x_m]) = (Y_1^(m)+ … + Y_k^(m))(x^λF[x_1+ … +x_m]) + t(t^m-k-1+ … + 1)ϵ_k^(m)(T_k⋯ T_1x_2^λ_1⋯ x_k+1^λ_kF[qx_1+x_2+ … + x_m]) which by Theorem <ref> and Corollary <ref> converges to (_1+ … + _k)(x^λF[X]) + t/1-tϵ_k(T_k⋯ T_1x_2^λ_1⋯ x_k+1^λ_kF[X+(q-1)x_1]) . Therefore, the limit operator Ψ_p_1:= lim_mΨ_p_1^(m) is well defined. We will now show that the E_(μ|λ) are weight vectors of Ψ_p_1 and compute their corresponding weight values. Let μ∈ and λ∈. By Corollary <ref> we have that E_(μ|λ) = lim_mϵ_ℓ(μ)^(m)(E_μ*λ*0^m-(ℓ(μ)+ℓ(λ))). Therefore, by Proposition 6.21 from <cit.>, Lemma <ref>, and Lemma <ref> it follows that Ψ_p_1(E_(μ|λ)) = lim_mΨ_p_1^(m)(ϵ_ℓ(μ)^(m)(E_μ*λ*0^m-(ℓ(μ)+ℓ(λ)))) = lim_m t^m(Y_1^(m)+…+Y_m^(m))ϵ_ℓ(μ)^(m)(E_μ*λ*0^m-(ℓ(μ)+ℓ(λ))) = lim_mϵ_ℓ(μ)^(m)( t^m(Y_1^(m)+…+Y_m^(m))E_μ*λ*0^m-(ℓ(μ)+ℓ(λ)) ) = lim_mκ_μ*λ^(m-(ℓ(μ)+ℓ(λ)))(q,t)ϵ_ℓ(μ)^(m)(E_μ*λ*0^m-(ℓ(μ)+ℓ(λ))) = κ_μ*λ(q,t)E_(μ|λ). From the proof of Theorem <ref> we see that in particular, for partitions λ we have that Ψ_p_1(𝒜_λ[X]) = t/1-t(1-(1-t)(1-q)B_λ(q,t))𝒜_λ[X] . We saw that in Corollary <ref> 𝒜_λ[X] = (1-t)^ℓ(λ)P_λ[X;q^-1,t] so, following the argument of Haiman in <cit.>, the operator t^-1(1-t)Ψ_p_1 is up to a change of variables equal to Δ'. Therefore, we can view t^-1(1-t)Ψ_p_1 in a certain sense (after changing variables) as extending the operator Δ' from Λ to _as^+. Further, Theorem <ref> does not follow immediately from the work of Ion and Wu in <cit.> and in particular, Ψ_p_1≠_1+_2+… although the latter operator is certainly well defined in a weak sense as a diagonal operator in the E_(μ|λ) basis. The easiest way to see this is to note that _1+_2+… will annihilate Λ whereas Ψ_p_1 acting on the basis 𝒜_λ of Λ has nonzero eigenvalues κ_λ(q,t) ≠ 0. Let Y denote the ℚ(q,t)-subalgebra of End_ℚ(q,t)(_as^+) generated by Ψ_p_1 and _i for i ≥ 1. _as^+ has a basis of Y-weight vectors and every Y-weight space of _as^+ is 1-dimensional. Since Ψ_p_1 is diagonal in the E_(μ|λ) basis, see Theorem <ref>, it commutes with each _i. Therefore, Y is a commutative algebra of operators on _as^+ so it makes sense to ask about its weights in _as^+. 
To show that the Y-weight spaces of _as^+ are 1-dimensional it suffices to show that if (μ^(1)|λ^(1)) ≠ (μ^(2)|λ^(2)) for μ^(1),μ^(2)∈ and λ^(1),λ^(2)∈ with E_(μ^(1)|λ^(1)) and E_(μ^(2)|λ^(2)) having the same -weight then the Ψ_p_1 eigenvalues for E_(μ^(1)|λ^(1)) and E_(μ^(2)|λ^(2)) are distinct. Necessarily, from the proof of Theorem <ref>, if E_(μ^(1)|λ^(1)) and E_(μ^(2)|λ^(2)) have the same -weight then μ^(1) = μ^(2) = μ. Since (μ|λ^(1)) ≠ (μ|λ^(2)) it follows that λ^(1)≠λ^(2) so that (μ*λ^(1)) ≠(μ*λ^(2) ). From Lemma <ref> we then know that κ_μ*λ^(1)≠κ_μ*λ^(2) so lastly by Theorem <ref> we see that the Ψ_p_1 eigenvalues for E_(μ|λ^(1)) and E_(μ|λ^(2)) are distinct. Hence, the Y-weight spaces of _as^+ are 1-dimensional. Theorem <ref> motivates the following definition. For F ∈Λ let Ψ_F: _as^+→_as^+ be the diagonal operator in End_ℚ(q,t)(_as^+) in the {E_(μ|λ): μ∈, λ∈} basis given by Ψ_F(E_(μ|λ)) := F[κ_μ*λ(q,t)]E_(μ|λ). Notice that by construction every operator Ψ_F commutes with the image of every _i ∈ End_ℚ(q,t)(_as^+) since from Corollary <ref> we know that the E_(μ|λ) are a basis of _as^+. For all F ∈Λ we have that Ψ_F = lim_mΨ_F^(m). Trivially, this conjecture holds for F = 1 ∈Λ and Theorem <ref> shows that this conjecture holds for F = p_1. Thus we see that the conjecture holds for F ∈ℚ(q,t)[p_1]. It is easy to extend part of the argument in the proof of Theorem <ref> to show that lim_mΨ_F^(m)(ϵ_ℓ(μ)^(m)(E_μ*λ*0^m-(ℓ(μ)+ℓ(λ)))) = F[κ_μ*λ(q,t)] E_(μ|λ) = Ψ_F(E_(μ|λ)). However, this is not sufficient to prove the conjecture. Similarly, to the proof of Theorem <ref> one needs to know that the sequence of operators (Ψ_F^(m))_m ≥ 1 is well behaved on arbitrary convergent sequences in order to prove convergence to an operator in End_ℚ(q,t)(_as^+). It would be sufficient to show that for every r ≥ 2 the sequence of operators (Ψ_p_r^(m))_m≥ 1 converges since if this sequence converges its limit operator will necessarily agree with Ψ_p_r on the E_(μ|λ) basis. This conjecture would imply the existence of an action of the elliptic Hall algebra <cit.> <cit.> on the space of almost symmetric functions. In independent work, according to private communications, Dongyu Wu has constructed an elliptic Hall algebra on _as^+. Further, in Wu's action P_(0,1) acts identically to Ψ_p_1 up to a scalar.
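We close with a small illustrative computation (ours, using only the explicit weight formulas established earlier in this paper) of how the operator Ψ_p_1 separates weight vectors that the limit Cherednik operators alone cannot distinguish: the functions 𝒜_λ = E_(∅|λ) all have 𝒴-weight zero, and it is the scalars κ_λ(q,t) = ∑_i q^λ_i t^i that tell them apart. A sympy sketch:

from sympy import symbols, expand

q, t = symbols('q t')

def alpha(mu, lam, i):
    # alpha_{(mu|lam)}(i) = q^{mu_i} t^{l(mu)+l(lam)+1-beta_{mu*lam}(i)} if i <= l(mu)
    # and mu_i != 0, and 0 otherwise, as in the corollary stated earlier.
    word = tuple(mu) + tuple(lam)
    if i > len(mu) or mu[i - 1] == 0:
        return 0
    b = sum(1 for j in range(1, i + 1) if word[j - 1] <= word[i - 1]) \
      + sum(1 for j in range(i + 1, len(word) + 1) if word[i - 1] > word[j - 1])
    return q**mu[i - 1] * t**(len(mu) + len(lam) + 1 - b)

def kappa(lam, N=8):
    # truncation of kappa_lambda(q,t) = sum_{i >= 1} q^{lambda_i} t^i for a partition lambda
    padded = list(lam) + [0] * (N - len(lam))
    return sum(q**padded[i - 1] * t**i for i in range(1, N + 1))

print([alpha((1, 1), (1,), i) for i in (1, 2, 3)])   # [q*t**3, q*t**2, 0], matching the examples given earlier
print([alpha((), (2,), i) for i in (1, 2)])          # [0, 0]: A_2 and A_{1,1} share the zero Y-weight...
print(expand(kappa((2,)) - kappa((1, 1))))           # ...but kappa_(2) != kappa_(1,1), so Psi_{p_1} separates them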
http://arxiv.org/abs/2307.04180v1
20230709140850
Lattice path matroidal subdivisions, Positive Tropical Grassmannian and Amplituhedron
[ "Ayush Kumar Tewari", "Ahmed Umer Ashraf" ]
math.CO
[ "math.CO", "math-ph", "math.AG", "math.MP", "52B40, 14T15, 81U99" ]
Lattice path matroidal subdivisions, Positive Tropical Grassmannian and Amplituhedron Ayush Kumar Tewari, Ahmed Umer Ashraf July 9, 2023 =================== We introduce the notion of lattice path matroidal subdivisions, or LPM subdivisions for short, and show that these subdivisions are regular and hence the weight vectors for them lie in the Dressian. This leads us to explore the structure of the set of these weights inside the Dressian, and owing to the fact that lattice path matroids are positroids, we move to the positive Dressian, which in turn is equal to the positive tropical Grassmannian, an object of immense current interest in physics. This is related to the amplituhedron and the positive configuration space, which we describe here, and we wish to explore these connections further. § INTRODUCTION Lattice path matroids (LPM) [We use this abbreviation for lattice path matroid and lattice path matroidal depending on the context] were introduced by Bonin et al. in <cit.>, and matroidal properties including the Tutte polynomial were derived for them. Subsequently, it was proven that they are positroids <cit.> and that they enjoy multiple connections with the positive Grassmannian. Lattice paths in themselves are ubiquitous in various areas of mathematics, for example in combinatorics and representation theory. In our work this feature helps us connect our study not only to various topics in mathematics but also to a recently defined concept in physics, the amplituhedron <cit.>, which is a geometric object encoding information concerning the scattering amplitudes of particles. We begin with the introduction of lattice path matroidal subdivisions, which are matroidal subdivisions with each maximal cell corresponding to a lattice path matroid polytope. The idea for this class of subdivisions comes from lattice path matroid polytope decompositions <cit.>, which form a subclass of the matroid base polytope decompositions studied in detail in <cit.>. Lattice path matroidal decompositions enjoy a unique property: they are obtained iteratively via simple decompositions into two LPMs, each termed a hyperplane split. We harness this property to relate them to the well-known class of split subdivisions. This relation eventually helps us in proving one of our first results. Any LPM subdivision of a lattice path matroid polytope _[P,Q] is regular. Not only are we able to establish regularity for LPM subdivisions, but we also show that they are obtained as common refinements of split subdivisions, which endows these subdivisions with much more structure. We introduce the notion of the LPMfan as the polyhedral fan that corresponds to LPM subdivisions. We discuss the relation of the LPMfan to various well-known polyhedral fan structures which correspond to regular matroidal subdivisions, namely the tropical Grassmannian and the Dressian. Since LPMs are positroids as well, this discussion can also be connected to the positive part of the tropical Grassmannian and the Dressian. We furnish computational examples of both LPM subdivisions and LPMfans for the underlying hypersimplex Δ(k,n), which is an LPM polytope, where k=3,4 and n=6,8 respectively. Postnikov <cit.> led the study of the stratification of the positive Grassmannian into cells that enjoy equivalences with various combinatorial objects, like decorated permutations, reduced plabic graphs, etc. We also put our results into perspective by discussing how our LPM subdivisions correspond to these combinatorial objects. This also helps us bring in connections to the geometric object called the amplituhedron, introduced first by Arkani-Hamed et
al. <cit.> to study problems concerning scattering amplitudes in high energy physics. We point the reader to <cit.> for a full account of the connections between scattering amplitudes in physics and the geometry of the Grassmannian. Our discussion mostly revolves around the connections between the positive Grassmannian, the positive tropical Grassmannian and the amplituhedron. Firstly, for the m=2 amplituhedron, we provide a purely matroidal treatment of the definition of BCFW[the abbreviation is after the names of the physicists Britto, Cachazo, Feng, and Witten]-style recurrence relations for positroid dissections of the hypersimplex, in the form of Theorem <ref>. These positroidal dissections were introduced in <cit.>, and it is shown in <cit.> that via T-duality they are also related to certain dissections of the m=2 amplituhedron 𝒜_n,k,2. Secondly, for the m=4 amplituhedron, it is shown in <cit.> that BCFW cells of the amplituhedron correspond to noncrossing lattice paths in a certain lattice rectangle. Additionally, a recent work <cit.> shows that BCFW cells provide a triangulation of the amplituhedron 𝒜_n,k,4. In light of these results, we prove the following, which is the first result highlighting the relation between the BCFW triangulation of 𝒜_n,k,4 and positroidal dissections of a certain hypersimplex. Each triangulation of the amplituhedron 𝒜_n,k,4 into (k, n)-BCFW cells provides a positroid dissection {Γ_i} of the hypersimplex Δ(k,n-4), where each BCFW cell corresponds to a lattice path matroid polytope Γ_i. Lastly, <cit.> discusses the relation between positroidal cells of the positive Grassmannian and the positive configuration space, via the Chow quotient of the Grassmannian. We also encounter a special class of LPMs throughout our study, namely snakes, which are minimal, and we use this property to provide examples of clusters for them; this points to intricate connections between LPMs and the underlying cluster algebra, which we wish to explore further in subsequent work. This minimality of snakes also helps us partially answer a question asked in <cit.>. We would like to make special mention of the various salient features which we encounter for snakes, and we state them as follows: Snakes are lattice path matroids, positroids, minimal, binary, indecomposable, series-parallel, graphical <cit.>, order, alcoved [We do acknowledge that order and alcoved are properties satisfied by the matroid polytopes of snakes.] <cit.> In Section <ref> we introduce all the basic definitions which we will use in further discussions. Section <ref> introduces the notion of LPM subdivisions, and Theorem <ref> is proven there. Section <ref> describes the relation between the positive tropical Grassmannian and LPM subdivisions. Section <ref> collects all our computational examples, which are mostly LPM subdivisions and LPMfans for the LPM polytopes Δ(3,6) and Δ(4,8). Finally, we discuss probable future problems and open questions in Section <ref>. § PRELIMINARIES We would like to guide readers unfamiliar with the concepts in this section to <cit.> and <cit.> for further details. A matroid of rank k on the set [n] := {1,2, …, n} is a nonempty collection ⊆[n]k of k-element subsets of [ n ], called bases of , that satisfies the exchange axiom: For any I , J ∈ and i ∈ I, there exists j ∈ J such that I ∖{ i }∪{ j }∈.
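As an illustrative aside (the function name and the small examples below are ours, not taken from the cited sources), the exchange axiom as stated can be checked by brute force for any explicitly given collection of k-subsets. A minimal Python sketch:

from itertools import combinations

def satisfies_exchange(bases):
    # Checks the exchange axiom exactly as stated above: for all I, J in the
    # collection and every i in I, there is some j in J with (I \ {i}) u {j}
    # again in the collection.
    B = {frozenset(b) for b in bases}
    return all(any((I - {i}) | {j} in B for j in J)
               for I in B for J in B for i in I)

print(satisfies_exchange(combinations(range(1, 5), 2)))   # True: the uniform matroid U_{2,4}
print(satisfies_exchange([(1, 2), (3, 4)]))               # False: exchange fails between {1,2} and {3,4}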
A matroid is called realizable if it can be represented by elements of a matrix over some field 𝕂. A positroid of rank k is a matroid that can be represented by a k × n-matrix with non-negative maximal minors. The Grassmannian (k,n) is the parameterization of the family of all k-dimensional subspaces of n-dimensional vector space in 𝕂^n. It also possesses a smooth projective variety structure, corresponding to the vanishing set of the Plücker ideal ℐ_k,n. An element in the Grassmannian (k,n) can be understood as a collection of n vectors v_1, , v_n∈𝕂^k spanning the space 𝕂^k modulo the simultaneous action of (k,n) on the vectors, where the vectors v_i are the columns of a k × n-matrix A. Then an element V ∈(k,n) represented by A gives the matroid _V whose bases are the k-subsets I ⊂ [n] such that _I(A) ≠ 0. Here, _I(A) denotes the determinant of A_I, the k × k submatrix of A with the column set I. An element V ∈(k,n) is termed as totally non-negative if _I(V) ≥ 0, for all I ∈[n]k. The set of all totally non-negative V ∈(k,n) is the totally non-negative Grassmannian ^≥ 0(k,n); abusing notation, we refer to ^≥ 0(k,n) as the positive Grassmannian <cit.>. Tropical geometry is the study of polynomials over the tropical semiring 𝕋 = {ℝ∪∞, max, + }. Given e = (e_1 , , e_N ) ∈ℤ^N_≥ 0 , we let x^e denote x_1^e_1 x_N^e_N. For a polynomial f = ∑_e ∈ E a_ex^e, we firstly associate a corresponding tropical polynomial f where the binary operations are replaced by tropical addition and multiplication respectively, and we denote by Trop(f) the tropical hypersurface associated to f which is the collection of all points where the maxima is achieved at least twice. Let E = E^+∪ E^-⊆ℤ^N_≥ 0 , and let f be a nonzero polynomial with real coefficients such that f = ∑_e ∈ E^+a_ex^e - ∑_e ∈ E^-a_ex^e, where all of the coefficients a_e are non-negative real numbers. Then ^+(f) denotes the positive part of (f), and the set of all points (x_1 , , x_N) such that, if we form the collection of numbers ∑ e_ix^i for e ranging over E, then the minimum of this collection is not unique and furthermore is achieved for some e ∈ E^+ and some e ∈ E^- <cit.>. The tropical Grassmannian (k,n) is the intersection of the tropical hypersurfaces (f), where f ranges over all elements of the Plücker ideal ℐ_k,n which is generated by the quadratic Plücker relations <cit.>. The Dressian (k,n) is the intersection of the tropical hypersurfaces (f), where f ranges over all three-term Plücker relations. Similarly, the positive tropical Grassmannian ^+(k,n) is the intersection of the positive tropical hypersurfaces ^+(f), where f ranges over all elements of the Plücker ideal. The positive Dressian ^+(k,n) is the intersection of the positive tropical hypersurfaces ^+(f), where f ranges over all three-term Plücker relations. The underlying matroid for the definitions of the tropical Grassmannian and Dressian is the uniform matroid _k,n. However, the notion of Dressian can be extended to arbitrary matroids with the definition of a local Dressian. The local Dressian () is defined as the tropical pre-variety given by the set of quadrics obtained from the three-term Plücker relations by setting the variables p_B to zero, where B is not a basis of <cit.>. A subdivision Σ of a polytope P in ℝ^d is said to be regular if there exits a weight vector w such that if the vertices of P are lifted to heights provided by w in ℝ^d+1 and subsequently the lower convex hull is projected back to ℝ^d, then the subdivision Σ is retrieved. 
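The lifting construction in this definition is easy to carry out on a computer: lift the vertices, take the convex hull, and keep the lower facets. The following small sketch is our illustration only (it relies on scipy, assumes a full-dimensional polytope, so for a polytope such as a hypersimplex one would first project to a full-dimensional coordinate chart, and Qhull may further triangulate non-simplicial lower cells); it recovers the subdivision of a unit square induced by lifting one vertex.

import numpy as np
from scipy.spatial import ConvexHull

def lower_cells(vertices, weights):
    # Lift each vertex v to (v, w(v)), take the convex hull, and return the
    # vertex sets of the lower facets; their projections back to R^d are the
    # cells of the regular subdivision induced by the weight vector w.
    lifted = np.column_stack([np.asarray(vertices, dtype=float),
                              np.asarray(weights, dtype=float)])
    hull = ConvexHull(lifted)
    cells = []
    for facet, eq in zip(hull.simplices, hull.equations):
        if eq[-2] < 0:  # outward normal points downward: a lower facet
            cells.append(sorted(int(v) for v in facet))
    return cells

# Unit square with weights (0, 0, 0, 1): the lower hull has two facets, so the
# induced regular subdivision splits the square along a diagonal.
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(lower_cells(square, [0, 0, 0, 1]))   # e.g. [[0, 1, 2], [1, 2, 3]], up to ordering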
A tropical polynomial with Newton polytope P defines a tropical hypersurface that is dual to a regular subdivision of P. We point the reader to <cit.>, <cit.> for further details about this duality. We recall details about a special class of subdivisions that appear in our work. A split subdivision is a subdivision with exactly two maximal cells <cit.>. Two splits S_1 and S_2 are said to be compatible if the hyperplanes along the split edges do not intersect in the interior of the polytope. We now introduce definitions dealing with lattice path matroids. Let E be a set (which is going to be the ground set of the matroid), and let 𝒜 = (A_j : j ∈ J ) be a set system over E, that is, a multiset of subsets of E. A transversal of 𝒜 is a set { x_j : j ∈ J } of |J| distinct elements such that x_j∈ A_j for all j ∈ J. A partial transversal of 𝒜 is a transversal of a set system of the form (A_k : k ∈ K) with K a subset of J. A transversal matroid is a matroid whose independent sets are the partial transversals of some set system 𝒜 = (A_j : j ∈ J), and 𝒜 is called a presentation of the transversal matroid. We denote this matroid by [𝒜]. The bases of a transversal matroid are the maximal partial transversals of 𝒜 <cit.>. We now recall the definition of a lattice path matroid as a certain kind of transversal matroid <cit.>. Consider an r× (n-r) rectangular lattice grid _r,n. This consists of all the lattice points { (a, b) : 0 ≤ a ≤ n-r ,  0 ≤ b ≤ r}, and all the edges between neighboring lattice points. It can also be thought of as the Young diagram <cit.> consisting of r· (n-r) unit squares of the partition λ = (n-r, n-r, …, n-r) with r equal parts. An NE-path over _r, n is a path from the point (0,0) to the point (n-r, r) each of whose steps is either a step in the (1,0) direction (i.e. an E-step) or a step in the (0,1) direction (i.e. an N-step). Note that for each edge in _r,n, its position in any NE-path is the same; hence we can denote the edge by this position. Using this observation, we can denote each NE-path by the sequence of positions of its north steps. Let P and Q be two NE-paths on _r, n, denoted by P = p_1p_2… p_r and Q = q_1 q_2 … q_r; then the set of all NE-paths between P and Q forms a matroid. That is, [P,Q] = {{i_1, i_2, …, i_r}: p_j ≤ i_j ≤ q_j for j=1,…, r } Sometimes, we denote the matroid [P,Q] by just [J], where J is the skew Young diagram bounded by P and Q. An example of a lattice path matroid is depicted in Figure <ref>, where the edges in the North direction are marked with their respective indices. § LPM SUBDIVISIONS We use the following definition from <cit.>. We call a lattice path matroid [P,Q] a snake if it has at least two elements, it is connected, and the strip contained between the paths P and Q does not contain any interior lattice point. Snakes are also referred to as border strip matroids. Snakes have the minimal number of bases that a connected rank-r matroid on n elements can have. That is why they are also called minimal matroids <cit.>. In contrast to this, uniform matroids are maximal with respect to this property. We introduce a new class of subdivisions as follows: Let [P,Q] be a lattice path matroid and _[P,Q] be its matroid polytope. A subdivision Σ of _[P,Q] is called a lattice path matroidal (LPM) subdivision if all maximal cells of Σ are lattice path matroid polytopes.
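Before turning to decompositions of _[P,Q], we record a quick computational illustration (the code and the particular example are ours): the defining description of [P,Q] recalled above can be implemented directly by filtering r-subsets against the N-step positions of P and Q.

from itertools import combinations

def lpm_bases(P, Q, n):
    # P and Q are the increasing sequences of N-step positions of the bounding
    # paths (so p_j <= q_j for all j); a sorted r-subset {i_1 < ... < i_r} of [n]
    # is a basis of M[P,Q] iff p_j <= i_j <= q_j for every j.
    r = len(P)
    return [S for S in combinations(range(1, n + 1), r)
            if all(P[j] <= S[j] <= Q[j] for j in range(r))]

# The snake with P = (1, 2) and Q = (2, 4) inside the 2 x 2 grid:
print(lpm_bases((1, 2), (2, 4), 4))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4)] -- five bases, the minimum possible
# for a connected rank-2 matroid on 4 elements, in line with the minimality
# property of snakes noted above.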
In <cit.>, matroid base polytope decompositions are studied in detail and it is shown that for a lattice path matroid [P,Q] which is not a snake, its matroid polytope _[P,Q] admits a decomposition into lattice path matroids polytopes s.t _[P,Q] = ⋃_i=1^t_[P_i,Q_i] where each _[P_i,Q_i] is also a lattice path matroid base polytope for some lattice path matroid M[P_i,Q_i], and for each 1 ≤ i ≠ j ≤ t, the intersection _[P_i,Q_i]∩_[P_j,Q_j] is a face of both _[P_i,Q_i] and _[P_j,Q_j]. A hyperplane LPM split decomposition is a decomposition in exactly two lattice path matroid polytopes, i.e., t=2 and as a consequence of <cit.> we also know that these two LPM polytopes are full-dimensional. We feel that it is a good time to recall the notion of a polytopal subdivision <cit.>, For a polytope P ∈ℝ^d, a (polyhedral) subdivision Σ is a polytopal complex whose vertices are the vertices of P, and that covers P. Σ can be understood as a collection of faces F, such that for any two faces F_i and F_j, F_i∩ F_j∈Σ. It is pretty obvious from the definition above that the notions of LPM decompositions and LPM subdivisions coincide and we state this in the form of Corollary <ref> Let Σ' = (_[P_1,Q_1]), _[P_t,Q_t]) be a decomposition of _[P,Q] into lattice path matroid polytopes. Then Σ' coincides with the subdivision Σ of _[P,Q], where each maximal cell is C_i = _[P_i,Q_i]. The LPM subdivision corresponding to a hyperplane LPM split decomposition is called a split subdivision. The subsequent subdivision is obtained iteratively via split subdivisions which correspond to hyperplane LPM split decomposition. We take this opportunity to specify our terminology so as to minimize any confusion in the text; split with the prefix 'hyperplane' would always refer to the LPM subdivision of a lattice path matroid into two LPM, whereas split with the suffix 'hyperplane' would refer to the hyperplane defining a split subdivision. From now on our discussion would mostly focus on the LPM subdivisions, however, because of the equivalence in Corollary <ref> most of our results also extend to LPM decompositions, unless otherwise stated. The property of being obtained via iterative hyperplane LPM split decompositions is unique to the LPM decompositions described in <cit.> and is different in this aspect from the concept of matroid decompositions defined in <cit.> which define a new quasisymmetric invariant for matroids which acts as a valuation on decompositions of matroid polytopes. Although, Kapranov <cit.> showed that for rank 2 matroids such matroid decompositions can be obtained via hyperplane split decompositions. We recall first this technical result regarding split subdivisions, Split subdivisions are regular. Let S be a split subdivision of a polytope P. We provide a canonical weight vector for this subdivision in the following way. Let a be the normal vector to the split hyperplane H_S. We define the weight vector for S as w_S: Vert(P) →ℝ such that w_S(v) = |av| if v ∈ S_+ 0 if v ∈ S_- It is clear that this weight function is well-defined and induces the split subdivision S. We now state a technical result concerning split LPM subdivisions. We call an LPM polytope _[J]⊆_[P,Q] a truncated LPM polytope if _[J] = _[P,Q]∖ (_[P,Q]∩ H_-), where H_- is a halfspace defined by the split hyperplane H of a split subdivision (cf. Figure <ref>). A split subdivision of a truncated LPM polytope _[J] into two LPM can be extended to a split subdivision of the LPM polytope _[P,Q] into two LPM. 
We consider a split S of the LPM polytope _[P,Q]. By Lemma <ref> we know there exists a weight vector w_S of the form w_S(v) = |av| if v ∈ S_+ 0 if v ∈ S_- where a is the normal vector to the split hyperplane H_S. Similarly, let us consider a split S' of the truncated LPM polytope _[J]. Again by Lemma <ref> we know that restricted to _[J] there exists a weight vector w_S' of the form w_S'(v) = |bv| if v ∈ S'_+ 0 if v ∈ S'_- where b is the normal vector the split hyperplane H_S' and we choose S'_- such that S_-⊆ S'_-. Now we notice that there exists an extension of the weight vector w_S' to w'_S' which is defined as follows w'_S'(v) = w_S'(v) if v ∈_[J] 0 if v ∈_[P,Q]∩ S_- For an LPM polytope _[P,Q], the split subdivisions induced from a hyperplane split decomposition are compatible. We proceed by proving the claim for two arbitrarily chosen split subdivisions. Let S_1 and S_2 be two split subdivisions of _[P,Q]. Since split LPM subdivisions are defined in an iterative manner, therefore without loss of generality we assume that S_2 restricted to the truncated LPM polytope _[J] = _[P,Q]∖ (_[P,Q]∩ S_1_-) defines a split subdivision for _[J]. But this implies that the split hyperplane H_S_2∈_[J]. Therefore, the split hyperplane H_S_1 and H_S_2 cannot meet in the interior of _[P,Q]. Hence, the splits S_1 and S_2 are compatible. As for the case of the hypersimplex Δ(k,n) which is also a LPM polytope, we already know that any two splits are always compatible <cit.>. The compatibility of splits which provide the iterative description of LPM subdivisions also shows that LPM are split matroids, introduced by Joswig and Schroeter in <cit.>. Any LPM subdivision Σ of a lattice path matroid polytope _[P,Q] is regular. Let σ be the LPM decomposition corresponding to Σ. We know that σ can be obtained via iterative hyperplane LPM split decompositions. These hyperplane LPM split decompositions correspond to split subdivisions. Let {S_1, S_2, S_n} be the sequence of split subdivisions which correspond to Σ. We note that {S_2,, S_n} are splits for the corresponding truncated LPM polytope _[J]. By Lemma <ref> we know that the splits { S_2, S_3, S_n} can be extended to split subdivisions for _[P,Q] and let {S'_2, S'_3, S'_n} be the corresponding split subdivisions on _[P,Q] for Σ. We see that Σ is the common refinement of the splits {S_1, S'_2, S'_n} and as we know from Lemma <ref> that these splits are compatible, therefore this common refinement is well defined. We now invoke the Split Decomposition Theorem <cit.> to conclude that there exists a canonical weight vector w = ∑_S'α^w_w_S' w_S' which induces Σ, where the sum runs over all splits and α^w_w_S represents the coherency index <cit.>. Hence, Σ is a regular subdivision. For the hypersimplex Δ(3,6) we describe a LPM subdivision Σ^ in Section <ref>, illustrated in Figure <ref>, where we see that in the corresponding LPM polytope decomposition (M_1 , , M_6) shown in Figure <ref>, is obtained as common refinements of four splits namely S_1, S_2, S_3 and S_4. The weight which induces Σ^ is w_Σ^ = {0,0,0,0,0,0,0,1,1,2,0,0,0,1,1,2,2,2,3,5} and w_S_1 = {0,0,0,0,0,0,0,1,1,1,0,0,0,1,1,1,1,1,1,2} w_S_2 = {0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,1} w_S_3 = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1} w_S_4 = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1} are the weights which induce the splits S_1, S_2, S_3 and S_4. 
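These five explicit vectors can be checked against each other directly: coordinate by coordinate, the four split weights add up to the weight w_Σ^LPM, in accordance with the decomposition recorded next. A minimal numerical check, assuming only that all five vectors are written in the same coordinate order:

```python
w_sigma = [0,0,0,0,0,0,0,1,1,2,0,0,0,1,1,2,2,2,3,5]
w_S = [
    [0,0,0,0,0,0,0,1,1,1,0,0,0,1,1,1,1,1,1,2],  # w_{S_1}
    [0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,1],  # w_{S_2}
    [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1],  # w_{S_3}
    [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],  # w_{S_4}
]
# componentwise sum of the four split weights equals w_sigma
assert [sum(col) for col in zip(*w_S)] == w_sigma
```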
With this, we see an example of the result described in Theorem <ref>, with the split decomposition in the following form, w_Σ^ = w_S_1 + w_S_2 + w_S_3 + w_S_4 Let w_Σ be a weight vector for an LPM subdivision Σ of a lattice path matroid polytope _[P,Q]. Then w_Σ∈([P,Q]). Since w_Σ induces a regular and matroidal subdivsion Σ, therefore by <cit.> w_Σ lies in the Dressian ([P,Q]). We know that the Dressian is endowed with two polyhedral fan structures; one coming from the tropical prevariety definition with points satisfying Plücker relations and termed as the Plücker fan structure <cit.> on the Dressian. The other structure termed as the secondary fan structure <cit.> which comes by virtue of being a subfan of the secondary fan. Moreover, we know that these two fan structures coincide <cit.>. We now have the required setup to describe a new polyhedral fan structure for LPM subdivisions. We begin this exploration with the following definition, Let [P,Q] be a lattice path matroid. We define the LPMfan([P,Q]) to be the polyhedral fan which is the collection of all weight vectors w such that, w is a weight vector for an LPM subdivision of [P,Q]. Two weight vector w_1 and w_2 lie in the same cone C if the LPM subdivisions Σ_1 and Σ_2 are same. Clearly, LPMfan([P,Q]) ⊆Dr([P,Q]) ⊆Secfan(P_[P,Q]) where all inclusions represent inclusions as subfan. Additionally, from the definition of LPM subdivisions, given that they are obtained via refinement of split subdivisions, this makes the LPMfan sit as a subfan inside the split complex Split(P_[P,Q]), which is an abstract simplicial complex defined on the set of compatible splits of _[P,Q] <cit.>. Hence, we get this refined containment relation of subfans, LPMfan([P,Q])) ⊆Split(P_[P,Q]) ⊆Dr([P,Q]) ⊆Secfan(P_[P,Q]) An important observation is that the hypersimplex Δ(k,n) is a lattice path matroid polytope and hence all our results for LPM polytopes follow in this case, LPMfan(k,n) ⊆Split(Δ(k,n)) ⊆(k,n) ⊆Secfan(Δ(k,n)) An important avenue of research has been to understand the structure of the Dressian (k,n), particularly for certain low values of k and n, namely (3,6), (3,7), <cit.> and (3,8) <cit.>, etc. We describe LPMfans for certain values of k,n and discuss the calculations in Section <ref>. § POSITIVE TROPICAL GRASSMANNIAN AND LPM SUBDIVISIONS In this section, our aim is to highlight the consequences of the fact that LPMs are positroids, and towards the end we also are able to provide an answer to a question asked concerning finest matroidal subdivisions of the hypersimplex in <cit.>. Since it is a major theme for this section we recall the result from <cit.> which shows us that lattice path matroids are positroids, upon which we build further in this section. A lattice path matroid is a positroid. Let [P,Q]) be a LPM. For the result to be true, it is sufficient to construct a k × n matrix A such that (A_I) = 0 I ∈[n]k∖[P,Q] α I ∈[P,Q] where α > 0. Such a matrix can be constructed as follows. Let A = (a_i,j)^k,n_i,j=1,1 be the k × n Vandermonde matrix. Set a_i,j = 0 ∀ j ∈ [P_i,Q_i], where P_i and Q_i represent the i^th north step in the lattice paths P and Q, respectively. So A has the following form, a_i,j = x_i^j-1 if P_i≤ j ≤ Q_i 0 otherwise Assign values to variables x_1, , x_k such that x_1 > 1 and x_i+1 = x_i^k^2∀ i ∈ [k-1]. We denote the submatrix A_[1, , i][c_1, , c_i] as a submatrix of A which has rows indexed from 1 to i and columns indexed from c_1 to c_i. 
We have (A_I) > 0 if and only if A_[1, , k]I has nonzero diagonal entries, which happens if and only if I ∈[P,Q]). Lusztig <cit.> and Postnikov<cit.> introduced the notion of positivity for Grassmannians. This notion extends naturally to the tropical Grassmannian and Dressian <cit.>. In <cit.> and independently in <cit.>, the authors prove the following equality between ^+ Gr(k,n) and Dr^+(k,n). The positive tropical Grassmannian ^+(k,n) equals the positive Dressian ^+(k,n). A generalization of this theorem to the case of positive local Dressian with respect to a positroid is provided in <cit.>. An important parameterization of points residing in the positive Dressian is explained in this result, Let Σ be a regular subdivision of Δ(k,n) induced by a weight vector w_Σ. Then the following are equivalent: 1. w is a positive tropical Plücker vector. 2. Every face of Σ is a positroid. The generalization of this to local positive Dressian is provided again in <cit.>. With this parameterization, we conclude that a point inducing a LPM subdivision resides in the positive Dressian. Let Σ be an LPM subdivision of _[P,Q] and let w_Σ be the weight vector for Σ. Then w ∈^+([P,Q]) = ^+([P,Q]). We know that a point w lies in the positive Dressian if all the maximal cells of the subdivision induced by this point as a weight vector on _[P,Q] are the matroid polytopes of a positroid, i.e., w induces a positroidal subdivision <cit.>. We know that LPM are positroids, hence it also induces a positroidal subdivision, and therefore w ∈^+([P,Q]) = ^+([P,Q]). Another important result proven in <cit.> is about the classification of the finest positroidal subdivision of the hypersimplex Δ(k,n). Let Σ be a regular positroidal subdivision of Δ(k,n). Then the following are equivalent: 1. Σ is a finest subdivision. 2. Every facet of Σ is the matroid polytope of a series-parallel matroid. 3. Every octahedron in Σ is subdivided. Along with the classification, <cit.> also provides the exact number of maximal cells in a finest positroidal subdivision of Δ(k,n), Every finest positroidal subdivision of Δ(k,n) has exactly n-2k-1 facets. We also recall the following classification of connected positroids which are series-parallel, A connected positroid is series-parallel if and only if it has no uniform matroid _2,4 as a minor. In light of these results, we provide results about positroidal subdivisions of Δ(k,n) obtained from LPM. We begin with our first technical result concerning snakes, Snakes are series-parallel matroid. We acknowledge that the uniform matroid _2,4 is also an LPM as shown in Figure <ref>. Clearly, _2,4 has an interior lattice point and therefore cannot be a minor of a lattice path matroid which is a snake. Therefore, by Lemma <ref> snakes are series-parallel matroid. We also acknowledge that another proof of this result is present in <cit.>. With Lemma <ref> and Theorem <ref>, we state the following result Let Σ be an LPM subdivision of Δ(k,n) such that the underlying matroid of each maximal cell is a snake. Then, Σ is a finest positroidal subdivision of Δ(k,n) and has exactly n-2k-1 facets. With Theorem <ref> and Lemma <ref>, we also are able to provide a partial answer to Question 6.2 posed in <cit.>, [Question 6.2 <cit.>] Are all cells in the finest matroid subdivision of a hypersimplex, matroid polytopes of indecomposable matroids? The authors show that the answer to this question is affirmative in the case when the hypersimplex is Δ(2,n) <cit.>. 
However, we know of explicit counterexamples provided in <cit.> which show that there exist finest matroidal subdivisions of certain hypersimplices, whose cells do not correspond to indecomposable matroids. We state some technical definitions before stating the partial answer. We state the following classification for binary matroids; which are matroids representable over the field with two elements. A matroid is said to be binary if and only if it has no minor isomorphic to the uniform matroid _2,4. A matroid is said to be indecomposable if and only if its polytope does not allow a non-trivial matroid subdivision. Therefore, we obtain Corollary <ref> as an answer to Question <ref>, when restricted to the case of positroidal subdivisions of the hypersimplex. The cells of the finest positroidal subdivision of Δ(k,n) correspond to binary matroids. In particular, they are indecomposable. We know from Theorem <ref> that maximal cells of the finest positroidal subdivisions of Δ(k,n) correspond to connected series-parallel positroids and by Lemma <ref> we know that they do not have U_2,4 as a minor and therefore are also binary matroids. With Lemma <ref> it is clear that the corresponding fan structure for LPM subdivisions also resides as a subfan inside the positive Dressian LPMfan([P,Q])) ⊆Dr^+([P,Q])) = Trop^+Gr([P,Q])) LPMfan(Δ(k,n)) ⊆Dr^+(k,n) = Trop^+ Gr(k,n) Also, in <cit.> a third fan structure on the positive Dressian Dr^+() is defined as the positive fan structure. This fan structure is based on the underlying cluster algebra, studied in detail in <cit.>. We refer the reader to <cit.> for basic details concerning cluster algebras. Our aim here is to highlight the third fan structure on the positive Dressian that is induced via these clusters, although they will emerge later again in our discussion concerning minimal positroids and positive configuration space in Section <ref>. We define the notion of a cluster associated with a matroid <cit.>, A cluster 𝒞 for a matroid is a subset of that indexes a seed in the cluster structure of the cluster algebra isomorphic to ℂ[π_], where ℂ[π_] is the coordinate ring associated to the positroid variety. The positive fan structure on Dr^+() is the fan whose cones are the images of the domains of linearity for a positive parameterization by a cluster 𝒞. Two points lie in the same cone of Dr^+(), if they determine the same common domains of linearity for all the functions p_J, J ∈. The authors in <cit.> also prove that this new fan structure coincides with the previous two fan structures The three fan structures on ^+() coincide. With the sub-fan relation in place <ref> The three fan structures on LPMfan(_[P,Q]) coincide. We also want to highlight that matroid decompositions are invariant under matroid duality, which is also reflected in our description of the LPMfan, meaning if a k-dimensional cone C in the LPMfan(_[P,Q]) corresponds to an LPM decomposition _t[P^t,Q^t], then there exists a k-dimensional cone C' such that it represents the LPM decomposition {_t^*[P^t,Q^t]}, where * represents the matroid dual. This fact can be verified in the case of Δ(3,6) from Figure <ref>. § COMPUTATIONS FOR LPM POLYTOPE TEXT In this section we look at some computational examples, concentrating on the case of Δ(k,n) for k=3,4 and n=6,8 respectively. We use <cit.> for our computations. §.§ Computations for LPM polytope Δ(3,6) Figure <ref> illustrates an LPM subdivision Σ^ of Δ(3,6) with the lattice path matroids corresponding to the maximal cells also shown. 
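The weight vectors reported in this subsection have one coordinate for each of the 20 vertices e_I of Δ(3,6), that is, for each basis I of 𝒰_{3,6}. The precise ordering of these coordinates is the one produced by the software we used and is not spelled out here; the short sketch below merely generates the index set (in lexicographic order, which need not coincide with the ordering of the vectors that follow).

```python
from itertools import combinations

# One coordinate per vertex e_I of Delta(3,6), i.e. per 3-subset I of {1,...,6}.
vertex_labels = list(combinations(range(1, 7), 3))
assert len(vertex_labels) == 20
```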
We also calculate the weight vector w which induces this subdivision w = { 0,0,0,0,0,0,0,1,1,2,0,0,0,1,1,2,2,2,3,5} We illustrate the LPM polytope decomposition which corresponds to the subdivision in Figure <ref> and we see the truncated LPM polytope after each iterative step of taking a hyperplane split decomposition in Figure <ref>. We also see that Σ^LPM corresponds to a metric tree arrangement shown in Figure <ref>. It is easy to see that under the permutation 1 → 1, 2 → 5, 3 → 3, 4 → 2, 5 → 4, 6 → 6 this tree arrangement permutes to the tree arrangement shown in Figure <ref> which corresponds to Cone_4 <cit.> in the classification of all maximal cones of Dr(3,6) <cit.>. §.§ Decorated permutations and reduced plabic graphs We now connect our computations to some other parameterization of the positive Grassmannian, namely decorated permutations and reduced plabic graphs, and we rely on <cit.> for most of our definitions in this subsection. A decorated permutation of [n] is a bijection π : [n] → [n] whose fixed points are each colored either black or white. A black fixed point i is denoted by π(i) = i, and a white fixed point i by π(i) = i. An anti-excedance of the decorated permutation π is an element i ∈ [n] such that either π^-1(i) > i or π(i) = i. A decorated permutation on [n] is of type (k, n) if it has k anti-excedances. We now establish the connection between decorated permutations and positroid cells of the positive Grassmanians. Given a k × n matrix C = (c_1 , , c_n) written as a list of its columns, a decorated permutation π := π_C is associated to C as follows. Set π(i) := j to be the label of the first column j such that c_i∈spanc_i+1 , c_i+2 , , c_j. If c_i is the all-zero vector, it is called a loop and if c_i is not in the span of the other column vectors, it is called a coloop. The associated positroid cell to this decorated permutation is defined as S_π = {C ∈Gr(k,n)^≥ 0 | π_C = π} Postnikov showed that S_π is a cell, and that the positive Grassmannian Gr(k,n)^≥ 0 is the union of cells S_π where π ranges over decorated permutations of type (k, n) <cit.>. A plabic graph is an undirected planar graph G drawn inside a disk (considered modulo homotopy) with n boundary vertices on the boundary of the disk, labeled 1, , n in clockwise order, as well as some internal vertices. Each boundary vertex is incident to a single edge, and each internal vertex is colored either black or white. If a boundary vertex is incident to a leaf (a vertex of degree 1), it is called a lollipop. A perfect orientation 𝒪 of a plabic graph G is a choice of orientation of each of its edges such that each black internal vertex u is incident to exactly one edge directed away from u; and each white internal vertex v is incident to exactly one edge directed toward v. A plabic graph is called perfectly orientable if it admits a perfect orientation. Let G_𝒪 denote the directed graph associated with a perfect orientation 𝒪 of G. The source set I_𝒪⊆ [n] of a perfect orientation 𝒪 is the set of i which are sources of the directed graph G_𝒪. Similarly, if j ∈I_𝒪 := [n] - I_𝒪, then j is a sink of 𝒪. The following result links positroids with plabic graphs <cit.>. Let G be a plabic graph of type (k, n). Then we have a positroid M_G on [n] defined by M_G = { I_𝒪 | 𝒪 is a perfect orientation of G } where I_𝒪 is the set of sources of 𝒪. Moreover, every positroid cell has the form S _M_G for some plabic graph G. 
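The assignment C ↦ π_C described above can be carried out naively with rank computations, reading the columns cyclically as is standard. The sketch below is our own illustration (floating-point rank tests, generic real input assumed); loops and coloops are returned as fixed points without recording their colours.

```python
import numpy as np

def decorated_permutation(C, tol=1e-9):
    """pi_C for a k x n matrix C: pi(i) is the first column j, read cyclically
    to the right of i, with c_i in span(c_{i+1}, ..., c_j)."""
    k, n = C.shape
    pi = {}
    for i in range(n):
        ci = C[:, i]
        if np.linalg.norm(ci) < tol:                  # zero column: loop
            pi[i + 1] = i + 1
            continue
        for step in range(1, n):
            cols = [(i + s) % n for s in range(1, step + 1)]
            A = C[:, cols]
            if np.linalg.matrix_rank(np.column_stack([A, ci]), tol=tol) == \
               np.linalg.matrix_rank(A, tol=tol):
                pi[i + 1] = cols[-1] + 1              # first spanning column
                break
        else:                                          # never spanned: coloop
            pi[i + 1] = i + 1
    return pi
```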
If a plabic graph G is reduced <cit.> we have that S_M_G = S_π_G , where π_G is the decorated permutation defined as follows. Let G be a reduced plabic graph with boundary vertices 1, , n. For each boundary vertex i ∈ [n], we follow a path along the edges of G starting at i, turning (maximally) right at every internal black vertex, and (maximally) left at every internal white vertex. This path ends at some boundary vertex π(i). The fact that G is reduced implies that each fixed point of π is attached to a lollipop; we color each fixed point by the color of its lollipop. This defines a decorated permutation, called the decorated trip permutation π_G = π of G. In <cit.>, the following result elaborates on the way to compute the associated decorated permutations of an LPM. Let I and J be two lattice paths starting at the origin and terminating at (k,n-k), such that I never crosses J. Let I = {i_1 < < i_k} and J = { j_1 < < j_k}∈[n]k. Denote [n] ∖ J = {d_1 < < d_n - k} and [n] ∖ I = { c_1 < < c_n - k} . Then [I,J] is a positroid and its decorated permutation π_[I,J] is given by: π_ℳ[I,J](j_r) = i_r ∀ r ∈ [k] π_ℳ[I,J](d_r) = c_r ∀ r ∈ [n-k] if π_ℳ[I,J](t) = t, then, col(t) = { -1 if t ∈ J 1 otherwise} where col() represents the coloring map for loop and co-loop elements of the permutation. Figure <ref> lists the decorated permutations and the reduced plabic graphs corresponding to the snakes in the snake decomposition of _3,6 described in Figure <ref>. §.§ LPMfan(3,6) We first inspect the f-vector of the fans associated to _3,6 <cit.> f-vector ((3,6)) = (1,65,535,1350,1005) f-vector (((3,6))) = (1,65,550,1395,1035) Out of the 65 rays of the Dressian Dr(3,6), 35 correspond to splits and lie in the split complex, whereas the other 30 correspond to coarsest subdivisions of Δ(3,6) into three maximal cells. Restricting to the positive tropical Grassmannian, we get the following vector <cit.>, <cit.>, <cit.> where F_3,6 is the fan associated to Trop^+(Gr(3,6)). f-vector (F_3,6) = (1,16, 66, 98, 48) Out of these 16 rays, five occur in the LPMfan in the form of S_1,S_2,S_3,S_4 and S_5 which we see in Figure <ref>. The f-vector of the LPMfan for Δ(3,6) is listed below where all cones are obtained as refinements of the five splits S_1,S_2,S_3,S_4 and S_5, illustrated in Figure <ref>, where the edges between cones labeled signify the combination of the corresponding splits. f-vector (LPMfan(Δ(3,6)) = (1,5,7,3,1) The LPMfan(3,6) sits inside the Split subcomplex generated by the refinements of splits S_1,S_2,S_3,S_4 and S_5. Also, to reiterate the cones are defined as secondary cones with rays defined by the corresponding splits, i.e., the collection of all the weight vectors which induce the same LPM subdivision lie in the same cone. We refer to a lattice path matroidal subdivision which is a split with a snake as a maximal cell as snake split subdivision and we refer to the snakes appearing in a snake split subdivision as split snakes. We point out the LPM decompositions for 𝒰_(3,6) other than the ones shown in Figure <ref>, and these are depicted in Figure <ref>, and one of the weight vectors inducing the split subdivision S_5 is the zero vector. w_S' = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0} We also want to point out to the reader that there exists a natural action of the symmetric group S_n on the cones of the Dressian Dr(Δ(k,n)) well documented in <cit.> and well described in their computations, and that is why with respect to this action there are only 7 maximal cells of (3,6) <cit.>. 
Our description of the LPMfan implicitly incorporates this symmetry, for example, the weight vectors w_1 = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1} and w_2 = {1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} both induce the split S_3, but we know that both of them are equivalent under the action of S_6. §.§ Computations for LPM polytope Δ(4,8) The subdivision is described in Figure <ref> and Figure <ref> and this subdivision is induced by the weight vector w = { 0,0,0,0,0,0,0,0,0,1,1,1,2,2,3,0,0,0,0,1,1,1,2,2,3,2, 2,2,3,3,4,5,5,6,8,0,0,0,0,1,1,1,2,2,3,2,2,2,3, 3,4,5,5,6,8,3,3,3,4,4,5,6,6,7,9,8,8,9,11,14 } A subsequent computation for the LPMfan(Δ(4,8)) is more intricate than the computation of LPMfan(Δ(3,6)) and hence we leave it for future work as we believe it would be nice to utilize the symmetric group action also into the computation to produce bigger examples. All the files containing the code used for all these computations can be found at the following link <https://github.com/Ayush-Tewari13/LPM_SUBDIVISIONS> § AMPLITUHEDRON AND POSITIVE CONFIGURATION SPACES We now describe an important implication of our results and connections to topics in Physics, which in recent times have gained immense interest. In <cit.>, Arkani-Hamed et. al introduced the notion of the amplituhedron which is obtained from the positive Grassmannian via the amplituhedron map. It has been noted that the amplituhedron encodes information concerning scattering amplitudes in 𝒩=4 super Yang-Mills theory, which in turn explains the etymology of the term. In <cit.>, the authors introduce the notion of positroid dissections for the hypersimplex Δ(k+1,n) and the Grasstopes dissection for the amplituhedron and explain the ways in these two dissections can be related via a duality map. We begin with the definition of amplituhedron <cit.>, <cit.>, For a ≤ b, define Mat^>0_a,b as the set of real a × b matrices whose a × a minors are all positive. Let Z ∈Mat^>0_n,k+m. The amplituhedron map Z : Gr(k,n)^≥ 0→Gr(k,k+m) is defined by Z := CZ, where C is a k × n matrix representing an element of Gr(k,n)^≥ 0 and CZ is a k × (k + m) matrix representing an element of Gr(k,k+m) . The amplituhedron 𝒜^≥ 0_n,k,m(Z) ⊆Gr(k,k+m) is the image Z(Gr(k,n)^≥ 0). We briefly state some of the results from <cit.> to sketch the outline of their discussion, Let 𝒞 = {Γ_π} be a collection of positroid polytopes, and let S_π be the collection of corresponding positroid cells. 𝒞 is a positroid dissection of Δ(k,n) if * dim(Γ_π) = n-1 for each Γ_π∈𝒞 * pairs of two distinct positroid polytopes Γ^o_π = μ(S_π) and Γ^o_π' = μ(S_π') are pairwise disjoint, and * ∪_πΓ = Δ(k,n). Let A be a k × n matrix representing a point in Gr(k,n)^≥ 0. The moment map μ: Gr(k,n)^≥0→ℝ^n is defined by μ(A) = ∑_I ∈[n]k|p_I(A)|^2e_I/∑_I ∈[n]k|p_I(A)|^2 A positroid dissection is called a positroid tiling if μ is injective on each S_π. As can be seen from the definition, dissections are a more generalized notion of a polytopal subdivision for a hypersimplex, with no restrictions on how individual pieces meet at the boundary, although the notion of good dissections <cit.> exactly agrees with the notion of a subdivision, Let 𝒞 = {Γ_π^(1) , , Γ_π^(l)} be a dissection of Δ(k+1,n) . We say that 𝒞 is a good dissection of Δ(k+1,n) if the following condition is satisfied: for i j, if Γ_π^(i)∩Γ_π^(j) has codimension one, then Γ_π^(i)∩Γ_π^(j) equals Γ_π, where Γ_π is a facet of both Γ_π^(i) and Γ_π^(j). 
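Before turning to the BCFW-style recurrences, we note that the moment map above is straightforward to evaluate for an explicit k × n matrix representing a point of Gr(k,n)^{≥ 0}. The sketch below is our own illustration and computes the Plücker coordinates as maximal minors.

```python
import numpy as np
from itertools import combinations

def moment_map(A):
    """mu(A) = sum_I |p_I(A)|^2 e_I / sum_I |p_I(A)|^2 for a real k x n matrix A."""
    k, n = A.shape
    mu = np.zeros(n)
    total = 0.0
    for I in combinations(range(n), k):
        weight = np.linalg.det(A[:, list(I)]) ** 2   # |p_I(A)|^2
        total += weight
        mu[list(I)] += weight                        # add weight * e_I
    return mu / total

# The image lies in Delta(k,n): all entries are in [0,1] and they sum to k.
```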
In <cit.> a dissection for the hypersimplex is provided inspired by BCFW recurrence relations for tilings of the m=4 amplituhedron, which is referred as the BCFW-style recurrence Let 𝒞_k+1,n-1 (respectively 𝒞_k,n-1) be a collection of positroid polytopes that dissects the hypersimplex Δ(k+1,n-1) (respectively Δ(k,n-1)). Then 𝒞_k+1,n = i_pre (𝒞_k+1,n-1) ∪ i_inc(𝒞_k,n-1) dissects Δ(k+1,n),where i_pre and i_inc are maps defined on reduced plabic graphs in <cit.>. §.§ Matroidal definition for BCFW dissections of hypersimplex We now try to build a purely matroidal relation for BCFW-style recurrence dissection for hypersimplices. We provide some context to our notations. For a positroid polytope 𝒫, we refer to the underlying positroid as 𝒫 = (ℳ), where represents taking the convex hull of the indicator vectors of the bases of ℳ. We now provide the matroidal definition for BCFW style recurrence dissections for the hypersimplex. Let 𝒞_k+1,n be a collection of positroid polytopes that dissects the hypersimplex Δ(k+1,n) = (𝒰_k+1,n). Then, 𝒞_k+1,n = ((𝒞_k+1,n) / e_i) ∪ ((𝒞_k+1,n) ∖ e_i) and the set (𝒞_k+1,n) / e_i) provides a positroid dissection of Δ(k,n-1) and (𝒞_k+1,n) ∖ e_i) provides a positroid dissection of Δ(k+1,n-1) , where '/' represents contraction and '∖' represents the deletion operations on matroids. Firstly we note that the hypersimplex Δ(k+1,n), is a 0-1 polytope obtained by the intersection of the unit cube [0,1]^n with the affine hyperplane ∑_i=1^n x_i = k+1. We note that the facet corresponding to the hyperplane x_i=0 is termed as the i-th deletion facet of Δ(k+1,n) and is isomorphic to Δ(k+1,n-1). Similarly, the facet corresponding to the hyperplane x_i=1 is termed as the i-th contraction facet of Δ(k+1,n) and is isomorphic to Δ(k,n-1). Also, these facets can be obtained as deletion and contraction respectively on the uniform matroid 𝒰_k,n <cit.>. With these definitions, the notions of contraction and deletion extend to respective dissections and subdivisions and this fact is used in <cit.>. We point out the natural dissections of hypersimplex into two minors provided by contraction and deletion. Let v ∈Vert(Δ(k+1,n)) then since each dissection into ((ℳ_k+1,n) / e_i) and ((ℳ_k+1,n) ∖ e_i) is defined by hyperplanes x_i = 0 or x_i=1, therefore every vertex v lies in either ((ℳ_k+1,n) / e_i) or ((ℳ_k+1,n) ∖ e_i). Given a positroid dissection 𝒞_k+1,n-1 we consider the minors with respect to an element i ∈ [n] and obtain minors ((𝒞_k+1,n) / e_i) and ((𝒞_k+1,n) ∖ e_i). We recognize that these minors also correspond to the dissections induced on the respective contraction and deletion facets of Δ(k+1,n) respectively, each of which is isomorphic to Δ(k,n-1) and Δ(k+1,n-1) respectively, which give us the two required positroid dissections. We point out that Theorem <ref> provides a matroidal formulation of BCFW style relations for hypersimplex, and proves an almost converse statement of Theorem <ref>, and we say that this is almost a converse statement since we know that not all positroid dissections of Δ(k+1,n) occur from BCFW style recursions <cit.>, whereas the statement of Theorem <ref> involves matroidal operations, therefore for any positroid dissection of Δ(k+1,n) we can obtain dissections of Δ(k,n-1) and Δ(k+1,n-1) in this way. We point out that it is not obvious that there exist matroidal operations equivalent to the operations i_pre and i_inc used in Theorem <ref>. 
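The deletion and contraction operations appearing in Theorem <ref> act on bases in the familiar way, which makes the statement easy to experiment with. The sketch below is a plain illustration of ours and ignores the degenerate cases in which e_i is a loop or a coloop.

```python
from itertools import combinations

def contract(bases, e):
    """Bases of the contraction M/e: remove e from every basis containing it."""
    return sorted({tuple(x for x in B if x != e) for B in bases if e in B})

def delete(bases, e):
    """Bases of the deletion of e from M: the bases avoiding e."""
    return sorted({tuple(B) for B in bases if e not in B})

# For U_{3,6} and e = 1 these give U_{2,5} and U_{3,5} on {2,...,6}, i.e. the
# contraction facet x_1 = 1 and the deletion facet x_1 = 0 of Delta(3,6).
bases_U36 = list(combinations(range(1, 7), 3))
assert len(contract(bases_U36, 1)) == 10 and len(delete(bases_U36, 1)) == 10
```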
We also wish to explore a possible generalization of the statement for Theorem <ref> to matroid dissections and not necessarily positroid dissections. However, such a discussion would require an appropriate definition of a matroid dissection and a generalization of Theorem <ref> to the case of matroid dissections, as the non-trivial part of the proof of Theorem <ref> rests on a refined description of facets of positroid polytopes defined by Postnikov, described in <cit.>. We again consider the snake polytope decomposition of Δ(3,6) described in Figure <ref>. As we know that this is also a regular positroidal subdivision, or equivalently a regular positroid good dissection. We now perform the contraction and deletion with respect to the element i = 1 on this subdivision, and obtain two collections in which we see that { M_2∖{1}, , M_6∖{1}} provides a positroidal subdivision (equivalently a positroid good dissection) of Δ(3,5) on letters [6] ∖{1} = {2,3,4,5,6} and { M_1 / {1}, M_2 / {1}, M_3 / {1}} provides a positroidal subdivision on Δ(2,5) (cf. from Figure <ref>.) §.§ BCFW cells correspond to lattice path matroids We want to point out that in the discussion in this section, we would only be focusing on the m=4 amplituhedron. In a recent breakthrough work <cit.>, the authors prove the conjecture that BCFW cells provide a triangulation of the amplituhedron 𝒜_n,k,4. In <cit.> and <cit.> the authors establish the equivalence between BCFW cells and noncrossing lattice walks (paths). We use this observation to explore the connection between BCFW triangulations and lattice path matroids. We borrow mostly our notation from <cit.>. Let ℒ_n,k,4 denote the set of all pairs (P_ℒ, Q_ℒ) of noncrossing lattice paths inside a k × (n - k - 4) rectangle, where the notion of noncrossing is the same as P never going above Q implicit in Definition <ref>. Therefore, we state one of our first conclusions in the form of Corollary <ref>. Let (P_ℒ, Q_ℒ) ∈ℒ_n,k,4 be a pair of noncrossing lattice paths. Then (P_ℒ, Q_ℒ) determine a lattice path matroid ℳ[P_ℒ, Q_ℒ] which lies inside the lattice path matroid 𝒰_k,n-4. We describe the connection between non-crossing lattice paths and BCFW cells of 𝒜_k,n,4. Firstly, in<cit.> the authors introduce the notion of a ⊕-diagram of type (k,n), which are defined as follows <cit.> Fix 0 ≤ k ≤ n. Given a partition λ, we let Y_λ denote the Young diagram of λ. A ⊕-diagram of type (k,n) is a filling D of a Young diagram Y_λ fitting inside a k × (n - k) rectangle with the symbols 0 and + (such that each box of Y is filled with exactly one symbol) and λ is called the shape of D (cf. Figure <ref>). The rules according to which the filling in a ⊕-diagram is obtained are elaborated in <cit.>. Let 𝒟_n,k,4 be the space of ⊕-diagram of type (k,n). We infer the following result from <cit.> There exists an bijection Ω_ℒ𝒟 such that Ω_ℒ𝒟 : ℒ_n,k,4→𝒟_n,k,4 The ⊕-diagrams 𝒟_n,k,4 index the (k, n)-BCFW cells 𝒞_n,k,4 . This theorem is proven by using another bijection between the space of binary rooted trees 𝒯_n,k,4 and ℒ_n,k,4 and the authors use reduced plabic graphs to produce decorated permutations for the ⊕-diagrams. We point the reader to <cit.> to explore these concepts and proofs in full detail. Our interest develops with Corollary <ref> and this inspires us to enquire about the existence of a duality between cells of the amplituhedron and dissections of the hypersimplex, which is established via T-duality in the case of m=2 amplituhedron in <cit.>. 
In <cit.> the following result concerning BCFW cells is proven, which was stated as a conjecture in <cit.>. For every k ≥ 1 and n ≥ k+4, the (k, n)-BCFW cells form a triangulation of the amplituhedron 𝒜_n,k,4. We now state our result based on this discussion, Each triangulation of the amplituhedron 𝒜_n,k,4 into (k, n)-BCFW cells provides a positroid dissection {Γ_i} of the hypersimplex Δ(k,n-4), where each BCFW cell corresponds to a lattice path matroid polytope Γ_i. By Corollary <ref> we already know that each (k,n) BCFW cell corresponds to a LPM ℳ[P_ℒ,Q_ℒ] inside 𝒰_k+4,n, where (P_ℒ,Q_ℒ) ∈ℒ_n,k,4. Therefore, each (k,n) BCFW cell corresponds to a lattice path matroid polytope 𝒫(ℳ[P_ℒ,Q_ℒ]) which lies inside Δ(k+4,n) = (𝒰_k+4,n). Therefore, a triangulation of 𝒜_n,k,4 into (k, n)-BCFW cells corresponds to a collection of all lattice path matroids which lie inside the uniform matroid Δ(k+4,n) = (𝒰_k+4,n), which is clearly a positroid dissection from Definition <ref>. With Theorem <ref> we establish a first notion in the direction of T-duality for a m=4 amplituhedron, where in the case of m=2 amplituhedron <cit.> shows that subdivisions of the amplituhedron correspond to positroid dissections of the corresponding hypersimplex. We provide this in the case of m=4 amplituhedron for the BCFW triangulation which inspires for the exploration of the case of other triangulations and subdivision of 𝒜_k,n,4. Also, BCFW style dissections enjoy a recursive description and can be understood as coming from splits as discussed in the case of the m=2 amplituhedron in <cit.>, and we believe that a positroid dissection in LPM cells captures this in essence as well owing to the recursive definition of LPM polytope decompositions. §.§ Positive configuration spaces, weakly separated collections and connected minimal positroids We highlight some of the connections between our study on LPM's and <cit.>. Firstly, in <cit.> the authors relate the positive Chow cells of the Chow quotient of the Grassmannian with positroidal subdivisions. Let Ch(k,n)_≥ 0 denote the nonnegative part of the Chow quotient of the Grassmannian. There are canonical bijections between the following sets. * The set {Θ_Δ > 0} of positive Chow cells of Ch(k,n)_≥ 0 * The set D(k,n) of regular positroidal subdivisions of Δ(k,n). * The set of cones in the positive tropical Grassmannian Trop Gr^+(k, n), the space of valuations of positive Puiseux series points Gr(k, n)(ℛ > 0) * The set of cones in the positive Dressian Dr(k,n), which satisfy the three term positive Plücker relations. As LPM'S are positroids too, all these equivalences also are true when restricted to the LPMfan. We also delve into the connection between cluster of a matroid, weakly separated collections <cit.> and snakes. We fix some notations relevant to our discussion. We define the cyclic ordering <cit.> (referred as the t-th Gale order in <cit.>) ≤ _t on [n] for some t ∈ [n] by the total order t ≤_t t + 1 ≤_t≤_t n ≤_t 1 ≤_t t - 1. For I ,J ∈[n]k, where I = { i_1 , , i_k}, i_1≤_t i_2≤_t i_k and J = { j_1 , , j_k}, j_1≤_t j_2≤_t j_k then I ≤_t J if and only if i_1≤_t j_1 , , i_k≤_t j_k For each I ∈[n]k and t ∈ [n] , we define the cyclically shifted Schubert matroid as SM_I^t = { J ∈[n]k | I ≤_t J } We recall the definition for weakly separated sets from <cit.>, Let I and J be two subsets of [n]. 
I and J are said to be weakly separated if either * |I| ≤ |J| and I ∖ J can be partitioned as I_1∪ I_2 such that I_1≺ J ∖ I ≺ I_2 or * |J| ≤ |I| and J ∖ I can be partitioned as J_1∪ J_2 such that J_1≺ I ∖ J ≺ J_2 where A ≺ B indicates that every element of A is less than every element of B. Equivalently, the definition can be stated as the sets I and J ∈ [n]k are said to be weakly separated if we cannot find cyclically ordered elements a,b,c,d such that a,c ∈ I ∖ J and b,d ∈ J ∖ I (also along with the symmetrical statement for I and J swapped). We also recall the definition of Grassmann necklaces <cit.>. A Grassmann necklace is a sequence I = (I_1 , , I_n) of subsets I_r⊆ [n] such that: * if i ∈ I_i then I_i + 1 = (I_i∖{i }) ∪{ j } for some j ∈ [n] , * if i ∉I_i then I_i + 1 = I_i The indices are taken modulo n. In particular, we have | I_1 | = = | I_n | . There exists a canonical bijection between positroids and Grasmann necklaces. We state the characterization of the cluster of a matroid (Definition <ref>), in terms of weakly separated sets and Grassmann necklaces <cit.> A subset 𝒞⊆ℳ is a cluster if it is pairwise weakly separated, has size dim(ℳ) + 1, and contains the Grassmann necklace ℐ of M. Any pairwise weakly-separated subset of [n]k can be extended to a cluster. As one of the takeaways in <cit.>, the authors state this result concerning minimal connected positroids and clusters for them A connected positroid ℳ is minimal if and only if the associated reduced plabic graph G(C) is a tree, for some cluster 𝒞 of ℳ. In this case, ℳ has a unique cluster 𝒞⊆ℳ. We already know that for lattice path matroids, snakes are minimal matroids. Hence, by Lemma <ref>, we obtain a unique cluster in this case. We explain this with one of our running examples; the snake decomposition of 𝒰_3,6 shown in Figure <ref>. We obtain the cluster 𝒞_1 for the snake M_1 𝒞_1 = {123,234,134,124,125,126} It is easy to verify that C_1 is weakly separated, contains the Grassmann necklace for ℳ_1 and is of cardinality = dim(ℳ_1) +1 = 5+1 = 6. Likewise, we obtain unique clusters for all the snakes. The corresponding graphs for these snakes have been described in Figure <ref>. We conclude with another interesting observation. For both the snake split (Definition <ref>) matroids, we notice that they exactly contain k(n-k) +1 elements. This is exactly equal to the cardinality of the maximal weakly separated collection, which is the maximal collection of pairwise weakly separated elements inside the matroid ℳ and the bound on its cardinality was famously conjectured by Leclerc and Zelevinsky and proven to be true in <cit.>. However, we realize that the elements in a snake split are not all pairwise weakly separated, so they are not examples of maximal weakly separated collections. For example for the snake decomposition of 𝒰_3,6, the snake split M_1 has the elements 124 and 135 which are not pairwise weakly separated. § FUTURE PERSPECTIVES We utilize this section to condense our discussion and to highlight the takeaways from our results. We also point to subsequent questions which arise from our work. Firstly, we want to mention a recent work concerning lattice path matroid decompositions into snakes and alcoved triangulation <cit.> in which the authors prove results based on the snake decomposition of LPM and also discuss results on Ehrhart theory of LPM. They prove that the alcoved triangulation of an alcoved polytope is regular. 
We observe that our discussion pertaining to lattice path matroidal subdivisions being regular generalizes this result for LPM. We point the reader to Figure 1 in <cit.> to understand the context of where LPM lie with respect to other well-known families of matroids. We also want to point the reader to <cit.>, where the authors show that there exist finest matroid subdivisions of matroid polytopes that do not contain matroid polytopes of indecomposable matroids as maximal cells. Hence, it might be a worthwhile question to ask for which other families of matroids apart from positroids, a result like Corollary <ref> can be obtained, for example, the class of transversal matroids might be a good candidate to be considered. Also, a natural generalization of this question would be to consider the finest subdivisions of Dressians for arbitrary matroids, and not necessarily the hypersimplex, and see if we can recover some of these results. With the introduction of the notion of LPMfan we believe that there are many questions that can be asked just pertaining to its structure and we believe this might interest readers for further research. Some of the interesting queries could be, to understand if there exists a bound on the number of LPM splits and how it behaves with respect to the Dressian, computation of the dimension of the LPMfan, etc. We believe there is much more to analyze about the LPMfan and we aim to fulfill this in future work. We also acknowledge via <cit.> recursive relations between LPM's defined by quotients and direct sums. This could be really interesting to understand LPM subdivisions for larger LPM polytopes. We wish to employ such a technique for computing LPMfan recursively. Additionally, it would be interesting to inquire about the specific Plücker relations that are satisfied by points corresponding to LPM subdivisions, which lie in the Dressian. We already know that they would be satisfying the positive Plücker relations owing to the fact that LPM are positroids, but they might be even further refined and this could be done by analyzing the forbidden minors for a matroid to be LPM, classified in <cit.>. One of our future goals is also to find an equivalent of Theorem <ref> for LPM dissections. Also, in <cit.>, the authors provide a characterization of the positroid polytope in the form of the following statement Let M be a matroid of rank k, and consider the matroid polytope P_M. It is a positroid polytope if and only if all of its two-dimensional faces are positroid polytopes. We are able to obtain a one-way implication similar to this in the case of LPM polytopes as follows, The faces of a lattice path matroid polytope P_M[P,Q] are also lattice path matroid polytopes. It is clear that it is sufficient to prove the claim for the facets of an LPM polytope P_M[P,Q]. We utilize the characterization of facets of matroids polytopes described in <cit.> which says that facets of a matroid polytope are either induced by hypersimplex facets or hypersimplex splits. If the facet is induced by a hypersimplex facet, we know that these correspond to matroidal deletions and contractions, and LPM are closed under these operations <cit.>. Hence, the facet of an LPM polytope is again an LPM polytope in this case. Alternatively, if the facet is induced by a hypersimplex split, we know that it is induced by a F-hyperplane <cit.> where F is a flat of the LPM M[P,Q], such that 0 < rank(F) < #F, in which case the facet can be described as P_M[P,Q](F) = P_(M[P,Q] | F ⊕ M[P,Q]/ F). 
Since LPM are also closed under direct sum and restrictions <cit.>, therefore this facet is again a LPM polytope. We do highlight the fact that a characterization of snake polytopes does exist, and it concludes that snake polytopes are unimodular equivalent to order polytopes of zig-zag posets <cit.>. Additionally, the facial structure of LPM has also been classified in terms of certain sets of deletions, contractions and direct sums in <cit.>. Lemma <ref> appears also as a result in <cit.> however the argument there appears incomplete since they only consider hypersimplex facets in their proof and not the facets that get induced via hyperplane splits. Another important connection to our results which we want to highlight is the work of Fink and Rincon on Stiefel tropical linear spaces <cit.>. For the uninitiated, the Stiefel map assigns a k × n matrix over a field 𝕂 to an element in the Grassmannian Gr(k,n). The authors study the tropicalization of this map and also study the properties of its image, called the Stiefel image, inside the tropical Grassmannian. The authors in <cit.> relate the points inside the Stiefel image, to the class of regular transversal matroid subdivisions, which as the name suggests is the class of regular matroidal subdivisions where each maximal cell corresponds to a transversal matroid. Since LPM are transversal, LPM subdivisions are also transversal matroid subdivisions. Additionally, we obtain the following corollary as a direct consequence of <cit.> Let L be the tropical linear space dual to a LPM subdivision. Then L lies in the corresponding Stiefel image. In <cit.> a facet description for transversal matroid polytopes is provided and <cit.> provides a partial characterization of transversal matroids in terms of its facets. Based on these results, we propose the following question Let P_M be the matroid polytope of a matroid M, such that all of its faces are LPM polytopes. Does this imply that M is also a LPM polytope? We observe that an affirmative answer, along with Lemma <ref> would provide a full characterization of LPM polytopes in terms of their faces. Also, we already know due to prior results that with the assumptions in the question, M is both transversal <cit.> and a positroid <cit.>. Hence, it is also worthwhile to inquire about the ways in which the three different classes of matroids namely; transversal matroids, lattice path matroids and positroids interact. A subsequent study of relations between Stiefel tropical linear spaces and LPM subdivisions will be explored elsewhere. We recall that a matroidal subdivision is completely determined by its 3-skeleton <cit.>. In recent work <cit.>, the authors introduce the class of permutahedral subdivisions which is the class of polyhedral subdivisions of generalized permutahedra into cells that are generalized permutahedra. They also that the 2-skeleton of a permutahedral subdivision does not completely determine the subdivision. In the background of these results, we would be happy to understand how the class of LPM subdivisions which we introduced in this paper, behave and possibly find out the criterion which completely determines a LPM subdivision. We also comment on the location of the positroid cells corresponding to LPM in the stratification of the positive Grassmannian. 
We consider two well-known families of cells in the positive Grassmannian <cit.> A positroid cell Π is called a Schubert cell if a generic point U ∈Π gives rise to a representable matroid ℳ_I = ([n], ℬ) where B ∈ℬ if and only if I <_1 B, where <_1 is the usual total order on [n]. A positroid cell Π is called a Richardson cell if a generic point U ∈Π gives rise to a representable matroid ℳ_I^J = ([n], ℬ) where B ∈ℬ if and only if I <_1 B <_1 J, where <_1 is the usual total order on [n]. Schubert matroids correspond to Schubert cells and lattice path matroids correspond to Richardson cells. We wish to understand these Richardson cells in depth, given the context of lattice path matroids and in the light of questions from algebraic geometry concerning positroid and Richardson varieties as mentioned in <cit.>. We are currently working on a sequel to our work here, in the context of the new definition of lattice path flag matroids <cit.> and to look at equivalent questions in the realm of flag matroids, along with the flag matroid equivalent of the Dressian, i.e, Flag Dressian <cit.> and the associated tropical flag variety in <cit.>. For our results about the amplituhedron, our results have two facets. Firstly, we provide a matroidal treatment to the well-known BCFW style recurrence relations for positroidal dissections of the hypersimplex. For the m=2 amplituhedron, via T-duality described in <cit.>, these dissections correspond to a dissection of the amplituhedron in terms of Grasstopes <cit.>. However, not much is known about the relations between triangulations of the amplituhedron and dissections of the hypersimplex in the case of the m=4 amplituhedron. We provide a first counterpart of positroid dissections of the hypersimplex for BCFW triangulations of 𝒜_n,k,4. We wish to explore the possibility of equivalent notions of T-duality for the m=4 amplituhedron as well. We also wish to examine connections between combinatorial objects and LPM's other than the ones discussed here for example chord diagrams and domino bases described in <cit.>. We also point the reader to recent work done on weakly separated collections and matroidal subdivisions <cit.>, which also correlates to some of your observations and is an interesting avenue for further exploration. siam
arXiv:2307.05820v1 [cond-mat.stat-mech, cond-mat.str-el, quant-ph], 11 July 2023
Symmetry-Resolved Entanglement: General considerations, calculation from correlation functions, and bounds for symmetry-protected topological phases
K. Monkman and J. Sirker
Symmetry-Resolved Entanglement: General considerations, ...]Symmetry-Resolved Entanglement: General considerations, calculation from correlation functions, and bounds for symmetry-protected topological phases Department of Physics and Astronomy and Manitoba Quantum Institute, University of Manitoba, Winnipeg, Canada R3T 2N2 We discuss some general properties of the symmetry-resolved von-Neumann entanglement entropy in systems with particle number conservation and describe how to obtain the entanglement components from correlation functions for Gaussian systems. We introduce majorization as an important tool to derive entanglement bounds. As an application, we derive lower bounds both for the number and the configurational entropy for chiral and C_n-symmetric topological phases. In some cases, our considerations also lead to an improvement of the previously known lower bounds for the entanglement entropy in such systems. Keywords: Entanglement Entropy, Configurational Entropy, Number Entropy, Entanglement Bounds, Symmetry-Protected Topological Phases, Topological Crystalline Insulators [ Kyle Monkman and Jesko Sirker August 12, 2023 ================================= § INTRODUCTION In the presence of particle number conservation, the reduced density matrix ρ_A of a system S=A∪ B has block structure, i.e., [ρ_A,N_A]=0 where N_A is the particle number operator for subsystem A. As a consequence, the von-Neumann entanglement entropy can be rewritten in a symmetry-resolved manner S[ρ_A] = -ρ_Alnρ_A = -∑_n ρ_nlnρ_n . Here ρ_n is the block of ρ_A with particle number n. Studying the symmetry-resolved entanglement has been a subject of recent interest. One of the main results of these studies is that in one-dimensional critical systems, there is an equipartition of the entanglement between the different symmetry sectors <cit.>. One can, furthermore, also separate the entanglement entropy for a system with particle number conservation into two distinct components. To do so, one can write ρ_n=p_nρ̃_n with p_n=ρ_n being the probability to find n particles in subsystem A and ρ̃_n=1. Plugging this expression into Eq. (<ref>) leads to S[ρ_A] = -∑_n p_nln p_n_S_N+∑_n p_n S[ρ̃_n]_S_c . The first part is the Shannon entropy of the particle number distribution which is called the number entropy S_N while the second part is the configurational entropy S_c <cit.>. The number entropy is the part of the entanglement which can often be accessed experimentally without requiring a full quantum tomography of the state <cit.>. The configurational entropy, on the other hand, is a special case of the operational entanglement entropy—first introduced by Wiseman and Vacaro <cit.> as the entanglement extractable from a quantum many-body system of indistinguishable particles and transferable to a quantum register—for the case of a bipartition of a pure quantum state. The separation of the entanglement entropy in the presence of particle number conservation into these two components has been used to study study many-body physics <cit.>, quantum field theories <cit.>, and topological systems <cit.>. The fact that the von-Neumann entropy is extensive is a fundamental result of Statistical Mechanics. However, the same is not true for the symmetry-resolved components. As we will show here, it is nevertheless possible to derive inequalities for the number and configurational entropies of general subsystems. 
For Gaussian systems, these inequalities imply that minimal bounds on the entanglement can be found by considering only a subset of the single-particle entanglement eigenvalues. To calculate such bounds, we express the number and the configurational entropy for fermionic Gaussian systems in terms of the eigenvalues of the correlation matrix for the subsystem A. This is a generalization of the methods developed by Peschel <cit.> to the symmetry-resolved case. Optimization via majorization is often used for entangled systems since entropy is a concave function <cit.>. Previously known was that majorization of the single-particle entanglement spectrum minimizes the von-Neumann entanglement entropy. Here we use the Shepp-Olkin Majorization Theorem <cit.> to show that the number entropy is also a concave function of the single-particle entanglement eigenvalues. For the configurational entropy we hypothesize, based on results for small subsets of eigenvalues, that it is concave in the single-particle eigenvalues as well. Thus majorizing the single-particle entanglement spectrum not only minimizes the von-Neumann entanglement entropy but also the number and configurational entropies. As an application, we use these methods to find lower entanglement bounds both for chiral insulators <cit.> as well as for C_n symmetric topological crystalline insulators <cit.>. The latter extends previously known results for lower bounds on the symmetry-resolved entanglement in C_2-symmetric topological insulators <cit.>. For certain cuts of C_n-symmetric systems we find, using majorization, additional restrictions on the entanglement spectrum beyond what is currently known. In addition to novel bounds on the number and configurational entropy, these restrictions imply also a new, stronger bound on the von-Neumann entanglement entropy. Our paper is organized as follows: In Sec. <ref>, we prove some general inequalities for the symmetry resolved entanglement valid for any system. In Sec. <ref>, we then specifically consider fermionic Gaussian systems, express the symmetry-resolved entanglement components in terms of the eigenvalues of the correlation matrix, and discuss majorization techniques. This allows us to establish lower bounds for the symmetry-resolved entanglement of chiral and C_n-symmetric topological insulators which are discussed in Sec. <ref>. In Sec. <ref>, we analyze these bounds for specific examples of C_4-symmetric insulators. The final section summarizes the obtained results and provides an outlook on some of the remaining open questions. § SYMMETRY-RESOLVED ENTANGLEMENT The von-Neumann entanglement entropy, number entropy, and configurational entropy are all functions of some density matrix ρ, see Eq. (<ref>). In this paper, we will be interested in the case where ρ is a reduced density matrix obtained after splitting a system into two subsystems and tracing out one of them. However, for the following general considerations this does not matter and ρ can be any density matrix. Let us assume that ρ=ρ^X⊗ρ^Y with [ρ^X,N^X]=[ρ^Y,N^Y]=[ρ,N]=0 where N^X and N^Y are the particle number operators for X and Y, respectively, and N=N^X+N^Y. Furthermore, ρ^X and ρ^Y are proper density matrices, in particular, ρ^X=ρ^Y=1. For the von-Neumann entropy it follows that S = -ρlnρ =-ł{ł(ρ^X⊗ρ^Y)̊lnł(ρ^X⊗ρ^Y)̊}̊ = -ł{ł(ρ^X⊗ρ^Y)̊lnρ^X}̊-ł{ł(ρ^X⊗ρ^Y)̊lnρ^Y}̊ = S[ρ^X] + S[ρ^Y] which is its well-known extensivity property. However, the symmetry-resolved components (<ref>) are not extensive. 
In the following we will show that S_c[ρ] ≥ S_c[ρ^X]+S_c[ρ^Y], S_N[ρ] ≥{ S_N[ρ^X],S_N[ρ^Y] }. To show these relations, we start by noting that—according to our assumptions—ρ, ρ^X, and ρ^Y all have block structure with respect to their respective particle numbers. We have, in particular, ρ̃_n = ∑_r=0^np_r^X p_n-r^Y/p_n ρ̃_r^X ⊗ρ̃_n-r^Y, p_n = ∑_r=0^n p_r^X p_n-r^Y where ρ̃_n denotes normalized density matrices with ρ̃_n=1 and p_n is the probability to have n particles. We can diagonalize the blocks ρ̃^X and ρ̃^Y at the same time, implying that the eigenvalues of ρ̃_n are given by λ_r(i,j)= p_r^X p_n-r^Y/p_nλ_r^X(i)λ_n-r^Y(j) for r=0,…,n. For the configurational entropy, this implies that S_c[ρ] = -∑_n p_nρ̃_nlnρ̃_n =-∑_n,r,i,jp_nλ_r(i,j)lnλ_r(i,j) ≥ -∑_n,r,i,j p_r^X p_n-r^Y λ_r^X(i)λ_n-r^Y(j) lnł(λ_r^X(i)λ_n-r^Y(j) )̊ = S_c[ρ^X] + S_c[ρ^Y] where we have used p_r^X p_n-r^Y/p_n≤ 1. This proves the configurational entropy bound (<ref>). Now we focus on showing the inequality (<ref>) for the number entropy S_N. Defining a concave function f(p)=-p ln(p), our goal is to demonstrate that S_N[ρ]=∑_n=0^M f(p_n) ≥ S_N[ρ^X]=∑_n=0^M f(p_n^X). If we can show the above inequality, then it will also be true for S_N[ρ^Y] and thus for the maximum of the number entropies of the two subsystems. Since f(p) is concave, we can use Karamata's inequality <cit.>. Let σ(i) and α(i) be permutations of (0, 1, … M) such that x_0=p_σ(0)^X≥ x_1=p_σ(1)^X≥…≥ x_M=p_σ(M)^X and y_0=p_α(0)≥ y_1=p_α(1)≥…≥ y_M=p_α(M). If we can show that x⃗=(x_0, x_1, … x_M) majorizes y⃗=(y_0, y_1, … y_M), then (<ref>) is true by Karamata's inequality. Since they are probability distributions, we know that ∑_i=0^M x_i = ∑_i=0^M y_i = 1. We now simply need to show that ∑_i=0^s y_i ≤∑_i=0^s x_i for all s = 0, 1, … M-1. So ∑_i=0^s y_i = ∑_i=0^s p_α(i) =∑_i=0^s ∑_j=0^α(i) p_j^Y p_α(i)-j^X =∑_j=0^M p_j^Y (∑_i=0^s p_α(i)-j^X) ≤ ∑_j=0^M p_j^Y (∑_i=0^s x_i) = ∑_i=0^s x_i. Therefore x⃗ majorizes y⃗. Thus the inequality for the number entropy (<ref>) has been proven. § FERMIONIC GAUSSIAN SYSTEMS It is well known that for Gaussian systems all properties of a subsystem can be calculated from the correlation matrix C of the subsystem with matrix elements C_nm=⟨ c_n^† c_m⟩ <cit.>. Here we will focus on fermionic Gaussian systems with Hamiltonian H=-∑_n,m t_nm c_n^† c_m where t_nm are hopping amplitudes and c_n fermionic annihilation operators. We note that it should be possible to generalize the results below to bosonic Gaussian systems as well. §.§ Reduced density matrix and correlation functions Following Ref. <cit.>, we know that for fermionic Gaussian systems a reduced density matrix of a subsystem with M sites can be written as ρ=e^-ℋ/Z=1/Zexpł(-∑_k=1^M ε_k a_k^† a_k)̊, Z=∏_k=1^M ł(1+e^-ε_k)̊ . The eigenvalues ε_k of the so-called bilinear entanglement Hamiltonian ℋ can be related to the eigenvalues C_k of the correlation matrix C by exp(-ε_k)=C_k/1-C_k . From Eq. (<ref>) it follows that we can write ρ as a diagonal 2^M× 2^M matrix ρ=1/Z⊗_k=1^M ł([ 1 0; 0 e^-ε_k ])̊=⊗_k=1^M ł([ 1-C_k 0; 0 C_k ])̊ . Thus, the 2^M eigenvalues of ρ are given by λ_{n_k} =1/Z∏_k=1^M ł(e^-ε_k)̊^n_k = ∏_k=1^M C_k^n_k(1-C_k)^1-n_k where {n_k} is a list of length M with n_k=0,1. The Mn eigenvalues in the n-particle block of the density matrix can then be written as λ^(n)_j = C_s_j(1)… C_s_j(n)(1-C_s_j(n+1))… (1-C_s_j(M)) where s_j are permutations of the numbers {1,…,M} with s_j(1)<…<s_j(n) and s_j(n+1)<…<s_j(M). 
The probability p_n to find n particles in the subsystem is then given by p_n = ∑_j λ^(n)_j = ∑_j C_s_j(1)… C_s_j(n)(1-C_s_j(n+1))… (1-C_s_j(M)) where the same conditions on the permutations s_j apply as before. In particular, p_0=Z^-1=∏_k=1^M(1-C_k) and p_M=∏_k=1^M C_k. We note that Eq. (<ref>) is the probability mass function of the Poisson binomial distribution, i.e., it can be interpreted as the probability to have n successful tries (particle present) out of a total of M tries. Since the sum in Eq. (<ref>) contains Mn=M!/[n!(M-n)!] terms, this sum is impractical to calculate for large M. For the sake of completeness, we note that the probability mass function p_n of the Poisson binomial distribution can be more efficiently calculated by a discrete Fourier transform p_n=1/(M+1)∑_l=0^M T_l^-n∏_m=1^M (1+(T_l-1)C_m) with T_l=exp(-2π i l/(M+1)). This approach of calculating the probabilities p_n was used for the number entropy S_N in Ref. <cit.>. Using Eq. (<ref>), we can write the von-Neumann entanglement entropy as <cit.> S=∑_k{ln(e^-ε_k+1)+ε_k/(e^ε_k+1)} =-∑_k{ C_kln C_k +(1-C_k)ln(1-C_k) } . According to Eq. (<ref>), we can express the probabilities as p_n=p_n(C_1,…,C_M). Thus, the number entropy S_N=-∑_n p_nln p_n is also a function of the correlation matrix eigenvalues { C_k }. The configurational entropy can then be obtained as S_c=S-S_N and is a function of the { C_k } as well. In terms of deriving bounds for the symmetry-resolved entanglement, we can draw the following conclusions: (i) From Eq. (<ref>) we see that we can treat each eigenvalue of the correlation matrix as an 'independent' contribution. I.e., the inequalities (<ref>) are applicable to any subset of eigenvalues. (ii) The symmetry-resolved entanglement components can be expressed explicitly by the correlation matrix eigenvalues {C_k}. The entanglement entropy (<ref>) is a concave function of {C_k}. If we can show that S_N and S_c are concave functions of the {C_k} as well, then we can apply majorization techniques to derive bounds for all three entanglement measures. We will discuss this approach in the following subsections. §.§ Majorization of S and S_N The entanglement entropy (<ref>) is a sum of concave functions of the {C_k}. We now define two vectors C⃗=(C_1,C_2,…, C_M) and C⃗'=(C_1',C_2',…, C_M'), ordered such that C_1≥ C_2≥…≥ C_M and C_1'≥ C_2'≥…≥ C_M'. We then say that C⃗' majorizes C⃗, written as C⃗' ≻C⃗, if ∑_k=1^M C_k = ∑_k=1^M C_k' =⟨ N⟩ , where ⟨ N⟩ is the average particle number in the subsystem, and ∑_k=1^n C_k'≥∑_k=1^n C_k for n=1,…,M-1 . Under the conditions (<ref>) and (<ref>), Karamata's inequality is applicable to the concave entanglement entropy function (<ref>) and we have S(C_1',C_2',…,C_M') ≤ S(C_1,C_2,…,C_M) . Without any additional restrictions on the eigenvalues of the correlation matrix, this implies the intuitive result that the maximum entropy is obtained when C_k=⟨ N ⟩ / M for all k. The minimum entropy S=0 can be obtained by making each C_k equal to either 0 or 1. Applying majorization to the number entropy is more complicated. While S_N is a sum of concave functions in the probabilities {p_n}, it is not immediately obvious if it is concave in the {C_k} as well. Here we will make use of some more advanced concepts and results in majorization and probability theory. We first note that the number of particles N found in the subsystem in a measurement is a sum of Bernoulli random variables N=X_1 +X_2+…+X_M which occur with probabilities P(X_i=1)=C_i and P(X_i=0)=1-C_i.
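As a sketch of how these expressions can be evaluated in practice (illustrative code, not the authors' implementation; the example eigenvalues are arbitrary), the probabilities p_n follow from the discrete Fourier transform formula above and combine into S, S_N, and S_c:

import numpy as np

def number_probabilities(C):
    """Poisson-binomial probabilities p_n from correlation-matrix eigenvalues C,
    via the discrete Fourier transform formula quoted above."""
    C = np.asarray(C, dtype=float)
    M = len(C)
    l = np.arange(M + 1)
    T = np.exp(-2j * np.pi * l / (M + 1))
    prod = np.prod(1.0 + np.outer(T - 1.0, C), axis=1)        # prod_m (1+(T_l-1)C_m)
    n = np.arange(M + 1)
    p = np.real(T[:, None] ** (-n[None, :]) * prod[:, None]).sum(axis=0) / (M + 1)
    return np.clip(p, 0.0, 1.0)

def xlogx(p):
    p = np.asarray(p, dtype=float)
    out = np.zeros_like(p)
    out[p > 1e-15] = p[p > 1e-15] * np.log(p[p > 1e-15])
    return out

def entropies_from_C(C):
    """(S, S_N, S_c) of a fermionic Gaussian state with subsystem eigenvalues C."""
    C = np.asarray(C, dtype=float)
    S = -np.sum(xlogx(C) + xlogx(1.0 - C))
    S_N = -np.sum(xlogx(number_probabilities(C)))
    return S, S_N, S - S_N

# example: a full spectrum and a subset of it treated as 'independent' eigenvalues
C_full = np.array([0.5, 0.5, 0.9, 0.05])
C_sub = C_full[:2]
S, S_N, S_c = entropies_from_C(C_full)
S_s, S_N_s, S_c_s = entropies_from_C(C_sub)
print(S_c >= S_c_s, S_N >= S_N_s)   # both True, as implied by the inequalities above

Because every correlation-matrix eigenvalue acts as an independent subsystem, the printed comparison with a subset of the eigenvalues illustrates conclusion (i).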
The p_n({C_k}) are then, as already alluded to earlier, the probability mass function of the Poisson binomial distribution with parameters {C_k}. The number entropy S_N is the Shannon entropy of this mass function. It has been conjectured by Shepp and Olkin <cit.> and proven by Hillion and Johnson <cit.> that the Shannon entropy in this case is not only a concave function of the p_n but of the parameters {C_k} as well. This result is known as the Shepp-Olkin theorem. We therefore find again that if C⃗' ≻C⃗, we have S_N(C_1',C_2',…,C_M') ≤ S_N(C_1,C_2,…,C_M). Without further restrictions on the {C_k}, this implies again that the maximum number entropy is obtained if C_k=⟨ N ⟩ / M for all k and the minimum number entropy S_N=0 if each C_k is equal to either 0 or 1. §.§ A conjecture on the concavity of S_c We note that the configurational entropy S_c=S+(-S_N) is a sum of a concave and a convex function. Therefore, one cannot directly apply majorization to S_c based on the already obtained results for S and S_N. However, this does not rule out that S_c is also a concave function of the parameters {C_k}. Similar to the original Shepp-Olkin paper <cit.>, we will here just investigate the cases M=2 and M=3. Based on these results, we conjecture that S_c({C_k}) is indeed Schur concave and majorization thus applies. §.§.§ The case M=2: From Eq. (<ref>), we find that in this case the eigenvalues of the reduced density matrix are given by λ^(0) = p_0=(1-C_1)(1-C_2), λ^(2)=p_2=C_1C_2, λ^(1)_1 = C_1(1-C_2), λ^(1)_2=(1-C_1)C_2, and p_1 = C_1(1-C_2)+(1-C_1)C_2. The configurational entropy is thus given by S_c=-λ^(1)_1ln(λ^(1)_1)-λ^(1)_2ln(λ^(1)_2)+p_1ln p_1 . As can be seen in Fig. <ref>, the configurational entropy is indeed concave in {C_1,C_2}. This can also be checked by using the Schur-Ostrowski criterion (C_i-C_j)(∂S_c/∂ C_i-∂S_c/∂ C_j)≤ 0 for 1≤ i≠ j≤ M. The maximal values for all three entropies for an average filling ⟨ N⟩ = 1 are thus obtained for (C_1,C_2)=(1/2,1/2) and are given by S=2ln 2, S_N=3/2ln 2, S_c=1/2ln 2 . §.§.§ The case M=3: We introduce the notation C̅_k = (1-C_k). Then the eigenvalues of the reduced density matrix for M=3 are given by λ^(0) = p_0=C̅_1 C̅_2 C̅_3, λ^(3)=p_3=C_1C_2C_3, λ^(1)_1 = C_1C̅_2 C̅_3, λ^(1)_2=C̅_1 C_2 C̅_3, λ^(1)_3=C̅_1 C̅_2 C_3, λ^(2)_1 = C_1C_2C̅_3, λ^(2)_2=C_1 C̅_2 C_3, λ^(2)_3=C̅_1 C_2C_3 with p_1=∑_j λ_j^(1) and p_2=∑_j λ_j^(2). We can now calculate S_N(C_1,C_2,C_3) and S_c(C_1,C_2,C_3). Plots of both quantities for a fixed value of C_3 are shown in Fig. <ref>. We find again that S_c(C_1,C_2,C_3) is concave, a conclusion which we also checked using the Schur-Ostrowski criterion (<ref>). In the case M=3 we then have the following non-trivial maximal values for the entropies: For filling ⟨ N⟩ =3/2, for example, we have S=3ln 2, S_N=3ln 2-3/4ln 3, S_c=3/4ln 3 , while for fillings ⟨ N⟩ =1 and ⟨ N⟩ =2 we have S=3ln 3-2ln 2, S_N=7/3ln 3-2ln 2, S_c=2/3ln 3 . For the half-filled case, ⟨ N⟩ =M/2, the entropies are always maximized by setting C_k=1/2 for k=1,…,M. This means, according to Eq. (<ref>), that all eigenvalues of the reduced density matrix are equal and given by λ=1/2^M. Since each n-particle block has B^M_n ≡Mn (the binomial coefficient 'M choose n') eigenvalues, the probabilities are given by p_n=B^M_n/2^M. The maximal entropies are therefore given by S=Mln 2, S_c=1/2^M∑_n=0^M B^M_n ln B^M_n, S_N=S-S_c . To briefly summarize: One can use the structure of the reduced density matrix in the Gaussian case, see Eq.
(<ref>), together with the inequalities (<ref>) to derive lower bounds for the symmetry-resolved entanglement components based on a subset of the eigenvalues of the correlation matrix. Furthermore, majorization is applicable because all entanglement components are Schur concave functions. So far, we have discussed the case where the only restriction on the correlation matrix eigenvalues C_k is the average particle number in the subsystem, ⟨ N⟩ = ∑_k C_k. The maximal entropies for all of S, S_c, and S_N are then obtained if C_k=⟨ N⟩/M for all k. If ⟨ N⟩ is an integer, then the minimum entropy is S=0, obtained by setting the appropriate number of C_k=1 and the rest to zero. Interestingly, if ⟨ N⟩ is not an integer then there will be a non-trivial lower entanglement bound. I.e., such non-integer average particle numbers are only possible in entangled states. This is an obvious example for a case where there is a non-trivial lower entanglement bound but it does not tell us in which types of systems such bounds exist. It is thus more interesting to ask the question if, starting from a fermionic Gaussian Hamiltonian, there are cases where the ground state cannot be adiabatically deformed to the atomic limit with a trivial entanglement bound and an integer particle number. The answer is, of course, yes. For an insulator in a symmetry-protected topological phase, such a deformation to a trivial state is not possible without closing the gap or breaking the symmetry. The tools we have developed so far can thus be used to establish non-trivial lower bounds for the symmetry-resolved entanglement. We will discuss some examples in the next section. § SYMMETRY-PROTECTED TOPOLOGICAL PHASES In symmetry-protected topological phases, the ground state cannot adiabatically be connected to the atomic limit without closing the excitation gap or breaking the symmetry. There are, broadly speaking, two types of symmetry-protected topological order: (i) Order protected by non-spatial symmetries, in particular time reversal, charge conjugation, and chiral symmetry, which leads to the tenfold classification <cit.>. (ii) Order protected by spatial symmetries such as inversion, mirror, or rotational symmetries. In the latter case, one often speaks about topological crystalline insulators. In both cases we expect that in a topologically non-trivial phase, there is a non-trivial minimal bound for the entanglement components. Here we want to provide some examples of how the methods developed in the previous sections can be used to obtain bounds both for topological phases protected by non-spatial and by spatial symmetries. §.§ Chiral Symmetry A bilinear fermionic system with chiral symmetry can be written as H=∑_k (Ψ_k^a † Ψ_k^b † ) ([ 0 h_k; h_k^† 0 ]) ([ Ψ^a_k; Ψ^b_k ]) where Ψ^a_k=(a^1_k… a^N_k) and similarly for Ψ_k^b. I.e., each unit cell has 2N elements with N elements belonging to sublattice A and N elements belonging to sublattice B. Due to the chiral symmetry, hopping occurs only between a and b elements, leading to the off-diagonal structure of the Hamiltonian matrix in Eq. (<ref>). The topological phases of such a gapped system can be characterized by a winding number ℐ∈ℤ. For this system with periodic boundary conditions, we have recently proven that a topologically non-trivial phase, ℐ≠ 0, has 2|ℐ| protected eigenvalues at 1/2 in the spectrum of the correlation matrix which is sometimes also called the single-particle entanglement spectrum <cit.>.
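As a quick numerical illustration of these protected eigenvalues (not taken from the paper), one can build the SSH chain, i.e. the simplest chiral model with N=1, and inspect the single-particle entanglement spectrum of half of a periodic chain; the hopping amplitudes v=0.3, w=1.0 and the size L=40 below are purely illustrative:

import numpy as np

L, v, w = 40, 0.3, 1.0
H = np.zeros((2 * L, 2 * L))
for j in range(L):
    a, b = 2 * j, 2 * j + 1                                  # sublattice A/B sites of cell j
    H[a, b] = H[b, a] = -v                                   # intra-cell hopping
    H[b, (a + 2) % (2 * L)] = H[(a + 2) % (2 * L), b] = -w   # inter-cell hopping (periodic)

eps, psi = np.linalg.eigh(H)
occ = psi[:, :L]                       # fill the lower band (half filling)
Corr = occ @ occ.T.conj()              # C_nm = <c_n^dag c_m> in the ground state
sub = np.arange(L)                     # subsystem: the first L/2 unit cells
ev = np.linalg.eigvalsh(Corr[np.ix_(sub, sub)])
print(np.sort(np.abs(ev - 0.5))[:4])   # two eigenvalues at (or exponentially close to) 1/2

For w>v the winding number is |ℐ|=1, and the two smallest printed deviations are vanishingly small, consistent with 2|ℐ|=2 protected eigenvalues at 1/2.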
From this, we concluded that the entanglement entropy has a non-trivial lower bound S≥ 2|ℐ|ln 2. Since the Hamiltonian (<ref>) does conserve the particle number, the entropy can be separated into number and configurational entropy. With the methods from the previous sections, we can now also obtain bounds for these symmetry-resolved components. Eq. (<ref>), in particular, tells us that we can derive a bound based on the protected eigenvalues of the correlation matrix at 1/2 alone. This is then essentially the case of Eq. (<ref>) with M=2|ℐ|. I.e., the bounds are S ≥ 2|ℐ|ln 2, S_c ≥ 1/2^2|ℐ|∑_n=0^2|ℐ| B^2|ℐ|_n ln B^2|ℐ|_n, S_N ≥ 2|ℐ|ln 2 - 1/2^2|ℐ|∑_n=0^2|ℐ| B^2|ℐ|_n ln B^2|ℐ|_n with the additional condition S=S_N+S_c, meaning that the three bounds are not all independent of each other. §.§ C_n-symmetric insulators Next, we want to discuss an example of a spatial, crystalline symmetry leading to topologically non-trivial phases. Let Ĉ_n be a generator of the cyclic group ℤ_n. Ĉ_n is a conserved operator, acting on a particular Hilbert space, which is unitary, Hermitian and which fulfills (Ĉ_n)^n=1. Suppose that we have a non-interacting, gapped Hamiltonian Ĥ with a Ĉ_n symmetry, i.e., Ĉ_nĤĈ_n^-1=Ĥ. Each single particle eigenstate |E_j⟩ of Ĥ is then also an eigenstate of the Ĉ_n operator. That is, for some integer ℓ_j, Ĉ_n |E_j ⟩ = e^2π i ℓ_j / n |E_j ⟩. The phase e^2π i ℓ_j / n is called the angular momentum of the state, labelled by quantum numbers ℓ_j. For a fixed particle number N, the normalized, many-particle ground state |Ψ⟩ of the Hamiltonian has a ℤ^n invariant Z=(Z_1,Z_2,…,Z_n), where Z_j is the number of filled states with angular momentum quantum number ℓ_j. An insulating ground state can then be described by |Ψ⟩ = ∏_j=1^Z_1|Z_1,j ⟩∏_j=1^Z_2|Z_2,j ⟩…∏_j=1^Z_n|Z_n,j ⟩ where each |Z_r , j ⟩ state describes an orthogonal single particle state of angular momentum ℓ_r and energy E_r,j. The total number of particles is then N=∑_i=1^n Z_i. §.§.§ C_2 symmetry: The simplest case is that of a C_2 symmetry which, in a lattice system, could for example be due to an inversion or a mirror symmetry. In this case C_2 |E_j ⟩ = ± |E_j ⟩, i.e., single particle eigenstates are either even or odd and the numbers Z_1,2 of filled symmetric and antisymmetric states are topological invariants. The relevant quantity for the protected entanglement obtained from a reduced density matrix when the system is cut into two equal halves is Δ=|Z_1-Z_2|. In Ref. <cit.> we have recently proven—directly based on properties of the ground state and independent of the correlation matrix spectrum—that for Δ≠ 0, non-trivial lower bounds for S, S_N, and S_c exist. With the methods developed in this article, we can actually rederive this result in a different way. From Ref. <cit.> it is known that a C_2 symmetric system which is cut in half has Δ protected eigenvalues of the correlation matrix at 1/2. Thus, we can use the same arguments as in the chiral case and the bounds from Ref. <cit.> are reproduced by replacing 2|ℐ|→Δ in Eq. (<ref>). §.§.§ The general C_n case: In the C_n case there are, in general, multiple cuts of the system which are consistent with the symmetry. We will specify these cuts by first defining integers m_1 and m_2 such that both m_1 and m_2 divide n. The cut, denoted as A^m_2_1/m_1, will divide the system into subsystems A and B. Let M be the total number of states in the subsystem A^m_2_1/m_1.
We require A^m_2_1/m_1 to have the following properties: (a) The number of single-particle states in A is 1/m_1 times the total number of states. (b) The subspace A has a Ĉ_m_2 symmetry. That is, there is a Ĉ_m_2 symmetry that maps all single-particle states |A_i ⟩ back into the same subspace A. (c) The single particle states in the total system can be generated by acting with the operator Ĉ_m_1 m_2 on the |A_i ⟩ states. We will now discuss two different methods to obtain bounds for the entanglement components. Method 1: Here, we follow Ref. <cit.> where it was shown that for any symmetry-respecting cut of a C_n symmetric system, there are protected eigenvalues in the spectrum of the correlation matrix. In particular, it was found that there are Δ eigenvalues in the range [1/m_1,1-1/m_1], where the Δ values for various symmetries and cuts are given in table <ref>. Using the results derived here, we can obtain bounds for the entanglement components based on these protected eigenvalues alone. In the case m_1=2 (system is cut in half), we have Δ protected eigenvalues exactly at 1/2. In this case we obtain again the bounds (<ref>) with 2|ℐ|→Δ and Δ given in table <ref>. When m_1 ≠2, we cannot simply apply majorization to the protected eigenvalues in the range [1/m_1,1-1/m_1] since the sum of these eigenvalues is not fixed. What we can do, however, is vary the sum of the eigenvalues and consider the maximally majorized entropy for each fixed sum. For real numbers x, a and b, we define the function F[x,a,b] = 0 x<a x-a a ≤ x ≤ b b-a b<x. We can parameterize the eigenvalue sum as ∑_i=1^Δ C_i=Δ/m_1+x. Let μ=(1-2/m_1). Then, for a fixed x, the maximal majorization occurs when C_i = 1/m_1+F[x,(i-1)μ,iμ] . The average particle number in the subsystem is given by ⟨ N⟩ = ∑_i=1^Δ C_i + ∑_Δ+1^M C_i, i.e., it is given by a sum of the protected and unprotected eigenvalues. If we assume that we can always obtain the given average particle number while freely varying the sum of the protected eigenvalues, we can then ask the question what the global minimum as a function of x is if we choose the C_i giving maximal majorization for each x according to Eq. (<ref>). An example for Δ=4 and m_1=6 is shown in Fig. <ref>. The local minima in all three entanglement entropies correspond to having all the C_i at the boundaries of the interval [1/m_1,1-1/m_1]. This can be understood by recalling that all three entropy components are concave functions in the { C_i}. The question then becomes for which boundary configuration the entropies become minimal. We note that if C_i=1/m_1 then (1-C_i)=1-1/m_1. According to Eq. (<ref>), we therefore get exactly the same contribution to the entanglement entropy S({C_i}) whether an eigenvalue sits at the lower or upper boundary. Thus, any arrangement of the {C_i} at the two boundaries of the interval leads to the same minimum. For the number entropy S_N, the two cases where all the eigenvalues sit at 1/m_1 or all the eigenvalues sit at 1-1/m_1 will yield the same result and correspond, physically, to counting either particles or holes. If we put the eigenvalues C_1,…,C_Δ-1 at 1/m_1 then S_N is a function of C_Δ with a minimum at 1/m_1. We thus conclude that the global minimum of S_N is obtained if all the C_i sit at 1/m_1 or all the C_i sit at 1-1/m_1. For this case, we can obtain a closed form expression for the probabilities p_n to have n particles in the subsystem. In order to have n particles we need n factors of C_i=1/m_1 and Δ-n factors of 1-C_i=m_1-1/m_1. 
There are Δn possible combinations. Thus p_n = ł(1/m_1)̊^nł(m_1-1/m_1)̊^Δ-nΔn = (m_1-1)^Δ-n/m_1^Δ B_n^Δ where we have again used B_n^Δ=Δn. This allows us to obtain a lower bound for the number entropy S_N for any C_n symmetry and any of the cuts shown in table <ref>. Finally, we consider also the minimum for the configurational entropy S_c. As can be seen in Fig. <ref>, the global minimum does not occur for the same configuration of the C_i's as for the number entropy. Based on the M=2,3 (equivalent to Δ=2,3 in this context) cases analyzed earlier and several additional examples we considered numerically, we hypothesize that the global minimum for S_c is obtained if Δ/2 [(Δ± 1)/2] eigenvalues are placed at 1/m_1 and Δ/2 [(Δ∓ 1)/2] eigenvalues at 1-1/m_1 for Δ even [odd]. While this method does provide lower bounds for the symmetry-resolved entanglement, these bounds are, in general, not optimal. We will demonstrate this next for one particular type of cut. Method 2: Consider the cut A_1/n^1, i.e., m_1=n and the subsystem is the 1/n-th part of the system. Then the state is given by Eq. (<ref>) and the reduced single-particle correlation matrix can be written as C = C(Z_1)+C(Z_2)+…+C(Z_n) where C(Z_r) is the reduced correlation matrix of the state ∏_j=1^Z_r|Z_r,j ⟩. These states are orthogonal and there are Z_r eigenvalues at 1/n. Now we define a vector Z̃⃗̃=(Z̃_1,Z̃_2,…,Z̃_n) which is an ordered version of the invariant Z⃗. That is, the values Z̃_j are a reordering of Z_j such that Z̃_1≥Z̃_2≥…≥Z̃_n. Then we define invariant differences Δ_1=(Z̃_1-Z̃_2), Δ_2=(Z̃_2-Z̃_3), …, Δ_n-1=(Z̃_n-1-Z̃_n), Δ_n=Z̃_n. Next, we define a non-increasing vector x⃗ with Δ_r values at r/n for r=1,2,…,n and the remaining values equal to zero. Then, applying the majorization theorem <cit.> for sums of Hermitian matrices, we find that all sets of possible eigenvalues of C are majorized by x⃗. Thus, we find that the state with minimal S, S_N, and S_c has Δ_r eigenvalues at r/n for r=1,2,…,n. This comes to a total of (Z̃_1 - Z̃_n) eigenvalues which are non-zero. The previous known minimum for S <cit.> is also known to have (Z̃_1 - Z̃_n) non-zero eigenvalues but at a value of 1/n or n-1/n. Thus, by using majorization, we have found in this particular case stronger lower entropy bounds than previously known. In fact, because we consider all the eigenvalues of the correlation matrix and x⃗ majorizes all possible arrangements of these eigenvalues we know that this bound is optimal. For cuts besides the A_1/n^1 cut, this majorization method is not directly applicable. § C_4 SYMMETRY EXAMPLES In this section, we consider two examples for fermionic Gaussian systems with C_4 symmetry. In the first example, we will demonstrate an adiabatic deformation of the Hamiltonian and monitor all aspects of entanglement including the symmetry-resolved components and the single-particle entanglement spectrum. In the second example, we will consider a two-dimensional plaquette model and show that by choosing specific cuts, consistent with the C_4 symmetry, the entanglement entropy can be reduced to the lower bounds found earlier. §.§ Example 1: Adiabatic Deformation First, we will consider an C_4 symmetric example with invariant Z⃗=(1,1,0,0). Applying method 1 from the previous section for a A_1/4^1 cut, we have Δ=1, see table <ref>. Thus, there is one protected eigenvalue in the range [1/4,3/4]. The lower bounds are then obtained by placing this eigenvalue at 1/4. 
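The scan over the eigenvalue sum used in Method 1 can be sketched as follows (illustrative code, not the paper's implementation); the first call reproduces the Δ=4, m_1=6 scan discussed above, while (Δ,m_1)=(1,4) recovers the A_1/4^1 bound just quoted, i.e. a single eigenvalue placed at 1/4:

import numpy as np

def F(x, a, b):
    # the clamp function F[x,a,b] defined in the text above
    return min(max(x - a, 0.0), b - a)

def entropies(C):
    # (S, S_N, S_c) from the single-particle eigenvalues C of a Gaussian state
    C = np.asarray(C, float)
    h = lambda p: -np.sum(p[p > 1e-15] * np.log(p[p > 1e-15]))
    S = h(C) + h(1.0 - C)
    p = np.array([1.0])
    for c in C:                        # number distribution = convolution of Bernoullis
        p = np.convolve(p, [1.0 - c, c])
    S_N = h(p)
    return np.array([S, S_N, S - S_N])

def method1_minimum(Delta, m1, steps=2001):
    # scan the sum of the protected eigenvalues; for each sum keep the maximally
    # majorized configuration C_i = 1/m1 + F[x,(i-1)mu,i*mu]
    mu = 1.0 - 2.0 / m1
    best = np.full(3, np.inf)
    for x in np.linspace(0.0, Delta * mu, steps):
        C = np.array([1.0 / m1 + F(x, i * mu, (i + 1) * mu) for i in range(Delta)])
        best = np.minimum(best, entropies(C))
    return best                        # global minima of (S, S_N, S_c) over the scan

print(method1_minimum(4, 6))
print(method1_minimum(1, 4))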
We will see from the solution of the example that this bound is not optimal. For the A_1/2^2 cut, we have Δ=2 and two eigenvalues protected at 1/2. This bound is optimal. For the A_1/2^1 cut, we have Δ=0 and method 1 places no restrictions on the eigenvalues, i.e., the lower bounds are trivial. Using method 2, we can only consider the A_1/4^1 cut. This method predicts lower bounds for all entanglement components which are obtained from a single eigenvalue at C_1=1/2, which results in optimal bounds. Let us now describe the first example. Consider a system of two stacked 2× 2 plaquettes, i.e., a system consisting of a total of 8 sites. Operators acting on the lower plaquette are denoted by a_L, b_L, c_L, d_L and those acting on the upper plaquette by a_U, b_U, c_U, d_U. We now define the following operators ℓ_j^1 =a_j+b_j+c_j+d_j/2 , ℓ_j^2 =a_j+ b_j-c_j- d_j/2 ℓ_j^3 =a_j-b_j+c_j-d_j/2 , ℓ_j^4 =a_j- b_j-c_j+ d_j/2 for j=L,U. The Ĉ_4 symmetry operator maps a_j → b_j, b_j → c_j, c_j → d_j, d_j → a_j. The Hamiltonian we want to consider is given by Ĥ(θ)=(-ℓ_L^1^†ℓ_L^1 + ℓ_U^1^†ℓ_U^1) +(cos 2θ)(ℓ_L^2^†ℓ_L^2 - ℓ_U^2^†ℓ_U^2) + (sin 2θ) (ℓ_L^2^†ℓ_U^2 + ℓ_U^2^†ℓ_L^2). The 2-particle ground state is given by |Ψ⟩ = ℓ_L^1^† ( ℓ_L^2^†sinθ- ℓ_U^2^†cosθ) |0⟩ where |0⟩ is the vaccuum state and does depend on the parameter θ. The A_1/4^1 cut consists of both a_j lattice points. The A_1/2^2 cut consists of both a_j and c_j lattice points. Lastly, the A_1/2^1 cut consists of both a_j and b_j lattice points. The single-particle entanglement spectrum for each cut is given by A_1/4^1 : C_1 = 1-sinθ/4 , C_2 = 1+sinθ/4 A_1/2^2 : C_1 = 1/2 , C_2 = 1/2 A_1/2^1 : C_1 = 2-√(2)sinθ/4 , C_2 = 2+√(2)sinθ/4. We can now use the formulas (<ref>) to obtain the eigenvalues of the reduced density matrix and the symmetry-resolved entanglement. Using these formulas, we find that all three symmetry-resolved entanglement components are minimized at the spectrum limits. This occurs when θ=π/2 as shown in Fig. <ref>. For the A^1_1/4 cut we see that method 2 indeed gives optimal lower bounds of S=S_N=ln 2 and S_c=0. For the A^2_1/2 cut the two eigenvalues are always fixed at 1/2 and method 1 gives optimal lower bounds which correspond to those values given in Eq. (<ref>). In the A^1_1/2 case method 1 gives only trivial bounds and method 2 is not applicable. We find, however, a very similar picture to the A^1_1/4 cut with the two eigenvalues coupled to each other and non-trivial lower entanglement values obtained for θ=π/2. §.§ Example 2: Selection of the cut As our second example, we will consider again a C_4-symmetric system but this time with invariant Z⃗=(6,3,4,3). Applying method 1 for the A_1/4^1 cut, we have Δ=3, see table <ref>. This predicts that three eigenvalues are protected in the range [1/4,3/4]. The lower bounds for S and S_N are thus obtained for C=(1/4,1/4,1/4) while the lower bound for S_c is obtained for C=(1/4,1/4,3/4). We will see that these bounds are not optimal. For the A_1/2^2 cut, we have Δ=2 and two eigenvalues protected at 1/2. For the A_1/2^1 cut, we have Δ=4 and four eigenvalues protected at 1/2. In the latter two cases the bounds will be optimal. Using method 2, we can improve on the bounds for the A_1/4^1 cut. This method predicts that better lower bounds for all entanglement components are obtained for C=(1/4,1/4,1/2). In this example, we will show that the lower bounds can be reached not only by an adiabatic deformation of the Hamiltonian as in the first example, but also by a deformation of the cut. 
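For Example 1 the quoted spectra can be checked directly; the short sketch below (illustrative only, using the closed-form M=2 expressions given earlier) evaluates the three entanglement components at θ=π/2, where the minima occur:

import numpy as np

def h(p):
    p = np.asarray(p, float)
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

def entropies_two_modes(C1, C2):
    # (S, S_N, S_c) for two single-particle eigenvalues (M=2 case above)
    lam = [(1 - C1) * (1 - C2), C1 * (1 - C2), (1 - C1) * C2, C1 * C2]  # eigenvalues of rho
    p = [lam[0], lam[1] + lam[2], lam[3]]                               # p_0, p_1, p_2
    S, S_N = h(lam), h(p)
    return S, S_N, S - S_N

theta = np.pi / 2
cuts = {
    "A_1/4^1": ((1 - np.sin(theta)) / 4, (1 + np.sin(theta)) / 4),
    "A_1/2^2": (0.5, 0.5),
    "A_1/2^1": ((2 - np.sqrt(2) * np.sin(theta)) / 4, (2 + np.sqrt(2) * np.sin(theta)) / 4),
}
for name, (C1, C2) in cuts.items():
    S, S_N, S_c = entropies_two_modes(C1, C2)
    print(name, round(S, 4), round(S_N, 4), round(S_c, 4))

The A_1/4^1 cut indeed returns S=S_N=ln 2 and S_c=0, while the A_1/2^2 cut returns the maximal values 2ln 2, (3/2)ln 2, and (1/2)ln 2.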
The A_1/m_1^m_2 cuts shown in Fig. <ref> are the most natural cuts but lead to entanglement which is larger than the bounds. If we, however, deform these cuts to B_1/m_1^m_2, also shown in Fig. <ref>, then the single particle entanglement spectra correspond to those used to obtain the lower bounds. The fermionic model we will consider consists of a unit cell with operators a,b,c,d and is given by Ĥ = t_1∑_x,y (a_x,y^† b_x,y + a_x,y^† c_x,y+b_x,y^† d_x,y+c_x,y^† d_x,y) + t_2 ∑_x,y (b_x,y^† a_x+1,y + c_x,y^† a_x,y+1 + d_x,y^† b_x,y+1+d_x,y^† c_x+1,y) + h.c. where x,y enumerate the unit cells. Here we will consider the case of a two-dimensional system with 4× 4 unit cells (plaquettes) with 16 fermions where t_1=0.1 and t_2=1.9. Shown in Fig. <ref> are only the strong bonds t_2 for various cuts. For the two cases A^1_1/2 and A^2_1/2, method 1 predicts optimal lower bounds which correspond to Δ eigenvalues fixed at 1/2. For the B_1/4^1 cut, both method 1 and method 2 provide correct lower bounds for the entanglement entropy components but only method 2 provides the optimal bound corresponding to eigenvalues C={1/4,1/4,1/2}, see Fig. <ref>. § CONCLUSION In a system with particle number conservation, the entanglement entropy can be split into two components: the number entropy and the configurational entropy. The first main result we established in this paper is that while the entanglement entropy is extensive, the symmetry-resolved components are not and rather fulfill the inequalities (<ref>). These inequalities are general and apply to any system, fermionic or bosonic, non-interacting or interacting, so long as particle number conservation is respected. In the rest of the paper, we concentrated on obtaining rigorous results for the symmetry-resolved entanglement components in fermionic Gaussian systems. We introduced two techniques: First, we noticed that each eigenvalue of the correlation matrix can be treated as its own 'independent subsystem', thus allowing us to obtain bounds based on the knowledge of a subset of these eigenvalues together with the inequalities (<ref>). Second, we introduced majorization as a technique to obtain strict bounds on the entanglement components. To do so, one needs to prove first that the entanglement functions are concave in the eigenvalues of the correlation matrix {C_k}. This is obvious and well-known for the entanglement entropy. Here, we have proven using the Shepp-Olkin theorem that the number entropy S_N is also concave in the {C_k}. For the configurational entropy we were not able to establish a full proof that this entropy is concave in the {C_k} as well but, based on the results for small subsystems, we hypothesize that it is. Without further restrictions on the eigenvalues, majorization then allows us to obtain upper bounds for all entanglement components for a given average filling of the subsystem. It is physically even more interesting to apply these techniques to derive non-trivial lower bounds for systems which cannot be adiabatically connected to the atomic limit. This is, for example, the case for Gaussian fermionic systems with symmetry-protected insulating topological phases. As examples, we derived lower bounds for systems with a chiral symmetry as well as for systems with a spatial C_n symmetry. In the latter case we discussed two methods. The first was based on results obtained in Ref. <cit.>.
These results showed that in C_n symmetric systems, a certain number of correlation matrix eigenvalues are restricted to a range [1/m_1,1-1/m_1] where 1/m_1 is the fraction of the total system the subsystem consists of. We could show that the lower bound for the number entropy is then obtained by placing all the eigenvalues at the same boundary while the lower bound for the configurational entropy is obtained by equally distributing the eigenvalues between the two boundary values. While these bounds are always valid, they are not necessarily optimal. We could show, in particular, in our second approach how for a specific cut better and, indeed, optimal bounds can be obtained. We illustrated these results by considering two concrete examples for C_4-symmetric systems. In the first example we considered, we have also seen that lower bounds for the entanglement components can exist even in cases where the methods used here only give trivial bounds. This clearly indicates that while some progress has been achieved here, the problem of giving optimal bounds for symmetry-protected topological phases is far from being fully solved. Another interesting problem is, of course, topological phases in interacting fermionic or bosonic systems. However, then the single-particle correlation matrix is no longer directly related to the reduced density matrix and completely new and different tools need to be developed to establish bounds for the symmetry-resolved entanglement. § ACKNOWLEDGMENT The authors acknowledge support by the Natural Sciences and Engineering Research Council (NSERC, Canada). K.M. acknowledges support by the Vanier Canada Graduate Scholarships Program. J.S. acknowledges by the Deutsche Forschungsgemeinschaft (DFG) via Research Unit FOR 2316. K.M. would like to thank A. Urichuk for helpful discussions. § REFERENCES 10 url<#>1#1urlprefixURL Xavier Xavier J C, Alcaraz F C and Sierra G 2018 Phys. Rev. B 98(4) 041106 <https://link.aps.org/doi/10.1103/PhysRevB.98.041106> CFTCalabrese Turkeshi X, Ruggiero P, Alba V and Calabrese P 2020 Phys. Rev. B 102(1) 014455 <https://link.aps.org/doi/10.1103/PhysRevB.102.014455> Wiseman Wiseman H M and Vaccaro J A 2003 Phys. Rev. Lett. 91(9) 097902 <https://link.aps.org/doi/10.1103/PhysRevLett.91.097902> Greiner Lukin A, Rispoli M, Schittko R, Tai M E, Kaufman A M, Choi S, Khemani V, Léonard J and Greiner M 2019 Science 364 256–260 kiefer2020bounds Kiefer-Emmanouilidis M, Unanyan R, Sirker J and Fleischhauer M 2020 SciPost Physics 8 083 MonkmanSirkerEdge Monkman K and Sirker J 2020 Phys. Rev. Research 2(4) 043191 <https://link.aps.org/doi/10.1103/PhysRevResearch.2.043191> KieferSirker1 Kiefer-Emmanouilidis M, Unanyan R, Fleischhauer M and Sirker J 2020 Phys. Rev. Lett. 124(24) 243601 <https://link.aps.org/doi/10.1103/PhysRevLett.124.243601> KieferSirker2 Kiefer-Emmanouilidis M, Unanyan R, Fleischhauer M and Sirker J 2021 Phys. Rev. B 103(2) 024203 <https://link.aps.org/doi/10.1103/PhysRevB.103.024203> Parez Parez G, Bonsignori R and Calabrese P 2021 J. Stat. Mech. 2021 093102 <https://doi.org/10.1088/1742-5468/ac21d7> Bonsignori Bonsignori R, Ruggiero P and Calabrese P 2019 J. Phys. A: Math. Theor. 52 475302 <https://doi.org/10.1088/1751-8121/ab4b77> Sela Goldstein M and Sela E 2018 Phys. Rev. Lett. 120(20) 200602 <https://link.aps.org/doi/10.1103/PhysRevLett.120.200602> field1 Murciano S, Di Giulio G and Calabrese P 2020 J. High Energy Phys. 2020(8) 073 <https://doi.org/10.1007/JHEP08(2020)073> field2 Horváth D X and Calabrese P 2020 J. High Energy Phys. 
2020(11) 131 <https://doi.org/10.1007/JHEP11(2020)131> field3 Parez G, Bonsignori R and Calabrese P 2021 Phys. Rev. B 103(4) L041104 <https://link.aps.org/doi/10.1103/PhysRevB.103.L041104> field4 Bonsignori R and Calabrese P 2020 J. Phys. A: Theor. 54 015005 <https://doi.org/10.1088/1751-8121/abcc3a> field5 Murciano S, Bonsignori R, and Calabrese P 2021 SciPost Phys. 10 111 <https://scipost.org/10.21468/SciPostPhys.10.5.111> topology1 Garrido A P, Plastino Á R, de Guevara M L L and Apel V M 2022 Annalen der Physik 534(10) 2200201 <https://doi.org/10.1002/andp.202200201> topology2 Oblak B, Regnault N and Estienne B 2022 Phys. Rev. B 105(11) 115131 <https://link.aps.org/doi/10.1103/PhysRevB.105.115131> peschel Peschel I 2003 Journal of Physics A: Mathematical and General 36 L205 <https://iopscience.iop.org/article/10.1088/0305-4470/36/14/101> nielsen2002quantum Nielsen M A and Chuang I 2016 Quantum computation and quantum information (Cambridge Univ. Press) nielsen2001majorization Nielsen M A and Vidal G 2001 Quantum Info. Comput. 1 76-93 <https://dl.acm.org/doi/abs/10.5555/2011326.2011331> SheppOlkin Shepp L and Olkin I 1981 Entropy of the sum of independent bernoulli random variables and of the multinomial distribution Contributions to Probability ed GANI J and ROHATGI V (Academic Press) pp 201–206 <https://www.sciencedirect.com/science/article/pii/B9780122744600500229> Olkin Marshall A W, Olkin I and Arnold B C 1979 Inequalities: theory of majorization and its applications (Springer) <https://link.springer.com/book/10.1007/978-0-387-68276-1> HillionJohnson Hillion E and Johnson O 2017 Bernoulli 23(4B) 3638 MonkmanSirkerSpectrum Monkman K and Sirker J 2022 arXiv:2207.10558 <https://arxiv.org/abs/2207.10558> Bernevig2011 Hughes T L, Prodan E and Bernevig B A 2011 Phys. Rev. B 83(24) 245132 <https://link.aps.org/doi/10.1103/PhysRevB.83.245132> Bernevig2013 Fang C, Gilbert M J and Bernevig B A 2013 Phys. Rev. B 87(3) 035119 <https://link.aps.org/doi/10.1103/PhysRevB.87.035119> Bernevig2014 Alexandradinata A, Dai X and Bernevig B A 2014 Phys. Rev. B 89(15) 155114 <https://link.aps.org/doi/10.1103/PhysRevB.89.155114> C2MonkmanSirker Monkman K and Sirker J 2023 Phys. Rev. B 107(12) 125108 <https://link.aps.org/doi/10.1103/PhysRevB.107.125108> Karamata1 Karamata J 1932 Publ. Math. Univ. Belgrade 1 145–148 <http://eudml.org/doc/254514> Karamata2 Furuichi S, Moradi H R and Zardadi A 2019 Rep. Math. Phys. 84(2) 201–214 <https://doi.org/10.1016/S0034-4877(19)30083-7> Karamata3 Kadelburg Z, Dukić D, Lukić M and Matic I 2005 The Teaching of Mathematics 8 31–45 tenfold1 Ryu S, Schnyder A P, Furusaki A and Ludwig A W 2010 New J. Phys. 12 065010 <https://doi.org/10.1088/1367-2630/12/6/065010> tenfold2 Chiu C K, Teo J C Y, Schnyder A P and Ryu S 2016 Rev. Mod. Phys. 88(3) 035005 <https://link.aps.org/doi/10.1103/RevModPhys.88.035005> fulton1998eigenvalues Fulton W 1997-1998 Seminaire Bourbaki 40 255–269 <https://eudml.org/doc/110247>
http://arxiv.org/abs/2307.05031v1
20230711060739
Compressive single-pixel read-out of single-photon quantum walks on a polymer photonic chip
[ "Aveek Chandra", "Shuin Jian Wu", "Angelina Frank", "James A. Grieve" ]
quant-ph
[ "quant-ph" ]
Compressive single-pixel read-out of single-photon quantum walks on a polymer photonic chip Aveek Chandra, Shuin Jian Wu, Angelina Frank, James A. Grieve Manuscript received 23 May 2023; accepted 26 May 2023. Date of publication 1 June 2023; date of current version 16 June 2023. This work was supported in part by the Ministry of Education (MOE), Singapore, through AcRF Tier 3 under Grant MOE2018-T3-1-005, in part by the National Research Foundation, Singapore, and in part by A*STAR under a CQT Bridging Grant. (Corresponding author: Aveek Chandra) Aveek Chandra, Shuin Jian Wu, and Angelina Frank are with the Centre for Quantum Technologies, National University of Singapore, Singapore 117543 (e-mail: [email protected]; [email protected]; [email protected]). James A. Grieve is with the Quantum Research Centre, Technology Innovation Institute, Abu Dhabi 9639, UAE, and also with the Centre for Quantum Technologies, National University of Singapore, Singapore 117543 (e-mail: [email protected]). Digital Object Identifier 10.1109/JPHOT.2023.3281830 August 12, 2023 ============================================================================================================================================== Quantum photonic devices operating in the single photon regime require the detection and characterization of quantum states of light. Chip-scale, waveguide-based devices are a key enabling technology for increasing the scale and complexity of such systems. Collecting single photons from multiple outputs at the end-face of such a chip is a core task that is frequently non-trivial, especially when output ports are densely spaced. We demonstrate a novel, inexpensive method to efficiently image and route individual output modes of a polymer photonic chip, where single photons undergo a quantum walk. The method makes use of single-pixel imaging (SPI) with a digital micromirror device (DMD). By implementing a series of masks on the DMD and collecting the reflected signal into single-photon detectors, the spatial distribution of the single photons can be reconstructed with high accuracy. We also demonstrate the feasibility of optimization strategies based on compressive sensing. Single-pixel imaging (SPI), polymer photonic chip, photonic quantum walk (QW), compressed sensing. § INTRODUCTION The integration of quantum photonic circuits into chip-scale devices (QPICs) <cit.> is a key enabler of increasingly multifunctional and reconfigurable systems. In a photonic chip, light is confined in waveguides, propagating as guided optical modes.
Engineering efforts over the last two decades have realized a diverse array of components such as on-chip couplers, interferometers, and resonators. In a quantum photonic device, these components are adopted and combined to implement gate-based devices and devices based on coherent quantum walks. In quantum devices, it is frequently of interest to measure photon correlations at the output waveguide ports of the chip, to test the device and verify the desired outcome. This requires that light emitted from each output waveguide mode be collected and routed to single photon detectors, for example avalanche photodiodes (APDs) or superconducting nanowire single-photon detectors (SNSPDs)<cit.>. While it is possible to couple each output to a discrete detector channel, this may be tedious, and quickly becomes unfeasible when the number of output ports is large, or the ports are densely spaced. In this work, we demonstrate an inexpensive and versatile method to image and map the output modes of a multi-port chip by using a single pixel imaging (SPI) strategy <cit.>. In the context of remote sensing, SPI with heralded single photons<cit.> has been shown to be noise-robust and loss-tolerant in detecting signal that would otherwise be obscured by strong background illumination. In our work, we investigate a heralded single-photon quantum walk on a polymer chip by single-pixel imaging of the intensity distribution at the output facet of the device. Our method is low-cost and works both for imaging as well as measuring photon correlations among the output modes. The use of a click-based detector with high timing resolution (timing jitter ∼ 300 ps) enables us to distinguish pairs of photons from other events (e.g. detector dark counts, stray light, uncorrelated photons). Additionally, we can extend our method to image a two-dimensional geometry of distributed optical modes in a multi-layer or three-dimensional architecture<cit.>. Finally, our approach can be simply adapted to image wavelengths spanning from visible to infrared, and a wide range of light intensities (from single photons to bright illumination), as long as an appropriate detector is operated below saturation. While in principle multi-pixel devices such as EMCCDs (with a combination of short acquisition time and nanosecond triggering) could be deployed to similar effect <cit.>, these devices are comparatively expensive and not well suited to identifying correlated events. Quantum walks (QWs) have become an integral tool for development of algorithms in quantum simulation<cit.> and quantum computation<cit.> as well as study of high-dimensional and reconfigurable graphs, disorder and topological phenomena in photonics<cit.>. Recently, topologically protected modes have been demonstrated using waveguide arrays on a polymer chip<cit.>. Our method could be of use in the implementation of quantum Boson sampling<cit.> and quantum neural networks<cit.>, as well as opening avenues to explore complex topological geometries<cit.> in multi-dimensional, multi-port waveguide systems. 
§ EXPERIMENTAL METHODS Light propagation in a waveguide array with nearest-neighbor coupling can be understood as continuous-time, discrete-space quantum walk on a 1D graph, with its Hamiltonian given by Ĥ = c∑_i=1^∞β_i a_i^† a_i + κ_i,i-1a_i-1^† a_i + κ_i,i+1a_i+1^† a_i, where β_i is the propagation constant, a_i^† and a_i are the Bosonic creation and annihilation operators for photons in waveguide i, κ_i,i+1=κ_i,i-1=γ are coupling coefficients that describe coupling between adjacent waveguides and γ and β_i are constant for uniform array. For a single particle in waveguide i, a_i^†|0⟩ =|1⟩ at initial time t_0=0 hopping to an adjacent waveguide j, a_j^†|0⟩ =|1⟩ at time t, its transition amplitude is given by unitary evolution U_i,j(t) =⟨1|_i exp(-iĤt)|1⟩_j. The single-particle transition probability, p_i,j(t)=|U_i,j(t)|^2 describes the photon intensity distribution across the waveguide array for a single-waveguide input <cit.>. Interestingly, this light intensity distribution can be shown to be the same irrespective of whether a single-photon state, a Fock state |n⟩ or a coherent state is injected at the single-waveguide input<cit.>. At the heart of our method is a DMD, comprising a rectangular grid of micromirrors (called `pixels'), with the ability to orient each pixel to an `ON' or `OFF' state. In our case (model DLP LightCrafter^TM from Texas Instruments), 415872 pixels are arranged in a 608 (columns) × 684 (rows) diamond pattern with each pixel having a diameter of 7.64 μm and a pitch (spacing) of 10.8 μm. The experimental setup is shown in Fig. <ref> and the details can be found in Appendix <ref>. Photon pairs (frequency-degenerate with center wavelength at 810 nm) are produced via a type-0 spontaneous parametric down-conversion (SPDC) process by using a 10 mm-long periodically poled potassium titanyl phosphate(ppKTP) crystal, pumped by a CW laser at 405 nm. The pair is split by a wedge mirror and are coupled into SM fibers, as described in <cit.>. One photon (`idler') from the pair makes its way to the fiber-coupled APD 1 while the other one (`signal') is injected into a polymer waveguide array via a lensed fiber. The waveguide device is fabricated in-house on a polymer platform as described in Appendix <ref> and in ref.<cit.>. The polarization of both photons can be adjusted using waveplates positioned between the wedge mirror and fiber couplers. Light emerging from array is collected by an infinity-corrected microscope objective (magnification 10X), and imaged onto the active area of the DMD. Light reflected from the `ON' pixels of the DMD is directed and focused by lenses onto a free-space APD 2, which has an active area of diameter 180 μm. Detection events from the APDs are encoded as electronic pulses, and routed to an AND gate, allowing us to accumulate “coincident detection events” in a counter. The photon pair source produces about 2 million pairs/s per mW of pump power under optimal phase-matching conditions, at crystal temperature 25.7^∘C. The pairs/singles ratio (“heralding efficiency”) is maintained at 0.25. The photon pair source is at first characterized by sending both idler and signal photons to fiber-coupled APDs. Once it has been optimally calibrated, the fiber-coupled APD in the signal arm is disconnected and the signal photon is redirected to the waveguide array chip via a lensed fiber. In the experiment, 250 μ W of pump power is used to produce 5×10^5 photon pairs per second at the source. 
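Before turning to the detected rates, we note that the single-photon intensity distribution p_i,j(t) defined above can be sketched with a few lines of code (illustrative only; the number of waveguides and the dimensionless coupling-length product below are chosen for clarity and are not the device parameters):

import numpy as np
from scipy.linalg import expm

n_wg = 21                                   # number of waveguides (illustrative)
gz = 3.0                                    # dimensionless coupling x length, illustrative
H = np.eye(n_wg, k=1) + np.eye(n_wg, k=-1)  # nearest-neighbour coupling; a uniform beta
                                            # only adds a global phase and is omitted
U = expm(-1j * gz * H)                      # evolution over the full propagation length
p = np.abs(U[:, n_wg // 2]) ** 2            # single-photon input in the central guide
print(np.round(p, 3), p.sum())              # symmetric, double-peaked "ballistic" profile

The same distribution is obtained for a coherent-state input, as noted above, which is why the CCD image of bright light can be compared directly with the single-photon reconstruction.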
Owing to 2% overall transmission, single-channel rates of S1=1.9 million events per second on the herald (idler) arm and rates of S2=4500 events per second on the signal arm are observed. The coincident event channel measures 1000 coincidences per second between the signal and the idler arms. With a coincidence window of approximately 16 ns (constrained by the width of the digitized APD signal), the rate of accidental or background coincidences can be estimated to be ∼137 per second, resulting in a signal-to-noise ratio above 6. All event rates reported in this letter, unless stated otherwise, are uncorrected, with detection losses or inefficiencies included by default. The overall system transmission, i.e. the probability of a photon inserted into the chip finally impinging onto the detector, is ∼2%, representing 17 dB of loss. This figure is dominated by two main sources: insertion loss from the lensed fiber to the waveguide array (approximately -10 dB, i.e. about 90% loss) and diffractive losses from the DMD (approximately -6 dB, i.e. 75-80% loss). While our current experiment is constrained by these losses, there exist clear optimizations which could largely mitigate both, for example the adoption of tapered edge-couplers in our waveguide devices. In the experiment, we are concerned with imaging 13 waveguide modes at the end face of our photonic chip. Light from these optical modes falls on a small section of the DMD, a region corresponding to 64×16=1024 pixels. For our system magnification, each mode can be addressed by a few (∼4) DMD pixels. We refer to these groupings as "superpixels", which may be switched between ON and OFF states to effectively mask- or gate-out individual waveguide modes. Before this strategy can be deployed, it is necessary to gain precise knowledge of the location of the waveguide modes in DMD pixel coordinates. This necessitates the reconstruction of the optical intensity profile at the DMD, which we perform using imaging techniques borrowed from the single pixel camera literature <cit.>. In single pixel imaging (SPI), spatial information about the image is reconstructed using a single "bucket" photo-detector and a large number of "mask" elements placed in a conjugate image plane between the detector and the image. The strategy is particularly useful where a specialized sensor is needed, and a large array of such sensors is impractical. As measurements must be taken in "series", acquisition times may be long in comparison to multi-pixel approaches, and scale linearly with the active area that must be imaged. In our experiment, an image area corresponding to 64×16=1024 pixels has to be reconstructed using 1024 masks in total, with 2048 measurements required for full reconstruction (we must measure the signal from both positive and negative masks). At the single-photon level, acquisition time is constrained by signal-to-noise considerations – for our devices 1 second gives acceptable performance. In total, the acquisition time for a full reconstruction is approximately 40 minutes. The CS-SPI approach can be expressed mathematically as follows. For a fully vectorized signal x ∈ ℝ^n×1 of interest, we assume an orthonormal basis Ψ ∈ ℝ^n× n where the signal has sparse vector representation given by s ∈ ℝ^n×1.
This can be expressed as: x=Ψ s Now, if the signal is sampled by a sensing matrix Φ (Φ ∈ ℝ^m× n), then we have y=Φ x = ΦΨ s = Θ s where y is the vector obtained by projecting the sampled signal onto the sensing basis Φ. Here, Θ = ΦΨ is an m× n matrix, referred to as the reconstruction (or measurement) matrix. It works as follows: using the measured values y from the single-pixel detector and the known measurement matrix Θ, one can solve Eq. <ref> for the sparse vector s; the original image x is then recovered via the inverse transform Eq. <ref>. To expedite this process and reduce the required acquisition time we implement compressive single-pixel imaging (CS-SPI), where only the first M out of N rows of an optimally ordered set of masks (for example the well-known Hadamard basis) are used for image reconstruction. Though the acquisition is not "fast" by imaging standards, the image plane is relatively static (affected only by alignment drift). In general, we found it quite feasible to capture data over several hours, and this could be extended further by engineering a more stable fiber-to-chip coupling. While CS-SPI has been explored in the literature for natural images, this is to our knowledge the first demonstration of CS-SPI in the context of read-out of a photonic device. In this work we investigate two orderings: the Cake-cutting <cit.> and Russian Dolls <cit.> schemes, with the Cake-cutting strategy detailed below (the Russian Dolls ordering can be considered to be the same, with additional steps detailed in Appendix <ref>). * Rows of the Hadamard matrix H_1024 are reshaped into 64×16 matrices in order to facilitate use as binary masks. * The Cake-cutting algorithm identifies and counts connected regions in a mask (i.e. a continuous block of either 1 or -1). * Masks are assigned a rank based on the number of constituent blocks and sorted in ascending order. See Appendix Fig. <ref> for an illustration. Intuitively, the Cake-cutting "rank" value can be thought of as approximately proportional to the spatial frequencies associated with each mask <cit.>. Consequently, this represents an ordering in which the lower frequencies are sampled first, on the assumption that they contribute significantly to the signal compared to higher frequencies. We perform SPI of our waveguide chips as follows: for each mask implemented on the DMD, single and coincident event rates are recorded with 1 s integration time. The order in which the masks appear is determined a priori by the algorithm mentioned above. The image is reconstructed progressively in real-time (using data collected so far) and recorded as a function of different reconstruction (compression) ratios (i.e. fraction of masks measured). We make use of the total variation minimization by augmented Lagrangian and alternating direction algorithms, abbreviated as 'TVAL3' <cit.>, an open source MATLAB package <cit.> from Rice University that has been developed for CS-SPI. The algorithm is commonly utilized in the SPI community for its speed of execution and is known to obtain excellent image reconstruction with low mean squared error (MSE). § RESULTS AND DISCUSSION Single pixel imaging of the mode structure of a uniform waveguide array is summarized in Fig. <ref>. We use a lensed fiber to inject light from an 802 nm laser source into the central waveguide (index 0), as shown in Fig. <ref>(a).
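The Cake-cutting ordering summarized above can be sketched compactly (illustrative code, not the authors' implementation; a small 8×8 reshaping of a 64-element Hadamard basis is used here instead of the 64×16 masks of H_1024):

import numpy as np
from scipy.linalg import hadamard
from scipy.ndimage import label

n, shape = 64, (8, 8)
H = hadamard(n)

def block_count(row):
    # rank of a mask = number of connected +1 regions plus connected -1 regions
    mask = row.reshape(shape)
    n_pos = label(mask == 1)[1]
    n_neg = label(mask == -1)[1]
    return n_pos + n_neg

order = sorted(range(n), key=lambda i: block_count(H[i]))
masks = H[order]                      # masks sorted from low to high "spatial frequency"
print([block_count(m) for m in masks[:8]])

A compressive reconstruction then simply truncates this ordered mask set to the first M rows before solving for the image.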
The waveguides support both TE and TM spatial modes at zero order, and we ensure operation in the TE regime throughout this work. The nearest-neighbour coupling coefficient γ is estimated to be 0.0085 mm^-1 for our array, with 3 μm waveguide separation chosen to enable us to resolve individual modes at the output facet. With bright light used as an input, the distribution of optical modes at the output can be directly visualized using a standard silicon CCD as shown in Fig. <ref>(b). Constructive and destructive interference, characteristic of the quantum walk of a coherent-state input, is evident in the symmetric distribution. For a heralded single-photon input, the full reconstruction using single-pixel imaging is shown in Fig. <ref>(c). The reconstructed images of modes are visible in a rectangular grid of 64 × 16 pixels. Also shown is the extracted intensity profile based on summation of intensities by column for the corresponding image. By fitting this data to a weighted sum of Gaussian functions, we extract a set of intensities I for all modes, giving us the mode spectrum. Eq. <ref> defines the Mean Squared Error (MSE) metric that we use to quantify similarity between two mode spectra represented by I_orig (`original' dataset) and I_recon (`reconstructed' dataset): MSE = 1/N∑_i=1^N (I_recon,i - I_orig,i)^2, where N refers to the number of intensity values compared (i.e. the number of modes), 13 for this chip. An MSE of zero would imply identical intensity profiles. The MSE can be used to compare reconstructed spectra with data captured using a multi-pixel device, assuming identical resolution and normalization. As the single-photon quantum walk is the same as the quantum walk of a coherent state, we compare the fully reconstructed single-photon QW shown in Fig. <ref>(b) with the camera-captured QW of a coherent state Fig. <ref>(c). The computed MSE is 0.02, and we believe some of this discrepancy is related to spectral characteristics of the different light sources used. Fig. <ref>(d) is the reconstruction of the same heralded single-photon quantum walk via raster scan of superpixels, in other words, switching 'ON' superpixels one-by-one to gate out individual waveguide modes. The data has been recorded the same way, except now the integration time of the APDs is set to 10 s for an enhanced signal to noise ratio. The datasets from Fig. <ref>(c) and Fig. <ref>(d), when compared, agree fairly well with a computed MSE of 0.02. Though the `superpixel raster scan' has been used here to verify the result obtained from the single-pixel imaging method, it is clear that a reasonably high resolution reconstruction is a pre-requisite for this approach, in order to properly define the superpixel locations. Fig. <ref> shows the results of compressive single-pixel imaging (CS-SPI) using the Cake-cutting and the Russian Dolls orderings at various reconstruction ratios: (a) 100%, (b) 50%, (c) 25% and (d) 12.5%. The data derived from partial reconstructions is compared quantitatively via the MSE, contrasting the partial reconstruction with the spectrum obtained using the full dataset. MSE is plotted against reconstruction ratio (as a percentage) in 5% increments (approximately 51 masks) in Fig. <ref>, for both sorting strategies. At lower reconstruction (compression) ratios, the Cake-cutting order clearly outperforms the Russian Dolls order, with the MSE of the Cake-cutting case remaining below 0.03 even at 25% reconstruction.
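The mode-spectrum extraction and the MSE comparison described above can be sketched as follows (illustrative only: the mode positions, Gaussian width and test profile are invented for the example; in the experiment the centers come from the reconstructed image itself):

import numpy as np
from scipy.optimize import curve_fit

n_modes, x = 13, np.arange(64, dtype=float)          # 64 reconstructed columns
centers = np.linspace(8, 56, n_modes)                # assumed mode positions (pixels)

def profile(x, *amps, width=1.5):
    # weighted sum of Gaussians with fixed centers and width; amplitudes are fitted
    return sum(a * np.exp(-(x - c) ** 2 / (2 * width ** 2))
               for a, c in zip(amps, centers))

# fake "reconstructed" column sums for demonstration only
true_amps = np.abs(np.sin(np.arange(1, n_modes + 1)))
y = profile(x, *true_amps) + 0.01 * np.random.default_rng(0).normal(size=x.size)

fit_amps, _ = curve_fit(profile, x, y, p0=np.ones(n_modes))
I_recon = fit_amps / fit_amps.sum()                  # normalised mode spectrum
I_orig = true_amps / true_amps.sum()
print(np.mean((I_recon - I_orig) ** 2))              # MSE metric defined above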
At higher reconstruction ratios (50% and above), the basis orderings perform very similarly, with high fidelity (approximately zero MSE). We believe this is mainly due to the relatively large area of the optical modes (which are imaged onto multiple DMD pixels) and the relatively sparse information content of the reconstructed images. The second half (latter 50%) of the patterns in the sorted Hadamard set are generally associated with “high spatial frequencies”, with a corresponding high block number (see Fig. <ref>. This implies that their use would increase sensitivity to information in the image that has a correspondingly fine structure. Since our modes are relatively large, there is not much additional information to be gained by sampling these patterns. We expect the compressive sensing strategy to be particularly compelling where there are large number of output ports filling a larger region on the DMD. A 50% compression reduces the acquisition time by 20 minutes per device in our case, and this reduction in time would scale roughly linearly with the number of outputs on the chip. In principle, raw acquisition time can be improved by reducing the image magnification factor (such that the modes fall on fewer DMD pixels). This would speed up data gathering, but it might also be expected to reduce the degree of compression possible (i.e. it might require more than half the masks to be sampled). It would also be expected to increase the sensitivity of the read-out to optical alignment, so an overall saving in experimental time cannot be assumed. Fortunately in our case, the fiber-chip mechanical system is sufficiently stable to permit approximately two hours of data acquisition in a fully automated process. In addition to the reconstruction of single-photon quantum walks, we have experimented with coupling two-photon states into the waveguide array (see Appendix <ref> for details). While system losses did not permit the observation of non-classical correlations, we believe that this is a promising area of future research. There are several areas in which the losses could be reduced, including adopting inverse-taper edge couplers on the polymer platform. Improved transmission will facilitate single-pixel imaging of two-photon quantum walk and measurement of two-photon correlations not only in a simple one-dimensional array like ours but in complex, two-dimensional geometries. § CONCLUSION In conclusion, we have demonstrated the effective use of single-pixel imaging techniques in the read-out of weak optical signals at the end-facet of a waveguide chip. We further employ compressive sensing strategies to reduce the acquisition time. Although system losses currently restrict the technique, we believe that the scheme can be applied to the study of non-classical two-photon states, particularly if low-noise detectors (such as superconducting nanowire devices <cit.>) are available. While we have chosen to focus on a one-dimensional uniform waveguide array, this strategy is clearly applicable for arbitrary waveguide geometries, for example the two-dimensional output facets of direct-written chips <cit.>. For QPIC devices with a large number of output modes, the reduction in acquisition time afforded by the compressive sensing (CS-SPI) approach will be particularly important. Finally, our method allows for addressing individual output modes by definition of “superpixels”, which allows the system to be dynamically reconfigured for single-output measurements. 
It is our hope that this powerful new application of single pixel imaging can open up new opportunities in chip-scale quantum devices. § ACKNOWLEDGEMENT The authors would like to thank Alexander Ling and his team for all the support and fruitful discussions. Filip Auksztol, Paul Thrane and Adithyan Radhakrishnan contributed to the early stages of the project. We thank Mohamed Riadh Rebhi, Isa Ahmadalidokht and David Phillips for discussions too. All experimental work was carried out at the Centre for Quantum Technologies, National University of Singapore. §.§ Methods Photon pair source - Our photon pair source is constructed using the geometry described in ref. <cit.>. Degenerate photon pairs are produced via Type 0 spontaneous parametric downconversion (SPDC) in a 10 mm periodically poled potassium titanyl phosphate crystal (PPKTP, Raicol Crystals Ltd.) with poling period 3.45 μ m in a single pass configuration (see Fig. <ref>, dashed box). 405 nm light from a stabilized laser diode (ONDAX) is cleaned via a fluorescence bandpass filter and focused to a 120 μm waist at the center of the PPKTP crystal. A long pass filter (at 736 nm) and dichroic mirror placed after the crystal remove residual pump light. Photon pairs are produced degenerate around 810 nm with typical 20 nm bandwidth, with identical polarization. Pairs are separated into two paths using a wedge mirror, with each path coupled into a single-mode fiber. Quarter-wave and half-wave plates in each path enable independent polarization control. To reconstruct a heralded single-photon quantum walk, one photon (`idler') is directly connected to a fiber-coupled APD, while the other (`signal') is coupled into the waveguide array by means of a lensed fiber (OZ Optics 630-HP). For the signal photon, polarization is set to be horizontal at the waveguide array. Before performing the single-photon experiment, light from a laser diode is coupled into the waveguide array to facilitate fiber-to-chip alignment. A beamsplitter in the optical path (shown as a dashed square in Fig. <ref>) is inserted to image part of the light onto a CCD camera. With the CCD sensor and DMD located in conjugate focal planes, the position of the chip and microscope objective can be fine-tuned to achieve focusing of light on the DMD as shown in Fig. <ref>(b). For single-pixel imaging (SPI), the beam splitter is removed from the path and all light is directed to either a PIN photodiode (for bright light) or APD (for single photons). Polymer waveguide - The waveguide chip is produced in-house following the processes described in <cit.>. It is composed of two polymers: Gelest OE50 (Mitsubishi Chemicals) and Sylgard 184 (Dow) with refractive indices of 1.50 and 1.41 respectively. Gelest OE50 (the high refractive index layer) is spin-coated to a thickness of 1 μm onto a “mold” written into a photoresist via UV-lithography. A matrix of Sylgard 184 (low refractive index layer) with a thickness of ∼2 millimeters is then applied on top and cured at 70 ^∘ C for one hour. The structure is removed from the mold by immersion in dimethyl sulfoxide for several hours. The refractive index difference and waveguide dimensions are designed to achieve single-mode guiding. When working with evanescently coupled arrays, light is edge-coupled into a “feeder” waveguide, which extends from the array to the chip edge as shown in Fig. <ref>(a). The array comprises of uniform straight waveguides with 3 μm pitch and length of approximately 9 mm. 
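The quantum-walk dynamics in such a uniform array follow from a standard coupled-mode (tight-binding) model, and a minimal propagation sketch is given below. It assumes nearest-neighbour coupling only and neglects loss; the coupling coefficient and length are taken from the text purely for illustration, and the printed distribution is not claimed to reproduce the measured one.

import numpy as np
from scipy.linalg import expm

n_wg = 13         # number of waveguides in the array
gamma = 0.0085    # nearest-neighbour coupling coefficient, mm^-1 (value quoted in the text)
length = 9.0      # propagation length, mm

# Coupled-mode Hamiltonian: tridiagonal with nearest-neighbour coupling gamma.
H_cm = gamma * (np.eye(n_wg, k=1) + np.eye(n_wg, k=-1))

# Single-photon (or coherent-state) input into the central waveguide.
psi0 = np.zeros(n_wg, dtype=complex)
psi0[n_wg // 2] = 1.0

# Continuous-time quantum walk: psi(z) = exp(-i H z) psi(0).
psi_out = expm(-1j * H_cm * length) @ psi0
intensities = np.abs(psi_out) ** 2
print(np.round(intensities / intensities.sum(), 4))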
Light in the waveguides propagates as a fundamental TE mode with coupling coefficient γ estimated to be 0.0085 mm^-1. §.§ Ordering of Hadamard basis In our experiment, for full reconstruction of the 16x64 pixel image we employed 1024 masks in total. To illustrate how different basis orderings work, an example is helpful. Beginning with a 16× 16 Hadamard matrix, we choose the lowest order matrix of rank 8×2 – see Fig. <ref>. Each row of the matrix is reshaped into a 8×2 array with the connected regions (or blocks) of 1 and -1 counted and shown alongside. Elements containing 1 and -1 are shown in white and black respectively for ease of visualization. The complete set of 16 basis masks along with the number of connected blocks are shown for the natural order in Fig. <ref>(b), the Cake-cutting order in Fig. <ref>(c) and the Russian Dolls order in Fig. <ref>(d). As mentioned previously, the Cake-cutting order is based on sorting the arrays by the number of blocks, in ascending order. Fig. <ref>(d) shows the Russian Dolls order, where the masks are split into H_4 and H_16 groups, with each group subsequently sorted based on number of blocks. Here, all Hadamard matrices, from lowest rank to the given rank are grouped in a library of basis patterns. For a matrix H_2^2n, there are lower order matrices H_2^2n-1, H_2^2n-2 and so on. Each mask is compared with masks in this library and ones that are identified with that of the lowest ranked matrix will appear first, followed by masks that are identified with matrix of next higher order and so on. Masks not associated with any lower order matrix appear at the end. Finally, the masks in each of these ordered groups of matrices are sorted in ascending order. §.§ Two photon quantum walks We explored the possibility of injecting two-photon states into adjacent waveguides via lensed fibers, as shown in Fig. <ref>(a). In order to address two waveguides independently by optical fiber, it was necessary to introduce a “fan out” region to the input side of the chip. Two central waveguides, indexed 0 and 1, are curved adiabatically over 8 mm until they achieve a pitch of about 221 μ m, that is compatible with a pair of optical fibers held by a v-groove. Two lensed fibers are used to inject a pair of frequency-degenerate photons with horizontal polarization (i.e. the photons are indistinguishable). The output of the chip is again imaged using the SPI system, this time with the inclusion of a non-polarizing beamsplitter allowing us to route the masked output onto two independent avalanche photodiodes, after which the experiment proceeds as in the heralded case (Fig. <ref>), with coincident detection events accumulated in electronic counters. Fig. <ref>(b) shows the reconstruction of modes from these two-photon events. The observed intensity distribution is described by the convolution of two individual single-photon quantum walks and a two-photon quantum walk. Due to system losses (estimated at ∼40 dB for the successful propagation of two photons), accidental coincidences (noise) are on the same magnitude as `true' two-photon events. A signal-to-noise ratio slightly above unity is neither sufficient to observe the two-photon quantum walk, nor does it allow for measuring non-classical two-photon correlations by pairwise selection of superpixels. As discussed in the Experimental Methods, system losses could be reduced by using e.g. tapered edge-couplers on the waveguide chip. 
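The block-counting logic behind these orderings is straightforward to script. The sketch below reproduces the Cake-cutting sort for the small 8 × 2 example (16 masks) by counting connected regions of +1 and of -1 in each reshaped Hadamard row; 4-connectivity is assumed for what counts as a block, and applying the same count within each lower-order Hadamard group would give the Russian Dolls order.

import numpy as np
from scipy.linalg import hadamard
from scipy.ndimage import label

def block_count(mask2d):
    # Number of connected regions ('blocks') of +1 and of -1 in a reshaped mask.
    _, n_pos = label(mask2d > 0)
    _, n_neg = label(mask2d < 0)
    return n_pos + n_neg

def cake_cutting_order(n_rows, n_cols):
    # Indices of the natural-order Hadamard masks sorted by ascending block count.
    N = n_rows * n_cols
    masks = hadamard(N).reshape(N, n_rows, n_cols)
    counts = [block_count(m) for m in masks]
    return np.argsort(counts, kind='stable')

order = cake_cutting_order(8, 2)    # the 16-mask example of this appendix
print(order)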
One can also redesign the post-DMD portion of the optical path to reduce “field stop” effects, however this was not feasible using our existing hardware. While it is clear that this method could be a powerful tool for investigating such two-photon systems, further refinements to reduce this loss will be required. IEEEtran 99 Wang2020J. Wang, F. Sciarrino, A. Laing & M. Thompson, Integrated photonic quantum technologies, Nature Photonics, 14, 273-284 (2020). Moody2021G. Moody, Roadmap on integrated quantum photonics, J. Phys. Photonics, 4 (2022). Crespi2013A. Crespi, R. Osellame, R. Ramponi, V. Giovannetti, R. Fazio, L. Sansoni, F. De Nicola, F. Sciarrino & P. Mataloni, Anderson localization of entangled photons in an integrated quantum walk, Nature Photonics, 7, 322-328 (2013). Adcock2019J. Adcock, C. Vigliar, R. Santagati, J. Silverstone & M. Thompson, Programmable four-photon graph states on a silicon chip, Nature Communications 2019 10:1, 10, 1-6 (2019). Johnson2022S. Johnson, A. McMillan, C. Torre, S. Frick, J. Rarity & M. Padgett, Single-pixel imaging with heralded single photons, Optics Continuum, Vol. 1, Issue 4, Pp. 826-833, 1, 826-833 (2022). Duarte2008M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly & R. Baraniuk, Single-pixel imaging via compressive sampling, IEEE Signal Processing Magazine, 25, 83-91 (2008). Kim2021J. Kim, T. Jeong, S. Lee, D. Kim, D. Kim, S. Lee, Y. Ihn, Z. Kim & Y. Jo, Heralded single-pixel imaging with high loss-resistance and noise-robustness, Applied Physics Letters, 119, 244002 (2021). Alberti2017A. Alberti, C. Robens, O. Daigle, C. Carignan, J. Gach, E. Bolduc, D. Faccio & J. Leach, Acquisition of multiple photon pairs with an EMCCD camera., Journal Of Optics, 19, 054006 (2017). Lustig2022E. Lustig, L. Maczewsky, J. Beck, T. Biesenthal, M. Heinrich, Z. Yang, Y. Plotnik, A. Szameit & M. Segev, Photonic topological insulator induced by a dislocation in three dimensions, Nature, 609, 931-935 (2022). Aspuru-Guzik2012A. Aspuru-Guzik & P. Walther, Photonic quantum simulators Nature Physics, 8, 285-291 (2012). Portugal2013R. Portugal, Quantum Walks and Search Algorithms, Springer, 2nd Edition, (2018). Childs2009A. Childs, Universal computation by quantum walk, Phys. Rev. Lett., 102, 180501 (2009). Kitagawa2010T. Kitagawa, M. Rudner, E. Berg & E. Demler, Exploring topological phases with quantum walks, Physical Review A - Atomic, Molecular, And Optical Physics, 82, 033429 (2010). Kitagawa2012T. Kitagawa, M. Broome, A. Fedrizzi, M. Rudner, E. Berg, I. Kassal, A. Aspuru-Guzik, E. Demler & A. White, Observation of topologically protected bound states in photonic quantum walks, Nature Communications, 3 (2012). Frank2022A. Frank, D. Leykam, D. Smirnova, D. Angelakis & A. Ling, Boosting topological zero modes using elastomer waveguide arrays, Optics Letters, Vol. 47, Issue 18, Pp. 4620-4623, 47, 4620-4623 (2022). Spring2013J. Spring, B. Metcalf, P. Humphreys, W. Kolthammer, X. Jin, M. Barbieri, A. Datta, N. Thomas-Peter, N. Langford, D. Kundys, J. Gates, B. Smith, P. Smith & I. Walmsley, I. Boson sampling on a photonic chip, Science, 339, 798-801 (2013). Zhao2021R. Zhao & S. Wang, A review of Quantum Neural Networks: Methods, Models, Dilemma; arxiv.org/abs/2109.01840. Kondakci2017H. E. Kondakci, A. Abouraddy & B. Saleh, Lattice topology dictates photon statistics, Scientific Reports 2017 7:1, 7, 1-10 (2017). Tang2018H. Tang, X. Lin, Z. Feng, J. Chen, J. Gao, K. Sun, C. Wang, P. Lai, X. Xu, Y. Wang, L. Qiao, A. Yang & X. 
Jin, Experimental two-dimensional quantum walk on a photonic chip, Science Advances, 4 (2018). Meinecke2013J. Meinecke, K. Poulios, A. Politi, J. Matthews, A. Peruzzo, N. Ismail, K. Wörhoff, J. O'Brien & M. Thompson, Coherent time evolution and boundary conditions of two-photon quantum walks in waveguide arrays, Physical Review A - Atomic, Molecular, And Optical Physics, 88, 2-7 (2013). Bromberg2009Y. Bromberg, Y. Lahini, R. Morandotti & Y. Silberberg, Quantum and classical correlations in waveguide lattices, Physical Review Letters, 102, 1-4 (2009). Perumangatt2020C. Perumangatt, A. Lohrmann & A. Ling, Experimental conversion of position correlation into polarization entanglement, Physical Review A, 102 pp. 12404 (2020). Grieve2017aJ. Grieve, K. Ng, M. Rodrigues, J. Viana-Gomes & A. Ling, Mechanically tunable integrated beamsplitters on a flexible polymer platform, Applied Physics Letters, 111 (2017). Frank2022aA. Frank, J. Zhou, J. Grieve, I. Verzhbitskiy, J. Viana-Gomes, L. Loh, M. Schmid, K. Watanabe, T. Taniguchi, G. Eda & A. Ling, Mode-Center Placement of Monolayer WS2 in a Photonic Polymer Waveguide, Advanced Optical Materials, 10, 2101684 (2022). Edgar2019M. Edgar, G. Gibson & M. Padgett, Principles and prospects for single-pixel imaging, Nature Photonics, 13, 13-20 (2019). Gibson2020aG. Gibson, S. Johnson & M. Padgett, Single-pixel imaging 12 years on: a review, Optics Express. 28, 28190-28208 (2020). Yu2019bW. Yu, Super sub-nyquist single-pixel imaging by means of cake-cutting hadamard basis sort, Sensors (Switzerland), 19, 1-22 (2019). Sun2017M. Sun, L. Meng, M. Edgar, M. Padgett & N. Radwell, A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging, Scientific Reports, 7, 3464 (2017). Vaz2020P. Vaz, D. Amaral, L. Requicha Ferreira, M. Morgado & J. Cardoso, Image quality of compressive single-pixel imaging using different Hadamard orderings, Optics Express, 28, 11666 (2020). Li2013C. Li, W. Yin, H. Jiang & Y. Zhang, An efficient augmented Lagrangian method with applications to total variation minimization, Computational Optimization And Applications 2013 56:3, 56, 507-530 (2013) Li2010C. Li, W. Yin & Y. Zhang, User's Guide for TVAL3: Tv minimization by augmented lagrangian and alternating direction algorithms, https://usermanual.wiki/Document/UserGuide.90875630.pdf (2010). Wolff2020M. Wolff, S. Vogel, L. Splitthoff & C. Schuck, Superconducting nanowire single-photon detectors integrated with tantalum pentoxide waveguides, Scientific Reports 2020 10:1, 10, 1-9 (2020). EsmaeilZadeh2021I. Esmaeil Zadeh, J. Chang, J. Los, S. Gyger, A. Elshaari, S. Steinhauer, S. Dorenbos & V. Zwiller, Superconducting nanowire single-photon detectors: A perspective on evolution, state-of-the-art, future developments, and applications; Applied Physics Letters, 118, 190502 (2021). Poulios2014K. Poulios, R. Keil, D. Fry, J. Meinecke, J. Matthews, A. Politi, M. Lobino, M. Gräfe, M. Heinrich, S. Nolte, A. Szameit & J. O'Brien, Quantum walks of correlated photon pairs in two-dimensional waveguide arrays, Physical Review Letters, 112, 143604 (2014). Meany2015T. Meany, M. Gräfe, R. Heilmann, A. Perez-Leija, S. Gross, M. Steel, M. Withford & A. Szameit, Laser written circuits for quantum photonics, Laser And Photonics Reviews, 9, 363-384 (2015).
http://arxiv.org/abs/2307.04456v1
20230710101101
Invex Programs: First Order Algorithms and Their Convergence
[ "Adarsh Barik", "Suvrit Sra", "Jean Honorio" ]
math.OC
[ "math.OC", "cs.LG" ]
Invex Programs: First Order Algorithms and Their Convergence Adarsh Barik, Suvrit Sra, Jean Honorio August 12, 2023 ================================================================================================== Invex programs are a special class of non-convex problems that attain global minima at every stationary point. While classical first-order gradient descent methods can solve them, they converge very slowly. In this paper, we propose new first-order algorithms to solve the general class of invex problems. We identify sufficient conditions for convergence of our algorithms and provide rates of convergence. Furthermore, we go beyond unconstrained problems and provide a novel projected gradient method for constrained invex programs with convergence rate guarantees. We compare and contrast our results with existing first-order algorithms for a variety of unconstrained and constrained invex problems. To the best of our knowledge, our proposed algorithm is the first algorithm to solve constrained invex programs. § INTRODUCTION Many learning problems are modeled as optimization problems. With the explosion in deep learning, many of these problems are modeled as non-convex optimization problems, either by using non-convex objective functions or by the addition of non-convex constraints. While well-studied algorithms with fast convergence guarantees are available for convex problems, such mathematical tools are more limited for non-convex problems. In fact, the general class of non-convex optimization problems is known to be NP-hard <cit.>. Coming up with global certificates of optimality is the major difficulty in solving non-convex problems. In this paper, we take the first steps towards solving a special class of non-convex problems, called invex problems, which attain global minima at every stationary point <cit.>. Invex problems are tractable in the sense that we can use local certificates of optimality to establish global optimality conditions. Related work. First-order gradient descent methods are the most well-known algorithms for solving convex optimization problems. While they can also solve invex optimization problems under certain conditions, they can be very slow to converge due to their inability to exploit any underlying `invex' geometry. <cit.> have studied the minimization of a special class of unconstrained invex functions – called geodesically convex functions. They provide convergence rate guarantees for their algorithms assuming upper bounds on the sectional curvature of the manifold. Such algorithms have also been studied by <cit.> in their work on optimization methods on Riemannian manifolds, albeit with a focus on asymptotic convergence. The simplest instance of geodesically convex optimization is more commonly known as geometric programming <cit.>. The algorithms solving geodesically convex problems use topological properties of geodesic curves for their convergence. Often, finding the underlying geodesic curves and characterizing the manifold prove to be the bottleneck for solving such problems. These difficulties extend naturally to the general class of invex problems, where topological properties are difficult to establish. In this work, while we do connect properties of invex functions with the topology of the domain, we also develop algebraic methods for implementing our proposed algorithm.
Our focus in this work is to develop first-order methods with provable global convergence rates for a broader class of invex problems. Our method reduces to classical gradient descent (Riemannian gradient descent <cit.>) if the underlying function is convex (geodesically convex). Many optimization problems can be classified as invex problems by using the simple characterization by <cit.>. We provide some such examples that have been studied in recent years as motivation. <cit.> showed that geodesically convex functions are invex. This means that problems such as matrix Karcher mean problem <cit.>, power control <cit.>, optimal doping profile <cit.> and non-convex matrix factorization <cit.> are invex. Any function which satisfies PL-inequality <cit.> is an invex function. This implies that averaged-out non-convex functions <cit.> are also invex. Similarly, quasar-convex functions <cit.> can also be shown to be invex. Recent studies have shown that many machine learning problems such as learning output kernels <cit.>, multiple tasks learning <cit.>, minimum distance lasso <cit.>, reinforcement learning with general utilities <cit.>, fair sparse regression <cit.>, sparse mixed linear regression <cit.>, imaging with invex regularizers <cit.> and DAG learning <cit.> are also invex. Identifying a problem to be invex is relatively a simple task, whereas coming up with an efficient algorithm to solve such a problem by leveraging the invexity is quite challenging. Furthermore, convergence rate analysis of such algorithms becomes even more tedious. To the best of our knowledge, we are not aware of any provably convergent general algorithm to solve invex problems. In this paper, we present first-order methods to solve invex problems with provable convergence rate guarantees under some natural technical conditions. Summary of contributions * We present a first-order gradient descent algorithm for invex problems (Algorithm <ref>). We demonstrate the feasibility of our update rule over a wide variety of examples (Section <ref>). * As an extension, we propose a projected gradient descent algorithm for constrained invex problems (Algorithm <ref>). We show that our algorithm works for constrained geodesically convex programs in Hadamard manifolds (Section <ref>). * We provide convergence rate guarantees (Table <ref>) for our proposed algorithms (Theorem <ref>, <ref>, <ref>, <ref>, <ref>) under varying degree of assumptions. We identify sufficient technical conditions needed for the convergence of our proposed algorithm. * Finally, we show the applicability of our algorithms on both unconstrained <cit.> and constrained <cit.> machine learning problems in Section <ref>. We show that under the same initialization and hyperparameters, our algorithms outperform the standard gradient descent algorithms. § INVEXITY In this section, we formally define the invex function and relate it with convexity along the curve. Consider a differentiable function ϕ(x) defined on a Riemannian manifold . Let ··_x be the inner product in the tangent space T_x of x induced by the Riemannian metric. Let ϕ(x) be a differentiable function defined on . Let η be a vector valued function defined in × such that η(y, x)∇ϕ(x)_x is well-defined ∀ x, y ∈. Then, ϕ(x) is an η-invex function if ϕ(y) - ϕ(x) ≥η(y, x)∇ϕ(x)_x, ∀ x, y ∈ . If the manifold is ^n, then we get the standard definition of invex functions <cit.>. Convex functions are invex functions on ^n with η(y, x) = y - x. 
In that sense, invex functions are a generalization of convex functions. <cit.> proved the sufficient and necessary condition that any stationary point of the invex function is the global minima. It follows that (at least in ^n) any algorithm that converges to a stationary point, in principle, can solve unconstrained invex problems. However, convergence rate guarantees are not available for any such algorithms. Similarly, geodesically convex functions on the Riemannian manifold are η-invex with η(y, x) = ^-1_x(y) where ^-1_x is the inverse of the exponential map y = _x(v) for some v in the tangent space at point x ∈. This motivates to characterize invex functions by treating them as convex functions along a curve. More formally, we provide the following proposition from <cit.>. A differentiable real function ϕ(x) defined on is η-invex if and only if for every x, y ∈, the real function g_x, y(t) = f(γ_x, y(t)) is convex on [0, 1] for some curve γ_x,y such that γ_x, y(0) = x, γ_x, y(1) = y, γ̇_x, y(u) (t - u) = η(γ_x, y(t), γ_x, y(u)), ∀ t, u ∈ [0, 1] . Proposition <ref> immediately provides a setting for η(y, x) in terms of underlying curve, i.e., η(y, x) = γ̇_x, y(0). For convex functions, the underlying curve γ_x, y(t) = x + t (y - x). Similarly, for a geodesically convex function, the underlying curve γ_x, y(t) is the geodesic curve joining x and y. We notice, however, that finding the underlying curve for any given η-invex function may not be an easy task. We observe that proposition <ref> allows us to connect invexity of a function to a geometric property (underlying curves) of the domain of the function. This leads us to define invex sets as a natural extension of convex sets. A set ⊆ is called η-invex set if contains every curve γ_x, y of as defined in proposition <ref> whose endpoints x and y are in . It is also possible to characterize invex sets using η(y, x) functions by using the relationship between γ_x, y and η(y, x) from equation (<ref>). Thus, we sometimes refer to the invex set as η-invex set with the assumption that η(y, x) is computed using γ_x, y. We note that our definition is a slight departure from the definition of the invex set used in <cit.>. However, we find our definition more natural for our purpose. Using definition <ref>, we can redefine invex functions on an invex set ⊆ as following: Let ⊆ be an invex set. A real-valued differentiable function ϕ:→ is called invex if ϕ(γ_x, y(t)) ≤ (1 - t) ϕ(x) + t ϕ(y), ∀ x, y ∈, ∀ t ∈ [0, 1] Definitions <ref> and <ref> are connected with each other through the relationship between γ_x, y and η(y, x) in equation (<ref>). In the next sections, we will build up our definition of invex sets to define invex programs. § INVEX PROGRAM In this section, we will define the optimization problem that we are trying to solve. Our optimization problem involves minimizing an η-invex function over an η-invex set. In the remaining paper, we would assume to be an η-invex set unless stated otherwise. Let f: → be an η_1-invex function, and g_i: →, ∀ i ∈{ 1, ⋯, m } be η_2-invex functions, then the optimization problem min_x ∈ f(x), such that g_i(x) ≤ 0 , ∀ i ∈{ 1, ⋯, m } is called an invex program. It is possible to include equality constraints in the program, but we opt for only inequality constraints for simplicity. Before we begin to solve the optimization problem (<ref>), we will prove some technical results to understand the problem in a better way. First, we will show that the constraint set is indeed an η_2-invex set. 
We will do it in two parts. Let ϕ:→ be an η-invex function, then ϕ(x) ≤ c is an η-invex set for any c ∈. Next, we use Lemma <ref> to show that the constraint set is an η-invex set. Let g_i:→, ∀ i ∈{ 1, ⋯, m } be η-invex functions, then the set = ∩_i=1^m _i is η-invex where _i = { x ∈ | g_i(x) ≤ 0 }. Invex programs without any constraints are called unconstrained invex programs. In the next section, we propose a first-order method to solve invex programs. § NEW FIRST ORDER ALGORITHM In this section, we develop first-order gradient descent methods for invex programs. We start with the unconstrained version of problem (<ref>) and then gradually build up our method for the constrained version. §.§ Invex gradient descent method The main task in our algorithm is to figure out a y ∈ for a given x ∈ and a direction v ∈ T_x such that η(y, x) = v. Such a y need not be unique and we are only interested in finding one y (of possibly many) that satisfies η(y, x) = v. We provide the following gradient descent algorithms to solve invex programs. [t]0.45 [t]0.5 In Algorithm <ref>, T is the maximum number of iterations and α_k is the step size which depends upon the particular implementation of the algorithm. We will specify a particular choice of α_k in the convergence rate analysis of Algorithm <ref>. Without any information on underlying curve γ_x, y(t), the update step of Algorithm <ref>, i.e., finding a y ∈ such that η(y, x) = v is a problem-dependent task. Below we provide an array of examples to explain this observation. [Convex case] For convex problems, y = x + v. [Geodesically convex case] For geodesically convex problems, y = _x(v) where is exponential map as defined in <cit.>. [PL inequality] It is known that the functions satisfying PL-inequality are invex <cit.>. However, this characterization does not readily lead to a good η(y, x). We provide an η function in the following lemma which can be used in the update step. Let f(x) be an L-smooth function that satisfies PL inequality for some μ > 0. Then it is η-invex for η(y, x) = 1/μ(∇ f(y) + L y - x /∇ f(x) ∇ f(x) ). The proof of Lemma <ref> and further discussion is deferred to Appendix <ref>. [Quasar Convex Functions] <cit.> showed that quasar convexity implies invexity. However, they do not provide any η for the quasar convex functions. In the following lemma, we provide an η for quasar convex functions. For any ν≥ 0, there exists a β∈ [0, 1] such that quasar convex functions are η-invex for η(y, x) = β/ν(1 - β) (y - x). This leads to the update y = x + ν1 - β/β v. We provide the proof of Lemma <ref> in Appendix <ref>. [Connection with Bregman divergence and Mirror descent] Let B_ψ(y, x) be the Bregman divergence associated with a strongly convex and differentiable function ψ(x) such that B_ψ(y, x) = ψ(y) - ψ(x) - ∇ψ(x)y - x. Let η(y, x) = ∇ B_ψ(y, x) = ∇ψ(y) - ∇ψ(x), i.e., η(y, x) is a conservative field and B_ψ(y, x) is its potential function. Then a typical mirror descent update <cit.> can be used to compute y, i.e., y = inf_u B_ψ(u, x) + α∇ f(x)u - x. [Recent Invex Problems] Some recently studied problems in invexity such as <cit.> and <cit.> are invex for a particular form of η(y, x). In particular, consider any point x ∈^n of the form x = [ x_1 x_2 ]^ where x_1 ∈^n_1, x_2 ∈^n_2 such that n_1 + n_2 = n. Then for any two x, y ∈^n, η(y, x) takes the form η(y, x) = [ y_1 - x_1 A(y_1, x_1) (y_2 - x_2) ]^ where A(y_1, x_1) ∈^n_2 × n_2 and A(y_1, x_1) ≻0, ∀ y_1, x_1 ∈^n_1. 
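A minimal sketch of Algorithm <ref> is given below. The only problem-specific ingredient is an oracle returning some y with η(y, x) = v; the convex instantiation of the first example (y = x + v) is used for the demonstration, and the function names and stopping rule are our own. The oracles of the other examples (exponential map, PL, quasar-convex, mirror-descent and block-structured η) slot into the same loop by replacing solve_eta.

import numpy as np

def invex_gd(grad, solve_eta, x0, alpha=0.1, T=1000, tol=1e-8):
    # Algorithm sketch: x_{k+1} solves eta(x_{k+1}, x_k) = -alpha_k * grad f(x_k).
    x = np.asarray(x0, dtype=float)
    for _ in range(T):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = solve_eta(x, -alpha * g)
    return x

# Convex case: eta(y, x) = y - x, so the oracle is y = x + v.
f_grad = lambda x: 2.0 * x                 # f(x) = ||x||^2
euclidean_step = lambda x, v: x + v
x_min = invex_gd(f_grad, euclidean_step, x0=np.ones(5), alpha=0.25)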
For such problems, update step in Algorithm <ref> becomes y_1 = x_1 + v, y_2 = x_2 + A(y_1, x_1)^-1 v. [A generic approach using function inverse] A generic approach to compute y such that η(y, x) = v is to treat η(y, x) = g(y) for a fixed x and then compute y = g^-1(v). This approach works as long as we have explicit closed-form expression for g^-1(v). For our purpose, we ignore the uniqueness of y = g^-1(v) and allow any y as long as g(y) = v. §.§ Convergence of invex gradient descent method We start the convergence analysis of Algorithm <ref> with the weakest set of assumptions, and then we gradually add stronger conditions to get a better convergence rate. Before we delve into our first result of convergence, we define a notion of smoothness in the invex manifold . A differentiable function f:→ is called L-smooth on an η-invex set if ∀ x, y ∈ f(y) ≤ f(x) + η(y, x)∇ f(x)_x + L/2η(y, x) ^2, where norm · is induced by the Riemannian metric at x. Note that a function f need not be an invex function to be an L-smooth function. Our first convergence guarantee is for L-smooth functions. Let f be a L-smooth function and f^* = min_x ∈ f(x) ≥ B for some B > - ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L), then lim_k →∞∇ f(x_k) = 0. Theorem <ref> states that Algorithm <ref> converges to a stationary point even if the function is not invex. Our next task is to achieve a better convergence rate by adding the assumption that f is an invex function. However, to do that, we need to impose an extra condition on the choice of η(·, ·) which in turn imposes an extra condition on the geometry of . To make Algorithm <ref> amenable to rigorous convergence rate analysis, we impose a sufficient condition on the geometry of which is analogous to triangle inequality in Euclidean space. [Triangle Inequality] Let x, y, z ∈, then for some b, c > 0 η(y, z) ^2 ≤ η(x, z) ^2 + b η(y, x) ^2 - c η(y, x) η(z, x)_x . The triangle inequality assumption is an assumption on the geometry of manifold . We also note that Euclidean spaces clearly satisfy Assumption <ref> by simply taking b=1 and c = 2. <cit.> showed that any Riemannian manifold with sectional curvature upper bounded by κ≤ 0 also satisfies Assumption <ref>. Now, we are ready to state our second convergence result. Let f: → be an L-smooth η-invex function such that satisfies Assumption <ref>. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞ and η(x_0, x^*) ≤ M < ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L), then f(x_k) converges to f(x^*) at the rate (1/k). We further improve convergence rate results by imposing even more conditions on function f. We define μ-strongly η-invex functions as a natural extension to μ-strongly convex functions as follows. A differentiable function f:→ is called μ-strongly η-invex function for some μ > 0 if f(y) ≥ f(x) + η(y, x)∇ f(x)_x + μ/2η(y, x) ^2, where norm · is induced by the Riemannian metric at x. We provide the following convergence results for the μ-strongly η-invex functions. Let f:→ be an L-smooth μ-strongly η-invex function such that satisfies Assumption <ref>. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞, η(x_0, x^*) ≤ M < ∞ and η(y, x) _x^2 ≤ R η(x, y) _y^2, ∀ x, y ∈ for some R > 0. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, min (2/ R μ c , c/2bL)), then η(x_k+1 , x^*) ^2 ≤(1 - c α R μ/2)^k+1 M^2 . We have intentionally chosen to show convergence results for a constant step size of α_t for simplicity. 
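A quick numerical illustration of these guarantees in the Euclidean special case, where η(y, x) = y - x and Assumption <ref> holds with b = 1 and c = 2, is sketched below: gradient descent with a step size α ∈ (0, 2/L) on an L-smooth convex quadratic satisfies the O(1/k) bound of Theorem <ref>, with the constant reconstructed from its proof. The test function and numbers are our own illustrative choices.

import numpy as np

A = np.diag([0.5, 2.0, 8.0])         # f(x) = 0.5 x^T A x is convex (hence invex) and L-smooth
L = 8.0
b, c = 1.0, 2.0                      # Euclidean constants for the triangle-inequality assumption
alpha = 1.0 / L                      # any step size in (0, 2/L)

x = np.array([3.0, -2.0, 1.0])
M2 = np.linalg.norm(x) ** 2          # ||eta(x_0, x*)||^2 with x* = 0
C = (1.0 / (c * alpha)) * (1.0 + b * alpha * L / (2.0 - L * alpha)) * M2
for k in range(1, 201):
    x = x - alpha * A @ x            # eta(y, x) = y - x  =>  y = x - alpha * grad f(x)
    gap = 0.5 * x @ A @ x            # f(x_k) - f(x*)
    assert gap <= C / k + 1e-12      # O(1/k) guarantee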
It is not difficult to get better convergence rates by carefully choosing α_t. It is easy to verify that all our results hold for the convex case. They also extend nicely to all the results in <cit.> for geodesically convex case. §.§ Projected invex gradient descent method Now that we have shown convergence results for unconstrained invex programs. We can extend these results to constrained case by providing a projected invex gradient descent method. We first discuss projection on an invex set before providing the algorithm. Let ⊆ be an η-invex set. We define the projection of x ∈ on as a retraction. Let γ_x, y(t) be the curve connecting x ∈ to y ∈ such that γ_x, y(0) = x and γ_x, y(1) = y. Projection ρ_η(x) of x on is defined as ρ_η(x) = min_ y ∈η(y, x). It is easy to see that for convex sets, projection reduces to finding y ∈ which is closest to x in Euclidean distance. Also, notice that if x ∈, then ρ_η(x) = x. First, observe that in the invex program as defined in <ref> objective function f is η_1-invex while the constraint set is η_2-invex. Thus, we make update in two steps. The first step works in η_1-invex set and then it is projected back to η_2-invex set. The convergence rates of the invex gradient descent algorithm can be extended to the projected invex gradient descent algorithm under the following condition (Details in Appendix <ref>). [Contraction] Let x, y ∈ and ρ_η_2(x), ρ_η_2(y) are their projection on an η_2-invex set respectively. Then, η_1( ρ_η_2(y), ρ_η_2(x) ) _ρ_η_2(x)≤η_1( y, x ) _x. Next, we will discuss the convergence of Algorithm <ref>. §.§ Convergence of Projected Invex Gradient Descent Method To guarantee convergence of Algorithm <ref>, we need to place extra technical conditions on the projection operator. In particular, the following condition suffices to ensure convergence. [Contraction] Let x, y ∈ and ρ_η_2(x), ρ_η_2(y) are their projection on an η_2-invex set respectively. Then, η_1( ρ_η_2(y), ρ_η_2(x) ) _ρ_η_2(x)≤η_1( y, x ) _x. Next, we will show that once Assumption <ref> is satisfied by the projection operator, results from Theorems <ref> and <ref> extend nicely to Algorithm <ref>. Let f: → be an L-smooth η_1-invex function such that satisfies Assumption <ref>. Let ⊆ be an η_2-invex set. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞ and η_1(x_0, x^*) ≤ M < ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L) and with projection operator satisfying Assumption <ref>, then f(x_k) converges to f(x^*) at the rate (1/k). We have a similar result for μ-strongly η-invex functions. Let f:→ be an L-smooth μ-strongly η_1-invex function such that satisfies Assumption <ref>. Let ⊆ be an η_2-invex set. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞, η_1(x_0, x^*) ≤ M < ∞ and η_1(y, x) _x^2 ≤ R η_1(x, y) _y^2, ∀ x, y ∈ for some R > 0. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, min (2/ R μ c , c/2bL)) and with projection operator satisfying Assumption <ref>, then η_1(x_k+1 , x^*) ^2 ≤(1 - c α R μ/2)^k+1 M^2 . Assumption <ref> clearly holds for convex objective functions on convex constraints and thus, it is a natural choice of assumption to impose on the general case of constrained invex programs. In fact, in the next subsection, we show that it also holds for geodesically convex programs. §.§ Constrained geodesically convex problem In recent literature, there has been a lot of focus on constrained geodesically convex problems (with both the objective function and constraints being geodesically convex) <cit.>. 
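Before specializing to the geodesic setting, the two-step update of Algorithm <ref> can be summarized in a few lines: an η_1 gradient step followed by a retraction onto the η_2-invex feasible set. The convex instantiation below (Euclidean η_1 and Euclidean projection onto the unit ball) is only a sanity check of the structure; the oracle names are ours.

import numpy as np

def projected_invex_gd(grad, solve_eta1, project_eta2, x0, alpha=0.05, T=500):
    # Algorithm sketch: eta_1 gradient step, then projection rho_{eta_2} onto the feasible set.
    x = project_eta2(np.asarray(x0, dtype=float))
    for _ in range(T):
        y = solve_eta1(x, -alpha * grad(x))   # unconstrained invex step
        x = project_eta2(y)                   # retraction onto the eta_2-invex set
    return x

# Convex instantiation: eta_1(y, x) = y - x and projection onto the unit ball.
grad_f = lambda x: x - np.array([2.0, 0.0])   # f(x) = 0.5 ||x - (2, 0)||^2
step = lambda x, v: x + v
proj_ball = lambda x: x / max(1.0, float(np.linalg.norm(x)))
print(projected_invex_gd(grad_f, step, proj_ball, x0=np.zeros(2)))   # approaches (1, 0)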
Our projected gradient algorithm <ref> works for constrained case of geodesically convex optimization problems with sectional curvature upper bounded by κ≤ 0. To that end, we can show that Assumption <ref> holds in this particular case. Let ρ be the projection operator as defined in <ref> on a closed geodesically convex subset of a simply connected Riemannian manifold with sectional curvature upper bounded by κ≤ 0. Then the projection satisfies Assumption <ref>. Thus, we can use Algorithm <ref> to solve constrained geodesically convex problems with sectional curvature upper bounded by κ≤ 0. This extends all the results from <cit.> to the constrained case and provides a novel method to solve constrained geodesically convex problems. § APPLICATIONS In this section, we provide specific examples of invex programs to validate our theory. Our task is to provide a working η(y, x) for all the problems and explicitly construct the update step and projection step (if needed). Finally, we compare the performance of our algorithm with the gradient descent (or projected gradient descent) algorithm. The latter of which provides no convergence rate guarantees for invex problems. We chose to go with the vanilla implementation for both algorithms, i.e., without any performance enhancement tricks such as line search. This was done to ensure that the comparison remains fair as our algorithms can also be adapted to include such tricks to further boost its performance. However, that is not our focus in this work. §.§ Log-determinant acyclicity characterization for DAGs We start with an unconstrained invex program. <cit.> provided a novel characterization of the acyclicity of DAGs in their recent work. Their characterization employs a log-determinant function and is stated in Theorem 1 <cit.>. Let 𝒲 = ≜{ W ∈ℝ^d × d| s > r(W ∘ W) }. Their log-determinant acyclicity characterization of DAGs uses the function h(W) = - log (sI - W ∘ W) + d log s where W ∈𝒲, I is identity matrix, r(·) denotes the spectral radius and A ∘ B denotes Hadamard product between two matrices. We take s to be 1 without loss of generality and thus h(W) = - log (I - W ∘ W). They show in Corollary 3 <cit.> that h(W) is an invex function. However, they do not provide any specific η for invexity. Next, we will provide a possible η for the problem but before that, we need to define Hadamard division of two same-sized matrices A, B ∈^d × d as (A ⊘ B)_ij = A_ij/B_ij when B_ij 0 and 0 otherwise. Now we are ready to state the following lemma. The function h(W) = - log (I - W ∘ W), ∀ W ∈𝒲 is η-invex for η(U, W) = -1/2 ((I - W ∘ W) (log(I - U ∘ U) - log(I - W ∘ W))) ⊘ W. We can use our proposed η to construct updates for Algorithm <ref>. Observe that for an stepsize α, update step in Algorithm <ref> is η(W_k+1, W_k) = - α∇ h(W_k). We take M = (I - W_k+1∘ W_k+1) and N = (I - W_k ∘ W_k) for clarity. Then the update step becomes -1/2 (N (log M - log N)) ⊘ W_k = - 2 α N^-∘ W_k and we get M = exp( log N + 4 α N^-1 ((N^-∘ W_k) ∘ W_k )) which provides the update step. We used this update step to implement Algorithm <ref>. The performance of our algorithm was compared against the standard gradient descent algorithm. Both the algorithms were run with a random initialization of W which was kept the same for both algorithms. We found that the gradient descent algorithm failed to converge in several instances but our algorithm converged towards zero objective function value as predicted by <cit.> (See Figure <ref>). 
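A hedged sketch of this update is shown below. It follows the closed form for M = I - W_{k+1} ∘ W_{k+1} given above; since the update only determines W_{k+1} ∘ W_{k+1}, we recover W_{k+1} element-wise by keeping the signs of the current iterate, which is our own assumption, as are the step size and the use of dense matrix exponential and logarithm routines.

import numpy as np
from scipy.linalg import expm, logm, inv

def logdet_acyclicity(W):
    # h(W) = -log det(I - W o W) with s = 1.
    d = W.shape[0]
    return -np.linalg.slogdet(np.eye(d) - W * W)[1]

def invex_update(W, alpha=0.01):
    # One step of the invex gradient descent with the eta of the lemma above.
    d = W.shape[0]
    N = np.eye(d) - W * W
    G = inv(N)
    M = expm(logm(N) + 4.0 * alpha * G @ ((G.T * W) * W))
    WW = np.clip(np.eye(d) - M.real, 0.0, None)      # equals W_{k+1} o W_{k+1}
    return np.sign(W) * np.sqrt(WW)                  # sign recovery is an assumption

rng = np.random.default_rng(0)
W = 0.3 * rng.random((4, 4)) * (1 - np.eye(4))       # random start with r(W o W) < 1
h0 = logdet_acyclicity(W)
for _ in range(20):
    W = invex_update(W)
print(h0, logdet_acyclicity(W))                      # h is expected to decrease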
§.§ Fair sparse regression Our next example is a constrained invex program. <cit.> proposed a novel invex relaxation for fair sparse regression problem. In this problem, each data point is associated with one of the two groups, and the response variable is generated with a signed bias term based on the membership to the group. They use the following generative model: y_i = X_i^ w^* + γ z_i^* + e_i, ∀ i ∈{1, ⋯, n}, where e_i is a zero mean independent additive noise and z_i^* is the group membership. The task is to identify regression vector w^* ∈^d along with z_i^* for every data point. <cit.> proposed the following invex relaxation for this problem. min_w, Z M(w)Z + λ_n w _1 such that (Z) = 1, Z ≽ 0 , where M(w) ≜[ 1/n(Xw - y)^ (Xw - y) + 1 γ/n(Xw - y)^; γ/n(Xw - y) (γ^2/n + 1) I ] with X ∈^n × d being the data matrix and I being the identity matrix of appropriate dimension. They provide an η_1 for the objective function and it is obvious that constraints are convex (we ignore the dimension of the matrices for succinct representation). Thus, η_1((w, Z), (w, Z)) = [ w - w; M(w)^-1 M(w) (Z - Z) ], η_2((w, Z), (w, Z)) = [ w - w; Z - Z ] . We used these η functions to construct updates and projection for Algorithm <ref>. Let f(w, Z) = M(w)Z. Let ∇_w f(w, Z) = ∂M(w)Z/∂ w and ∇_Z f(w, Z) = M(w), then using the η functions and step-size α we write the following update steps for this problem: w_t+1 = ∏_λ(w_t - α∇_w f(w_t, Z_t)), Z̅_t+1 = Z_t - α M(w_t+1)^-1M(w_t) ∇_Z f(w_t, Z_t) , where ∏_λ(·) is the projection operator which uses soft thresholding to deal with ℓ_1-regularization. We need to project Z̅_t+1 on constraints to get the final Z_t+1. Z_t+1 = min_Z Z - Z̅_t+1_F^2 such that (Z) = 1, Z ≽ 0 We used update rules from equation (<ref>) and (<ref>) to implement Algorithm <ref>. We compared its performance against the projected gradient descent algorithm. The hyper-parameters (such as λ and α) and initial values of w and Z were kept the same across both algorithms. We report our results in Figure <ref>. We see that both algorithms perform in a similar manner. We expect this behavior as when w_t is close to w_t+1 the update rules are the same for both the algorithms. §.§ Mixed linear regression In mixed linear regression, measurements come from one of the two different linear regression models and the task is to identify two regression vectors and the model associated with each data point. Mathematically, each data point is generated as follows: y_i = z_i^* X_iβ_1^* + (1 - z_i^*) X_iβ_2^* + e_i, ∀ i ∈{1, ⋯, n } where β_1^* and β_2^* are d-dimensional vectors. <cit.> proposed an invex program to solve this problem. Let f(t, W, U) = ∑_i=1^n 1/2S_iW + U + 1/2 t_i S_iW - U, g(t, W, U) = (W) _1 and h(t, W, U) = (U) _1, where S_i = [ X_i; -y_i ][ X_i^ -y_i ] and operator (.) vectorizes the matrix. Their invex formulation is given as: min_t, W, U ∑_i=1^n f(t, W, U) + λ_1 g(t, W, U) + λ_2 h(t, W, U) such that W ≽ 0, U ≽ 0, W_d+1, d+1 = 1, U_d+1, d+1 = 1, t _∞≤ 1 The constraints of the problem are clearly convex. <cit.> also provide an η_1 for the objective function, but it does not lend well to construct update rules required for Algorithm <ref>. We bypass this problem by showing that when W U then the objective function is invex for a different η_1. When W = U, we revert to the η provided by <cit.>. To that end, we prove the following lemma. 
Assume that WU, then functions f(t, W, U) = ∑_i=1^n 1/2S_iW + U + 1/2 t_i S_iW - U, g(t, W, U) = (W) _1 and h(t, W, U) = (U) _1 are η_1-invex for η_1((t, W, U), (t, W, U)) = [ τ∘ (t - t); W - W; U - U ] , where τ(W, U, W, U) ∈^n such that τ_i = S_iW - U/S_iW - U. Now we are ready to construct update and projection rules. Let ∇_t f(t, W, U)_i = 1/2S_iW - U, ∀ i={1, ⋯, n}, ∇_W f(t, W, U) = ∑_i=1^n t_i + 1/2 S_i and ∇_U f(t, W, U) = ∑_i=1^n 1 - t_i/2 S_i, then we propose the following update steps for step-size α: W̅_k+1 = ∏_λ_1(W_k - α∇_W f(t_k, W_k, U_k)), U̅_k+1 = ∏_λ_2(U_k - α∇_W f(t_k, W_k, U_k)) t̅_k+1 = t_k - α∇_Z f(t_k, W_k, U_k) ⊘τ(W_k+1, U_k+1, W_k, U_k) , where ∏_λ(·) is the projection operator which uses soft thresholding to deal with ℓ_1-regularization. We use the following projection steps to get W_k+1, U_k+1 and t_k+1. W_k+1 = min_W W - W̅_k+1_F^2 such that W_d+1, d+1 = 1, W ≽ 0 , U_k+1 = min_U U - U̅_k+1_F^2 such that U_d+1, d+1 = 1, U ≽ 0 t_k+1 = min_t t - t̅_k+1_2^2 such that t _∞≤ 1 We implemented Algorithm <ref> using update and projection rules from equation (<ref>) and equation (<ref>). Like before, we compared the performance of our algorithm with the projected gradient descent method with the same set of hyperparameters and initialization. We report our results in Figure <ref>. We see that our algorithm converges faster than the projected gradient descent algorithm. § CONCLUSION AND FUTURE WORK In this work, we have taken the first steps towards providing algorithms to solve constrained and unconstrained invex programs within certain technical conditions. We show that our algorithm can be used to solve constrained geodesically convex optimization problems with provable convergence rate guarantees. We also show the applicability of our proposed algorithm in a variety of machine-learning applications. Our work employs some natural assumptions for mathematical analysis. But these are only sufficient conditions for convergence. As the future direction, it would be interesting to see if these assumptions can be relaxed without losing on the convergence rate guarantees. From an application point of view, it would also be interesting to come up with an explicit form of update rules and the projection operator from a given η for a large class of invex problems. Another direction of research could be to study the accelerated version of our algorithms. While already for the subclass of invex problems, namely, geodesically convex ones, it is known that without further assumptions/restrictions, global acceleration similar to the Euclidean Nesterov acceleration does not hold. However, it is a valuable question to explore conditions under which such an acceleration holds for our setting. apalike § FUNCTIONS SATISFYING PL INEQUALITY PL functions are a special class of (possibly nonconvex) functions that satisfy the following property: ∇ f(x) ^2 ≥μ (f(x) - f(x^*)) , <cit.> showed that if an L-smooth function satisfies PL inequality, then it can be shown that it achieves an exponential convergence rate. These functions are known to be invex. Here we provide a characterization of their invexity by providing an η which can be used to construct updates in Algorithm <ref>. To that end, we show the validity of Lemma <ref>. Lemma <ref> Let f(x) be an L-smooth function which satisfies PL inequality for some μ > 0. 
Then it is η-invex for the following η: η(y, x) = 1/μ(∇ f(y) + L y - x /∇ f(x) ∇ f(x) ) Since f(x) follows PL inequality for some μ > 0, the following inequality holds <cit.> for all x in the domain D of f(x): ∇ f(x) ^2 ≥μ (f(x) - f(x^*)) , where x^* = min_x ∈ D f(x). Using Taylor series expansion of g(y) = ∇ f(y)^ v around x and then substituting for v = ∇ f(x), we can write ∇ f(x) ^2 = ∇ f(x)^∇ f(y) + ∇ f(x)^∇^2 f(z) (x - y) for some z= y + t (x - y), t ∈ [0, 1]. It follows that ∇ f(x)∇ f(y) + ∇^2 f(z) (x - y) ≥μ (f(x) - f(x^*)) ≥μ (f(x) - f(y)) Given that f(x) is L-smooth and max_ M _2 ≤ L u^ M v = L u v , we can write ∇ f(x)∇ f(y) + L y - x /∇ f(x) ∇ f(x) ≥μ (f(x) - f(x^*)) ≥μ (f(x) - f(y)) This completes our proof. § QUASAR CONVEX FUNCTIONS Quasar convex functions <cit.> are another interesting class of possibly nonconvex functions which achieve global minima at all their stationary points. Thus, they fall under the class of invex functions. Here, we propose an η for the class of quasar convex functions which can be used to construct updates in Algorithm <ref>. Below, we prove Lemma <ref>. Lemma <ref> For any ν≥ 0, there exists a β∈ [0, 1] such that quasar convex functions are η-invex for η(y, x) = β/ν(1 - β) (y - x). First, we use the result from Lemma 2 of <cit.> to show that for any ν≥ 0, there exists a β∈ [0, 1], such that β∇ f(x)^ (y - x - β y/1 - β) ≤ν (f(y) - f(x)) We can simplify equation (<ref>) to write f(y) ≥ f(x) + ∇ f(x)^β/ν (1 - β) (y - x) This completes our proof. We note that β can be computed efficiently using a binary-search algorithm(refer to Algorithm 2 of <cit.>). § PROOF OF THEOREMS AND LEMMAS Before we begin to solve the optimization problem (<ref>), we will prove some technical results to understand the problem in a better way. First, we will show that the constraint set is indeed an η_2-invex set. We will do it in two parts. Let ϕ:→ be an η-invex function, then ϕ(x) ≤ c is an η-invex set for any c ∈. Let γ_x, y be the underlying curve connecting x, y ∈ corresponding to η(y, x) satisfying equation (<ref>). Using definition <ref>, we can redefine invex functions on an invex set ⊆ as following: Let ⊆ be an invex set. A real-valued differentiable function ϕ:→ is called invex if ϕ(γ_x, y(t)) ≤ (1 - t) ϕ(x) + t ϕ(y), ∀ x, y ∈, ∀ t ∈ [0, 1] Definitions <ref> and <ref> are connected with each other through the relationship between γ_x, y and η(y, x) in equation (<ref>). Let = { x ∈ | ϕ(x) ≤ c }. We take x, y ∈. We will then need to show that γ_x, y(t) ∈, ∀ t ∈ [0, 1]. Using definition <ref> ϕ(γ_x,y(t)) ≤ (1 - t) ϕ(x) + t ϕ(y), ∀ x, y ∈, ∀ t ∈ [0, 1] ≤ (1 - t) c + t c = c It follows that γ_x, y(t) ∈, ∀ t ∈ [0, 1]. Next, we use Lemma <ref> to show that the constraint set is an η-invex set. Let g_i:→, ∀ i ∈{ 1, ⋯, m } be η-invex functions, then the set = ∩_i=1^m _i is η-invex where _i = { x ∈ | g_i(x) ≤ 0 }. Let x, y ∈, then by definition x, y ∈_i, ∀ i ∈{ 1, ⋯, m }. We know from Lemma <ref>, that _i's are η-invex set. Let γ_x, y be the underlying curve connecting x, y. Then, it follows that γ_x, y(t) ∈_i, ∀ i ∈{1, ⋯, m}, ∀ t ∈ [0, 1]. Thus, γ_x, y(t) ∈, ∀ t ∈ [0, 1]. Theorem <ref>(Convergence of L-smooth functions.) Let f be a L-smooth function and f^* = min_x ∈ f(x) ≥ B for some B > - ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L), then lim_k →∞∇ f(x_k) = 0 , Since f is an L-smooth function. We have f(x_k+1) ≤ f(x_k) + η(x_k+1, x_k)∇ f(x_k)_x_k + L/2η(x_k+1, x_k) ^2 Using Algorithm <ref>, we have η(x_k+1, x_k) = - α_k ∇ f(x_k). 
Thus, f(x_k+1) ≤ f(x_k) - α∇ f(x_k) ∇ f(x_k)_x_k + L α^2/2∇ f(x_k) ^2 Since α∈ (0, 2/L), it follows that α (1 - Lα/2) ∇ f(x_k) ^2 ≤ f(x_k) - f(x_k+1) After telescoping sum and simplification, we get ∑_k=0^∞∇ f(x_k) ^2 ≤f(x_0) - B/α (1 - L α/2) Since right-hand side of equation (<ref>) is finite, it follows that lim_k →∞∇ f(x_k) = 0. Theorem <ref>(Convergence of invex functions.) Let f: → be an L-smooth η-invex function such that satisfies Assumption <ref>. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞ and η(x_0, x^*) ≤ M < ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L), then f(x_k) converges to f(x^*) at the rate (1/k). We apply Equation (<ref>) by taking x = x_k, y = x_k+1 and z = x^*. η(x_k+1, x^*) ^2 ≤ η(x_k, x^*) ^2 + b η(x_k+1, x_k) ^2 - c η(x_k+1, x_k) η(x^*, x_k)_x_k From Algorithm <ref>, η(x_k+1, x_k) = -α∇ f(x_k). η(x_k+1, x^*) ^2 ≤ η(x_k, x^*) ^2 + b α^2 ∇ f(x_k) ^2 + c α∇ f(x_k) η(x^*, x_k)_x_k Note that since f is η-invex, we have f(x^*) ≥ f(x_k) + η(x^*, x_k)∇ f(x_k). Thus, η(x_k+1, x^*) ^2 ≤η(x_k, x^*) ^2 + b α^2 ∇ f(x_k) ^2 + c α (f(x^*) - f(x_k)) c α (f(x_k) - f(x^*)) ≤η(x_k, x^*) ^2 - η(x_k+1, x^*) ^2 + b α^2 ∇ f(x_k) ^2 Summing over T terms, we get c α∑_k=0^T (f(x_k) - f(x^*)) ≤η(x_0, x^*) ^2 - η(x_T+1, x^*) ^2 + b α^2 ∑_k=0^T ∇ f(x_k) ^2 Since α∈ (0, 2/L), we make two observations from equation (<ref>), ∑_k=0^T ∇ f(x_k) ^2 ≤ f(x_0) - f(x_T+1)/α (1 - Lα/2) f(x_k+1) - f(x_k) ≤α (-1 + Lα/2) ∇ f(x_k) ^2 ≤ 0 By using the observations in equation (<ref>), it follows that c α T ( f(x_T) - f(x^*) ) ≤η(x_0, x^*) ^2 + b α (f(x_0) - f(x^*)) /1 - Lα/2 . Note that using L-smoothness condition and noticing that ∇ f(x^*) = 0, we can show that f(x_0) - f(x^*) ≤L/2η(x_0, x^*) ^2 Thus, f(x_T) - f(x^*) ≤1/T1/cα (1+ b α L /2 - Lα) M^2 This proves our claim. Theorem <ref>(Convergence of strongly invex functions.) Let f:→ be an L-smooth μ-strongly η-invex function such that satisfies Assumption <ref>. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞, η(x_0, x^*) ≤ M < ∞ and η(y, x) _x^2 ≤ R η(x, y) _y^2, ∀ x, y ∈ for some R > 0. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, min (2/ R μ c , c/2bL)), then η(x_k+1 , x^*) ^2 ≤ (1 - c α R μ/2)^k+1 M^2 . We begin our proof by proving an auxiliary lemma. If f is L-smooth then f(x^*) - f(x) ≤ -1/2L∇ f(x) ^2, ∀ x ∈ . We can always find a y ∈ such that η(y, x) = - 1/L∇ f(x). Using equation (<ref>) we have, f(y) ≤ f(x) - 1/L∇ f(x)∇ f(x)_x + L/21/L∇ f(x) ^2 =f(x) - 1/2L∇ f(x) ^2 Clearly, f(x^*) ≤ f(y), thus f(x^*) - f(x) ≤ -1/2L∇ f(x) ^2 . This proves our claim. Now using Assumption <ref> for x = x_k, y = x_k+1 and z = x^*, we have η(x_k+1, x^*) ^2 ≤ η(x_k, x^*) ^2 + b η(x_k+1, x_k) ^2 - c η(x_k+1, x_k) η(x^*, x_k)_x_k Now since η(x_k+1, x_k) = -α∇ f(x_k), we have η(x_k+1, x^*) ^2 ≤ η(x_k, x^*) ^2 + b α^2 ∇ f(x_k) ^2 + c α∇ f(x_k) η(x^*, x_k)_x_k Using the strong convexity of f and setting y = x^* and x = x_k, we have η(x_k+1, x^*) ^2 ≤ η(x_k, x^*) ^2 + b α^2 ∇ f(x_k) ^2 + c α (f(x^*) - f(x_k) - μ/2η(x^*, x_k) ^2) Using the condition that η(x^*, x_k) ^2 ≤ R η(x_k, x^*) ^2, we have η(x_k+1, x^*) ^2 ≤ (1 -c α R μ/2 ) η(x_k, x^*) ^2 + b α^2 ∇ f(x_k) ^2 + c α (f(x^*) - f(x_k)) Using L-smoothness of f and Lemma <ref>, we have η(x_k+1, x^*) ^2 ≤ (1 -c α R μ/2 ) η(x_k, x^*) ^2 - α ( - 2 b α L + c )(f(x_k) - f(x^*)) Taking α≤min (2/ R μ c , c/2bL), we get η(x_k+1, x^*) ^2 ≤ (1 -c α R μ/2 ) η(x_k, x^*) ^2 We prove our result by unrolling the recurrence in equation (<ref>). 
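In the Euclidean case, where Assumption <ref> holds with b = 1 and c = 2, η(y, x) = y - x and R = 1, the geometric decay promised by this theorem can be checked numerically in a few lines. The quadratic test function and constants below are our own illustrative choices.

import numpy as np

A = np.diag([1.0, 4.0, 10.0])            # f(x) = 0.5 x^T A x with mu = 1, L = 10
mu, L = 1.0, 10.0
b, c, R = 1.0, 2.0, 1.0                  # Euclidean constants
alpha = 0.9 * min(2 / (R * mu * c), c / (2 * b * L))   # inside the admissible step-size range

x, x_star = np.array([5.0, -3.0, 2.0]), np.zeros(3)
M2 = np.linalg.norm(x - x_star) ** 2
rate = 1 - c * alpha * R * mu / 2
for k in range(50):
    x = x - alpha * A @ x                # eta(y, x) = y - x  =>  y = x - alpha * grad f(x)
    assert np.linalg.norm(x - x_star) ** 2 <= rate ** (k + 1) * M2 + 1e-12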
§.§ Convergence of Projected Invex Gradient Descent Method Next, we will show that once Assumption <ref> is satisfied by the projection operator, results from Theorems <ref> and <ref> extend nicely to Algorithm <ref>. Let f: → be an L-smooth η_1-invex function such that satisfies Assumption <ref>. Let ⊆ be an η_2-invex set. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞ and η_1(x_0, x^*) ≤ M < ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L) and with projection operator satisfying Assumption <ref>, then f(x_k) converges to f(x^*) at the rate (1/k). First notice that since x^* ∈, we have ρ_η_2(x^*) = x^*. We follow the same proof technique as Theorem <ref> until equation (<ref>) which becomes: c α (f(x_k) - f(x^*)) ≤η_1(x_k, x^*) ^2 - η_1(y_k+1, x^*) ^2 + b α^2 ∇ f(x_k) ^2 Using Assumption <ref>, we know that η_1(y_k+1, x^*) ^2 ≥η_1(ρ_η_2(y_k+1), x^*) ^2 = η_1(x_k+1, x^*) ^2 and thus remaining steps of the proof follow. We have a similar result for μ-strongly η-invex functions. Let f:→ be an L-smooth μ-strongly η_1-invex function such that satisfies Assumption <ref>. Let ⊆ be an η_2-invex set. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞, η_1(x_0, x^*) ≤ M < ∞ and η_1(y, x) _x^2 ≤ R η_1(x, y) _y^2, ∀ x, y ∈ for some R > 0. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, min (2/ R μ c , c/2bL)) and with projection operator satisfying Assumption <ref>, then η_1(x_k+1 , x^*) ^2 ≤(1 - c α R μ/2)^k+1 M^2 . Again, we notice that since x^* ∈, we have ρ_η_2(x^*) = x^*. We follow the same proof technique as Theorem <ref> until equation (<ref>) which becomes: η_1(y_k+1, x^*) ^2 ≤ (1 -c α R μ/2 ) η_1(x_k, x^*) ^2 Using Assumption <ref>, we note that η_1(x_k+1, x^*) ^2 = η_1(ρ_η_2(y_k+1), x^*) ^2 ≤η_1(y_k+1, x^*) ^2 and thus remaining steps of the proof follow. Theorem <ref>(Projection contracts for geodesically convex sets with negative curvature.) Let ρ be the projection operator as defined in <ref> on a closed geodesically convex subset of a simply connected Riemannian manifold with sectional curvature upper bounded by κ≤ 0. Then the projection satisfies Assumption <ref>. First note that since geodesics are constant velocity curves, there exists a parameterization such that d(y, x) = γ̇_x, y(0) = η(y, x) where d(y, x) is the length of geodesic between x and y. Thus, it only remains to show that d(y, x) contracts for geodesically convex sets (sometimes known as totally convex sets) which follows from Lemma 11.2 of <cit.>. Lemma <ref> The function h(W) = - log (I - W ∘ W), ∀ W ∈𝒲 is η-invex for η(U, W) = -1/2 ((I - W ∘ W) (log(I - U ∘ U) - log(I - W ∘ W))) ⊘ W. The invexity of h(W) is already shown by <cit.>. Here, we will verify that our proposed η satisfies equation (<ref>). Note that ∇ h(W) = 2 (I - W ∘ W)^-∘ W. Then ∀ U, W ∈𝒲, h(U) - h(W) - η(U, W)∇ h(W) = - log (I - U ∘ U) + log (I - W ∘ W) - -1/2 ((I - W ∘ W) (log(I - U ∘ U) - log(I - W ∘ W))) ⊘ W2 (I - W ∘ W)^-∘ W = 0 This validates our claim. Lemma <ref> Assume that WU, then functions f(t, W, U) = ∑_i=1^n 1/2S_iW + U + 1/2 t_i S_iW - U, g(t, W, U) = (W) _1 and h(t, W, U) = (U) _1 are η_1-invex for η_1((t, W, U), (t, W, U)) = [ τ∘ (t - t); W - W; U - U ] , where τ(W, U, W, U) ∈^n such that τ_i = S_iW - U/S_iW - U. The invexity of the objective function in (<ref>) is already shown by <cit.>. It suffices to verify that our proposed η satisfies equation (<ref>) for f(t, W, U), g(t, W, U) and h(t, W, U). 
It can be trivially verified that our η_1 works for g(t, W, U) and h(t, W, U) due to their convexity. For f(t, W, U), ∂ f/∂ t_i = 1/2S_iW - U ∂ f/∂ W = ∑_i=1^n t_i + 1/2 S_i ∂ f/∂ U = ∑_i=1^n 1 - t_i/2 S_i It is easy to verify that f(t, W, U) - f(t, W, U) - η_1(·, ·)∇ f(t, W, U) = ∑_i=1^n 1/2S_iW + U + 1/2 t_i S_iW - U - ∑_i=1^n 1/2S_iW + U - 1/2t_i S_iW - U - ∑_i=1^n τ_i 1/2S_iW - U - W - W∑_i=1^n t_i + 1/2 S_i - U - U∑_i=1^n 1 - t_i/2 S_i = 0
http://arxiv.org/abs/2307.05608v1
20230710220655
RényiTester: A Variational Approach to Testing Differential Privacy
[ "William Kong", "Andrés Muñoz Medina", "Mónica Ribero" ]
cs.CR
[ "cs.CR" ]
RényiTester: A Variational Approach to Testing Differential Privacy William Kong, Andrés Muñoz Medina, Mónica Ribero August 12, 2023 ============================================================================================================= Governments and industries have widely adopted differential privacy as a measure to protect users' sensitive data, creating the need for new implementations of differentially private algorithms. In order to properly test and audit these algorithms, a suite of tools for testing the property of differential privacy is needed. In this work we expand this testing suite and introduce RényiTester, an algorithm that can verify if a mechanism is Rényi differentially private. Our algorithm computes a lower bound of the Rényi divergence between the distributions of a mechanism on neighboring datasets, only requiring black-box access to samples from the audited mechanism. We test this approach on a variety of pure and Rényi differentially private mechanisms with diverse output spaces and show that RényiTester detects bugs in mechanisms' implementations and design flaws. While deciding whether a general mechanism is differentially private is known to be NP-hard, we empirically show that tools like RényiTester provide a way for researchers and engineers to decrease the risk of deploying mechanisms that expose users' privacy. § INTRODUCTION In the past decade, there has been an explosion of data-driven technologies such as automated chat bots, medical image classifiers and face recognition systems. As these technologies become more ingrained in our everyday lives, society is realizing that sharing data with these technologies, even in aggregate, may pose privacy risks. With this realization, regulators and tech companies have had to update their systems to handle data in a privacy-safe manner. At the same time, users expect technology to be automated and frictionless. This automation is generally data-driven, putting the goals of usability and privacy seemingly at odds. Luckily, the concept of differential privacy <cit.> has demonstrated that high-quality statistical information or machine learning models can still be generated without compromising the privacy of any individual user. At the heart of differential privacy is the concept of a mechanism. A mechanism ℳ is a randomized function that maps a dataset D to an object, such as a set of statistics or a machine learning model. Differential privacy quantifies how much any individual user in the dataset affects the output of a mechanism, and this quantification is measured by the privacy budget ϵ. The smaller ϵ is, the less each user affects the outcome of the mechanism and, hence, the less information about specific users may be leaked from the output of the mechanism. This intuition is formalized by bounding the distance between the distributions of the output of ℳ on two neighboring datasets D and D'. More formally, this is the distance between the distributions of the random variables ℳ(D) and ℳ(D'), where D' is a dataset obtained from D by adding or removing a single record. The introduction of differential privacy to the research community has revolutionized the world of statistics and machine learning. Research in this field has been prolific and the community has shown that almost any learning task can be done in a differentially private manner. More importantly, mechanisms for these tasks are continuously being improved to extract the most utility, without compromising any privacy.
It is in these improvements that one of the issues of differential privacy is observed. Unlike other privacy notions, like k-anonymity, one cannot verify if a mechanism is differentially private based only on a single output of a mechanism. Indeed, differential privacy is an information-theoretical property of the mechanism that can only be verified by understanding the probability distribution over the space of outputs of a mechanism. This is straightforward when the mechanism is the well-known Laplace or Gaussian mechanism (albeit there are known errors in the implementation of even these mechanisms). However, as mechanisms become more accurate, the distributions generally become more complex. Fully understanding the distributions of such mechanisms becomes harder, and errors in the analysis of such distributions (or errors in the implementations of such mechanisms) have occurred in the past <cit.>. In some of these scenarios, mechanisms that were asserted to be differentially private at a certain privacy budget level ϵ turned out to be either private at a different level or not private at all. As these mechanisms get deployed into real-world systems, it is important for researchers and regulators to verify the privacy claims of their mechanisms. Ideally, given a privacy budget ϵ, there would be a system that takes, as input, the implementation of a mechanism and validates that the mechanism is differentially private at the asserted level of ϵ. The stochastic nature of differential privacy makes this difficult, since verifying differential privacy requires bounding the distance between two distributions, which is generally hard to estimate. In this paper we propose a tester for detecting if a mechanism satisfies so-called Rényi differential privacy (RDP) guarantees <cit.>. RDP provides some advantages over approximate (ϵ, δ)-differential privacy. For one, it provides a better understanding of the privacy properties of the Gaussian mechanism by smoothly quantifying the probability of failing to achieve privacy. Moreover, its composability properties make it a great tool for calculating overall privacy budgets of iterative algorithms such as the celebrated differentially private stochastic gradient descent (DP-SGD). Indeed, popular open source privacy accounting libraries <cit.> are implemented with RDP as their backbone. For this reason we believe that a Rényi DP tester would be of the utmost importance to the privacy community and, to the best of our knowledge, this is the first such tester to be proposed. As an added benefit, we show how a Rényi differential privacy tester can be used to test ϵ-differential privacy. Finally, we believe that estimating lower bounds of the Rényi divergence is of independent interest to the statistics community <cit.>. Another contribution of our work comes from the use of Bayesian optimization methods to find neighboring datasets D and D' for which the privacy guarantee is violated. This approach allows a user not only to discover whether a mechanism is private, but also provides information about the type of datasets for which the mechanism leaks the most information. Previous work either ignores this <cit.> or tests only on grids containing extremal datasets <cit.>. Our experiments show that in some cases the privacy violation does not occur in an extremal dataset. The rest of the paper is organized as follows. First, we introduce the necessary concepts to derive our statistical test, then we discuss previous work on testing of differentially private mechanisms.
We then proceed to introduce our test and its theoretical guarantees. Finally, we conduct extensive empirical evaluation to demonstrate that a) our distance estimator performs very well in practice and b) known privacy bugs can easily be detected using our tester. By open sourcing our tester we hope to provide a tool to researchers to easily verify the implementation of private mechanisms. § PRELIMINARIES Notation. : ^n → denotes a mechanism that receives an input dataset D ⊆^n with n records and domain ⊆ℝ^p and outputs a statistic y ∈⊆ℝ^d. §.§ Differential privacy and Renyi divergence Differential privacy <cit.> quantifies the level of risk that a user is exposed to when they contribute their data to a randomized mechanism. We formalize this concept in <ref> Datasets D,D' are called neighbors, denoted by D ∼ D', if D can be obtained from D' by adding or removing one record from D. A randomized mechanism : ^n → satisfies (ϵ, δ)–approximate differential privacy, or is (ϵ,δ)–differentially private ((ϵ,δ)–DP), if for every pair of neighboring datasets D and D' and every set O ⊆ in the output space, we have P((D) ∈ O) ≤ e^ϵP((D') ∈ O) +δ We say satisfies pure differential privacy, or is ϵ–differentially private (ϵ–DP), when δ=0. An interpretation of differential privacy suggests that a mechanism is private if the distance between the distributions of (D) and (D') is small (relative to ϵ and δ). Under this interpretation, novel notions of privacy have emerged by introducing different ways of measuring divergences between distributions. Notably, the Rényi divergence <cit.> (which we define below) has become a popular choice when analyzing the privacy properties of mechanisms such as DP-SGD <cit.>. Let (Ω, ) be an arbitrary measurable space. Let P and Q denote two probabilities in (Ω, ). We assume that P is absolutely continuous with respect to Q [A measure P is absolutely continuous with respect to Q if for every set A⊂Ω such that Q(A) = 0 then P(A) = 0.] and let dP/dQ denote the Radon-Nykodym derivative of P with respect to Q. For α>0, the Rényi divergence of order α between P and Q is given by D_α(P||Q) = 1/α-1ln∫(dP/dQ)^α dQ We now make two remarks about the above definition. First, as α↓ 0, the quantity D_α(P||Q) tends to the well-known Kullback–Leibler (KL) divergence. Second, when P and Q admit density functions p, q respectively, the above expression is equivalent D_α(P||Q) = 1/α - 1ln∫(p(x)^α/q(x)^α-1)dx We will abuse the notation sometimes for random variables X ∼ P and Y ∼ Q we will denote D_α(X || Y) = D_α(P||Q). Using this divergence, we can introduce the notion of Rényi differential privacy <cit.>. A randomized mechanism : ^n → satisfies (α,ϵ)–Rényi differential privacy if for every pair of neighboring datasets D and D', we have D_α((D)||(D')) ≤ϵ The next two results present some important properties about D_α(P||Q). Let 1<α_1<α_2 and P and Q be probability measures. Then D_α_1(P||Q) < D_α_2(P||Q) Let be an ϵ–differentially private mechanism and α>1. Then D_α((D)||(D'))≤min{ϵ, 2 αϵ^2}. α=∞ corresponds to pure-DP, i.e., is an ϵ–DP mechanism if and only if for any D∼ D', we have D_∞((D) || (D') ≤ϵ. § RELATED WORK There are generally two kinds of approaches used in differential privacy testing. The first approach uses adversarial attacks that try to break the privacy definition, like membership inference attacks <cit.> and data reconstruction attacks <cit.> of deep learning models trained with DP-SGD. 
Hence, the validation of whether a mechanism satisfies privacy is linked to the ability of the attack to succeed. The tests generated by these approaches are very valuable when trying to understand potential privacy risks on a single data set, by manually designing canaries that are expected to have highest sensitivity. However, they do not attempt to understand the worst case (unknown) scenario that differential privacy tries to protect. Running these tests generally requires white box access to the trained model and, more importantly, requires access to large portions of the training data, making auditing of a privately trained model impossible for someone who is not the data curator. Consequently, the resulting lower bounds from these approaches tend to be loose <cit.>. Moreover, the budget ϵ predicted by these experiments is generally much smaller than the theoretical budget. For example, some authors assert that their proposed models were private with an ϵ = 10^-3 when these models were trained without privacy. The second approach, that contains our proposed method, attempts to directly estimate the effective privacy parameters from black-box access to the tested mechanism and compare these effective privacy parameters with the ones stated by the privacy guarantee. This approach focuses on estimating the distance between the distribution induced by the mechanism in two different datasets. However, two key challenges arise: 1) how do we estimate the distance between distributions given two fixed neighboring datasets? and 2) how do we find the pair of neighboring datasets that maximize the distance between these distributions? The problem of estimating distance between distributions has been thoroughly studied in the statistics and hypothesis testing community. While providing a full overview of the literature in this space is beyond the scope of this work, we do highlight <cit.> which consider estimating probability distances through optimization methods over function spaces. Their work provided asymptotic guarantees while we provide strict finite sample complexities to obtain a lower bound on the Rényi divergence between two distributions. For the specific task of estimating the Rényi divergence, our estimator is inspired by the work of <cit.> which considers using neural networks to estimate Rényi divergence. The finite sample complexity bounds provided in that work, however, depend on the structure of the neural network and can rapidly become vacuous for the purpose of testing differential privacy. In contrast, our complexity bounds are independent of the network structure as we are primarily concerned with lower bounds on the Rényi divergence. In a related approach, <cit.> proposes to estimate the regularized kernel Rényi divergence, a lower bound on the Rényi divergence between distributions of a randomized mechanism. However, this approach requires knowledge about the covariance matrix of the underlying distributions, which is impractical for most mechanisms other than the Gaussian and Laplace mechanisms. Recent work on tight estimation of the privacy loss distribution <cit.> provides techniques for lower-bounding ϵ, and in some cases it can be tighter. Unfortunately, the previous method needs access to the cumulative distribution function of the distribution of the privacy loss random variable, which is precisely unknown in our considered setting. There is also a large body of literature pertaining to the testing of a mechanism's privacy, which we briefly go through here. 
<cit.> proposes a differential privacy tester for mechanisms with discrete and finite output, requiring access to the distribution over datasets and the probability measure over outputs induced by the tested algorithm. Instead of testing privacy in the worst-case setting, they test if the mechanism satisfies the guarantee over datasets with high probability. More importantly, the tester does not work for continuous output spaces. StatDP <cit.> proposes a system for detecting differential privacy violations by post-processing the output of the mechanisms through different statistics. The tester requires semi-black-box access to the mechanisms (as one of the post-processing techniques requires running the mechanism without privacy), which is infeasible for auditing certain systems. <cit.> presents a test for discrete (ϵ, δ)-DP mechanisms but omits the problem of finding the worst-case pair of neighboring datasets. DP-Sniper <cit.> provides an ϵ-DP tester that tries to explicitly find a set in the output space that maximizes the difference in probability for the output of the mechanism. The choice of neighboring datasets, however, is done using some hard-coded rules that may hinder the ability to detect violations on new tasks, and under non-classic neighboring relations like the ℓ_∞ relation instead of the classic swap or add/remove definition of neighboring. Their framework is also specific to detecting ϵ-DP, as low probability events are hard to estimate. In contrast, our mechanism estimates RDP, which averages out low probability events. Moreover, we use our estimates to inform the search of worst-case datasets through a Bayesian optimizer. <cit.> proposes a similar approach but targets specifically auditing the privacy of DP-SGD. <cit.> extends the work of <cit.> by developing data poisoning attacks to explore the space of datasets, focusing on learning algorithms for machine learning predictive models rather than arbitrary statistical tasks. § RÉNYI TESTER In this section, we propose , an RDP and ϵ-DP (or pure DP) tester that is able to find instances where non-private mechanisms do not satisfy the privacy guarantee that they claim to have. While the sample complexity to prove that a mechanism satisfies pure ϵ-DP can be exponentially large <cit.>, we use several heuristics that help detect mechanisms that are not private. We start by providing an overview of followed by a derivation of the algorithm's subroutines. We finish by proving a sample complexity bound that ensures the test results are valid with high probability. We introduce in <ref>. The tester receives, as input, (i) black-box access to the tested mechanism , (ii) a value ϵ if validating ϵ-DP or a tuple (α, ϵ) if validating RDP, and (iii) a probability of failure β. It then proceeds as follows: * Generate neighboring datasets (line <ref>). This is done according to the process discussed in <ref>. * Generate samples from the mechanism. Given the datasets, the tester generates samples from the mechanism for each dataset. * Obtain a lower bound for the Rényi divergence between both samples. The details of the estimation process are described in Section <ref> and through Corollary <ref>. * Detect if the mechanism violates privacy. Specifically, use the bound in Lemma <ref> with Corollary <ref>. §.§ Variational formulations We now present an estimator for a lower bound on the Rényi divergence of two distributions. Our estimator relies on a variational formulation of the Rényi divergence.
The first such formulation is a special case of the problem of calculating f-divergences via convex optimization <cit.>, and the formulation that we use (described below) is the one recently proposed by <cit.>. Let α>1, and P and Q be probability measures on (Ω, ). Let Γ be any function space such that M_b(Ω) ⊆Γ⊆ M(Ω), where M_b(Ω) and M(Ω) are the sets of bounded measurable and measurable functions on Ω, respectively. Then, D_α(P||Q) = sup_g ∈Γ [ α/(α-1) log(P e^(α-1)g(X)) - log(Q e^α g(X)) ] Exact computation of the supremum in <ref> is generally hard, given that the complexity of the function space can be arbitrarily large for general distributions. We propose to relax this definition in two ways that allow us to derive a lower bound on the Rényi divergence. First, we fix a space of functions Φ⊆Γ. By restricting the search space for the supremum, the obtained value will be a lower bound on the real divergence. For example, one can define Φ as the set of functions generated by dense neural networks with bounded outputs. Second, we estimate the expectations using approximate (empirical) measures from samples, P_n, Q_n. While this last step introduces estimation error, this error can be bounded with high probability, thus allowing us to find a confidence interval for the lower bound. Let h: Ω→ℝ be a function in Φ and α>1. Define R^h_α(P||Q) := α/(α-1) log( ∫ e^(α-1) h(x) dP) - log( ∫ e^α h(x) dQ) and, given samples X_1,..., X_n ∼ P, Y_1,...,Y_n ∼ Q, define its empirical counterpart R^h,n_α(X||Y) := α/(α-1) log( 1/n∑_i=1^n e^(α-1) h(X_i)) - log( 1/n∑_i=1^n e^α h(Y_i)). The next section derives a sample complexity bound to quantify the estimation error err(n, δ) := |R^h,n_α(X||Y)-R^h_α(P||Q)| with probability 1-δ. Note that with the error function, we can provide a lower bound on the true Rényi divergence between P and Q as follows: for h_0 ∈Φ and M_b(Ω) ⊆Γ⊆ M(Ω), we have R_α(P || Q) = sup_h ∈Γ R_α^h(P ||Q) ≥ sup_h ∈Φ R_α^h(P || Q) ≥ R_α^h_0(P ||Q) ≥ R^h,n_α(X||Y) - err(n, δ). §.§ Sample complexity The following theorem derives a technical inequality that every bounded mechanism satisfies with high probability for all neighboring datasets (cf. line <ref>). We provide a proof in the supplementary material. Let P and Q be two distributions. Let h: Ω→ℝ be a function such that sup_x ∈Ω h(x) < C, 𝐱 = (x_1,...,x_n) and 𝐲 = (y_1,...,y_n) be n realizations of P and Q, respectively, μ_1 = Pe^(α-1)h(x), and μ_2 = Qe^α h(x). Define also M_1 = e^(α-1)C and M_2=e^α C. Then, if γ∈ [0, min(M_1/μ_1, M_2/μ_2)], and n ≥ max(3M_1 log(2/β)/(μ_1 γ^2), 2M_2 log(2/β)/(μ_2 γ^2)), with probability at least 1-β, we have R_α^h(P||Q) ≥ R^h,n_α(𝐱 || 𝐲) - log((1+γ)/(1-γ)) Our sample complexity is dimension independent. On the other hand, there are results showing that the sample complexity of estimating the Rényi divergence from samples is lower bounded by e^d, where d is the dimension of the distribution output space. Our result does not contradict this fact because we are not estimating the true Rényi divergence, but a lower bound of the divergence. As the dimension of the mechanism increases, one could expect that a more complex space of functions is required for in the definition of the lower bound. The next result shows how our estimate R^h,n_α(𝐱||𝐲) is used as a lower bound for the true Rényi divergence. Let h: Ω→ℝ be a function such that sup_x ∈Ω |h(x)|≤ C, M denote a mechanism and D, D' be two neighboring databases, 𝐱 = (x_1, …, x_n) be a sample from (D) and 𝐲 = (y_1, …, y_n) be a sample from (D'), and β > 0 and γ be defined as in <ref>.
If n is chosen according to <ref>, then with probability at least 1 - β, we have D_α((D) || (D')) ≥ R_α^h,n(𝐱||𝐲) - log(1 + γ/1-γ). §.§ Selection of function h The previous section showed that we can choose a function h to lower bound the Rényi divergence between the output of a mechanism in two neighboring datasets. It remains to show how to select the function that obtains the tightest lower bound. In this section we provide a natural heuristic for choosing h. Fix C > 0 and let Φ denote a collection of functions bounded by C. We propose the following two step approach. First, sample 𝐱 = (x_1, …, x_n) from (D) and 𝐲 = (y_1, …, y_n) from (D'). Let h^* be defined by h^* = _h ∈Φ R_α^h(𝐱||𝐲). Second, given h^*, generate a new sample 𝐱' = (x'_1, …, x'_n) from (D) and 𝐲' = (y'_1, …, y'_n) from (D'), and use <ref> on this sample to obtain a lower bound on the true Rényi divergence. The process just described corresponds to lines <ref>–<ref> in <ref>. It is also worth mentioning that the above approach is somewhat similar to DP-Sniper <cit.>. Specifically, the latter approach uses a training sample to find a set where the DP guarantee can fail and then use a test sample to estimate the actual privacy violation. Model considerations. Even though the model complexity does not appear in the sample complexity of our mechanism, it is important to constrain the model class as our heuristic only makes sense when R_α^h.n(𝐱||𝐲) and R_α^h.n(𝐱'||𝐲') are close. §.§ Dataset generation One of the main difficulties of testing for differential privacy is the worst-case nature of differential privacy guarantees. Namely, to prove a mechanism is not private, one has to find a dataset where inequality (<ref>) or (<ref>) fails to hold. We propose to use black-box optimization to find datasets that maximize R^h,n(X||Y). Specifically, assuming that we have access to R_α^h,n: (D,D') ⊆×→ℝ, our goal is to produce a sequence (D_t,D_t')_t that approaches the optimum. In our case, we only need to generate a point (D,D') where line <ref> does not hold. Available techniques include pure exploration methods, such as grid search, and techniques that use prior information to trade between exploration and exploitation that can accelerate the optimization, such as evolutionary methods. We refer the reader to <cit.> for an overview. In our experiments we will use an open-sourced implementation of the well known Bayesian optimization software Vizier. § EXPERIMENTS This section presents numerical experiments for . We first demonstrate how can be used to detect pure differential privacy guarantees. We then focus on RDP violations and specifically look into two common errors in DP-SGD implementations. We include in the supplementary an analysis on the accuracy of estimating Rényi divergence. [The code for running the experiments will be open sourced at publication time. ] Throughout our exposition, we let ε>0 and n ≥ 1 be fixed and X∈ℝ^n denote the input dataset. Pure DP mean mechanisms. The first three mechanisms attempt to privately compute the mean by generating the random estimates (X) := ∑_i=1^n X_i/ñ + ρ_1, (X) := ∑_i=1^n X_i/n+ ρ_2 (X) := ∑_i=1^n X_i/n+ ρ_1 where ñ = max{10^-12, n + τ}, τ∼ Laplace(0,2/ε), ρ_1 ∼ Laplace(0,2/[ñε]), and ρ_2 ∼ Laplace(0,2/[n ε]). 
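A minimal sketch of these three mean estimators may help make the comparison concrete; the function names below are ours (the paper's mechanism names are not reproduced here), and we assume the records lie in a bounded range so that the Laplace scales quoted above apply:

```python
import numpy as np

def private_mean(x, eps, rng):
    # Half of the budget privatizes the count, half the count-normalized sum.
    n_noisy = max(1e-12, len(x) + rng.laplace(0.0, 2.0 / eps))
    return np.sum(x) / n_noisy + rng.laplace(0.0, 2.0 / (n_noisy * eps))

def buggy_mean_true_scale(x, eps, rng):
    # Flaw: the noise scale is calibrated with the true (private) count n.
    n = len(x)
    return np.sum(x) / n + rng.laplace(0.0, 2.0 / (n * eps))

def buggy_mean_true_denominator(x, eps, rng):
    # Flaw: the noise scale uses a privatized count, but the mean itself
    # still divides by the true count n.
    n_noisy = max(1e-12, len(x) + rng.laplace(0.0, 2.0 / eps))
    return np.sum(x) / len(x) + rng.laplace(0.0, 2.0 / (n_noisy * eps))

rng = np.random.default_rng(0)
x = np.clip(rng.normal(0.5, 0.2, size=100), 0.0, 1.0)  # records assumed in [0, 1]
print(private_mean(x, eps=1.0, rng=rng))
```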
The first estimate satisfies ϵ-DP, the second one violates the guarantee because it has access to the private number of points, and the third one privatizes the number of points to estimate the scale of the noise added to the mean statistic but the mean itself is computed using the non-private number of points. Sparse vector technique mechanisms. The next six mechanisms address different private and non-private implementations of the sparse vector technique (SVT), a mechanism for releasing a stream of c queries on a fixed dataset. SVT mechanisms compare each query value against a threshold and the given algorithm returns certain outputs for a maximum number of queries c. We denote these by – and they correspond to Algorithms 1-6 in <cit.>. and satisfy ϵ-DP. satisfies (1+6c/4)-DP, and ,, and do not satisfy ϵ-DP for any finite ϵ. Rényi DP mean mechanisms. To verify the ability of our tester to detect violations of Rényi differential privacy we first instantiate , a non-private Gaussian mean analog of Non-Private-Mean1 that uses the true number of points to compute the mean and noise scale, but adds Gaussian noise instead of Laplace noise. DP-SGD mechanisms. We also include two flawed DP-SGD's <cit.> implementations. Recall that DP-SGD is parametrized by a clip norm G (which clips individual per-example gradients to have ℓ_2 norm G) and a noise multiplier σ, and that a single iteration of DP-SGD is guaranteed to be (α, ϵ)-RDP for ϵ = 2α/σ^2G^2. The first implementation simulates a scenario where a developer assumes they are using a noise multiplier σ_theory but in reality uses a noise multiplier σ_effective. We dub this scenario . For the second implementation, we consider an accounting error when using batch or micro-batch clipping instead of per-example clipping in DP-SGD. Per-example clipping is memory and computationally expensive when training high-dimensional models. To address these constraints at the cost of utility, practitioners split a batches of size n into m microbatches of size n/m, compute average gradients over each micro-batch, clip and noise the per-microbatch gradient, and finally average the resulting noisy micro-batch gradients. It sometimes goes unnoticed but the sensitivity of per-microbatch gradients is 2G instead of G. below refers to an implementation of a DP-SGD optimizer that receives a model f_θ, learning rate, noise multiplier σ, clip norm value G, number of micro-batches, and takes a DP-SGD with noise scaled by σ G respect to the parameters θ, and does privacy accounting using a library that receives batch size, number of epochs, noise multiplier, assuming per-example clipping and ignoring the of microbatch clipping. The final budget should be ϵ = 2α/σ^2 but by ignoring microbatching results in the misleadingly stricter guarantee of ϵ = α/2σ^2. Baselines. We compare 's auditing capacity first with the the approximate differential privacy tester () presented in <cit.>. For completeness we introduce this algorithm as <ref> in the supplement. For a fixed pair of neighboring datasets, the algorithm estimates from samples the probability z of the algorithm violating a pure ϵ-differential privacy guarantee (line <ref>). If the mechanism is (ϵ, δ)–differentially private, then z<δ up to estimation error η (line <ref>). We also compare our method with DP-Sniper <cit.>. Recall that the original DP-Sniper paper uses different neighboring relationships for different mechanisms. 
Below we compare the methods under the same neighboring relationships to elucidate the power of these testers under similar conditions. DP-Sniper is generally unsuited for RDP, hence we do not include a comparison in the experimental section for non-pure DP mechanims. The introduced in <cit.> is similar to but requires knowledge of certain covariance matrices that are generally not known a priori. Consequently, we do not compare with this test in our auditing experiments, but do compare it with in the estimation of Rényi divergence between Gaussian distributions in <ref>. Methodology. We run tester with Φ being the class of functions generated by a two-layer dense neural networks consisting of 100 units for each hidden layer. To ensure the output of the network is bounded we use a scaled hyperbolic tangent loss activation scaled to C=16ϵ for the last layer. proposes its own grid of test cases to generate pairs of datasets. and are run on trials by generating pairs of neighboring datasets using an open sourced version of Vizier <cit.>, with an underlying NSGA-II evolutionary algorithm <cit.>. This method performed slighly better than a a random search algorithm, but obtaining similar speed of detection, or no detection at all (see <ref>). We test each mechanism for different values of ϵ and α, and test 5 times for each mechanisms. We found that both and had different estimator values over the five runs but the outcome (False or Passed) was consistent across runs. Pure DP results. The results of our experiments are summarized in Table <ref>. is able to detect all one-dimensional non-private mechanisms while the fails to detect , and is not defined for high dimensional output spaces, and cannot apply it to sparse vector technique algorithms. misses and but catches all the errors for at least a pair of parameters (α, ϵ). DP-Sniper suceeds at detecting the same mechanisms that . However, it requires 10M samples while only needs 400K samples. Rényi DP results. detects all errors while the does not, even when varying the outcome's space discretization size. It does so by evaluating less than 10 pairs of neighboring datasets (we present average number of trials in the appendix). DP-Sniper does not apply in this setting. In the appendix we further investigate the potential of to detect 's implementation for different values of σ_effective. presents an example where exploring extremal datasets is not useful for catching privacy violations but our dataset generation technique can find pairs of datasets violating the privacy constraint on an average of 5 trials. In this case, assuming gradients are in the [-2,2] interval, and assuming a clip norm of G=1, the privacy violation occurs at datasets neighboring datasets D={-1 } and D'={-1,2 }, where the sensitivity of the clipped averaged gradient is 2 and not at neighboring datasets D={-2} and D={-2,2 } where the sensitivity is 1. It is important to highlight that our implementation for detecting errors for higher values of σ_effective is mostly limited due to the cap C used to define the space Φ. This capping parameter noticeably delivers smaller divergence estimates making it harder to find privacy leaks. Unfortunately, increasing this constant substantially increases the required Ω(e^α C) sample size. In the following section we find that removing this cap provides very accurate estimation for Gaussian distributions. We leave tightening the sample complexity as future work. 
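The microbatch accounting issue discussed above is easy to reproduce numerically; in the hedged sketch below the toy one-dimensional "gradients" and names are ours, and the only point is that the clipped microbatch average can change by more than the clip norm G when one example is added, approaching 2G in general:

```python
import numpy as np

def clipped_microbatch_average(grads, clip_norm):
    # Microbatch-style clipping: average the per-example gradients first,
    # then clip the average to norm clip_norm.
    avg = float(np.mean(grads))
    return avg * min(1.0, clip_norm / max(abs(avg), 1e-12))

G = 1.0
d = np.array([-1.0])             # dataset D from the example in the text
d_prime = np.array([-1.0, 2.0])  # neighboring dataset D' (one record added)
change = abs(clipped_microbatch_average(d, G)
             - clipped_microbatch_average(d_prime, G))
print(change)  # 1.5 > G: accounting that assumes sensitivity G is too optimistic
```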
The high sample complexity for measuring divergence distribution seems to be universal. In <ref> we add the number of samples for , , and . requires at least one order of magnitude less than baselines and does not need a discretization parameter m. § DISCUSSION We presented a new test for detecting privacy violations that is suited to pure and Rényi differential privacy and, hence, is able to detect flaws in non-private mechanisms. While failing to detect a few pure differential privacy leaks, it appears to be the first one to test Rényi differential privacy guarantees with only black-box access to the mechanism. We highlight that our tester is particularly flexible and that it can easily be improved as we derive better sample complexity bounds for variational approaches of Rényi divergence estimators. As demonstrated in <ref>, there is still a noticeable gap between the theoretical and practical error bounds on these estimates. We leave possible theoretical improvements as a future area of research. plain § ADDITIONAL PROOF DETAILS Below we introduce Chernoff's multiplicative bound, that we use in the proof of theorem <ref> Let X_1,..., X_n be independent random variables drawn according to some distribution with mean μ and support in [0,M]. Then for any γ∈ [0,M/μ-1] the following inequalities hold: P(1/n∑_i=1^n X_i ≥(1+γ) μ) ≤ e^-nμγ^2/3M P(1/n∑_i=1^nX_i ≤(1-γ) μ) ≤ e^-nμγ^2/2M [<ref>] Let P and Q be two distributions. Let h Ω⊆→ℝ be a function such that sup_x ∈Ω h(x) < C, 𝐱 = (x_1,...,x_n) and 𝐲 = (y_1,...,y_n) be n realizations of P and Q, respectively, μ_1 = Pe^(α-1)h(x), and μ_2 = Qe^α h(x). Define also M_1 = e^(α-1)C and M_2=e^α C. Then, if γ∈ [0, min(M_1/μ_1, M_2/μ_2)], and n ≥max(3M_1log(2/β)/μ_1γ^2, 2M_2log(2/β)/μ_2γ^2), with probability at least 1-β, we have R_α^h (P||Q) ≥ R^h,n_α(𝐱 || 𝐲) - log(1+γ/1-γ) From Chernoff's multiplicative bound ( <ref>, <ref>) we know that for γ_1∈[0, M_1/μ_1-1], with probability less than e^-nμ_1γ_1^2/3M_1, we have 1/n∑_i=1^n e^(α-1)h(x_i)/𝔼 [e^(α-1)h(X)] ≥ (1+γ_1) This implies that log[ 1/n∑_i=1^n e^(α-1)h(x_i)] - logPe^(α-1)h(X) ≥log(1+γ_1) ≥α-1αlog(1+γ), or equivalently, αα-1 log1/n∑_i=1^n e^(α-1)h(x_i) -αα-1logPe^(α-1)h(X) ≥log(1+γ_1) Note that by setting n≥3e^(α-1)Clog(2/β)/μ_1 γ_1^2 the above bound holds with probability at most β/2. A similar analysis (using <ref>) shows that for n≥2e^α Clog(2/β)/μ_2 γ_2^2 with probability at most β/2 the following bound holds: logQe^α h(y) -log[1/n∑_i=1^n e^α h(y_i)] ≥log(1/1-γ_2). Finally, summing <ref> and <ref>, and using the union bound, with probability 1-β we have that R^h,n(𝐱||𝐲) - log(1/1-γ_2) - log(1+γ_1) ≤ R^h(P || Q). The proof follows by letting γ = min(γ_1, γ_2). § EXPERIMENT DETAILS §.§ Approximate DP tester <cit.> We include the approximate DP tester in §.§ Further comparison with DP-Sniper Below we include a more complete comparison against on SVT mechanisms with add/remove and ℓ_∞ neighboring relations and with different sample complexities. only has an adavntage over when using the ℓ_∞ relation and using at least 10 million samples. For the more common add/remove definition has the same performance as . §.§ implementation details Function class. For all auditing mechanisms we used as the underlying function class Φ the family of fully connected neural network with two dense layers, each with 100 units. To ensure h ∈Φ are bounded but contain the real value of the divergence (ϵ for which we test) we add scaled hyperbolic tangent activations scaled to 16ϵ. Privacy parameters. 
Below we show results with different selections of hyperparameters ϵ and α used for auditing mechanisms. The range of ϵ and α was selected based on sample complexity sizes that allowed us to run the tests in an efficient manner. Besides , we notice that results tend to be consistent across the selections of these parameters. §.§ Exploring the space of datasets Average number of trials to detect privacy violations. In <ref> we provide details on the number of datasets our algorithm needs to test before finding a dataset where the privacy guarantee is broken. Random Search vs. NSGA-II In <ref> we show the number of trials ran by our algorithm before finding a dataset where a privacy violation occurs. We compare random and NSGA-II algorithms. §.§ DP-SGD mechanisms details All DP-SGD mechanisms train a simple model minimizing the loss function ℓ(x_i^d_i=1, w) = ∑^d_i=1w*x_i. Below we investigate the potential of to detect 's implementation for different values of σ_effective. For this, we estimated the Rényi divergence varying σ_effective on fixed neighboring datasets D = {-2,2 }, D' = {-2}. <ref> shows that we detect the error for σ_effective≤ 3. §.§ Renyi divergence estimation methods This subsection compares the estimator used by the <cit.> and the one proposed in this work in <ref>. Given a positive-definite kernel k(·,·) and some λ > 0, the approach in <cit.> considers estimating the regularized kernel Rényi divergence, a variant of the Rényi divergence in which its usual input distributions P and Q are replaced by Σ_P and Σ_Q + λ Id, respectively, where Σ_F denotes the k-covariance operator of a distribution F. Specifically, this estimator replaces Σ_P and Σ_Q with their empirical estimates. It is then shown that an O(1/n) sample complexity obtained for the empirical estimator, and that the (exact) Rényi divergence variant provides a lower bound on the classic Rényi divergence. Both methods have a theoretical error bounded by O(1/√(n)). We sample from two gaussian distributions with different means where the exact value of the divergence is known, namely P = (0,σ), Q = (μ, σ), and D_α(P||Q) = αμ^2/2σ^2. We set μ = σ = 1 and test for values of α = 1.5, 2., same values used in our auditing tests. We calculate the estimates five times and report the average over runs. As mentioned in the previous section, our estimator differs from the one used in the tester as we remove the final activation of our neural network and we allow the network to generate unbounded predictions. While theoretically we cannot use this estimator to detect violations of privacy, it is important to understand its empirical performance. In <ref>, we show the results for estimating the Rényi divergence with using 100 and 1000 samples. Observe that increasing the number of samples does not improve the quality of estimation. Further, it significantly harms the performance of due to its Ω(n^2) computational complexity. We plot the true Rényi divergence in blue. depends on the regularization parameter λ in the x–axis; as pictured, the estimator is highly sensitive to this value. λ=0.001 achieves the best performance when working with 100 samples, while achieving the worst for 1000 samples. <ref> shows the estimated divergence using (<ref>). Given that we do not need a confidence interval as in the previous section we work with an unbounded neural network. We observe that with 10K samples we can achieve tight estimates with small variance.
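As a sanity check on the Gaussian experiments above, the closed form D_α(N(0,σ²) || N(μ,σ²)) = αμ²/(2σ²) can be compared with the empirical variational objective when the witness function h is fixed to the exact log density ratio; this is possible here only because both densities are known, whereas the tester learns h from a restricted class. A small sketch under these assumptions:

```python
import numpy as np

def renyi_gaussian_closed_form(mu, sigma, alpha):
    # D_alpha( N(0, sigma^2) || N(mu, sigma^2) ) = alpha * mu^2 / (2 sigma^2)
    return alpha * mu**2 / (2 * sigma**2)

def variational_estimate(x, y, h, alpha):
    # Empirical objective R^{h,n}_alpha(x || y), with x ~ P and y ~ Q.
    term_p = np.log(np.mean(np.exp((alpha - 1) * h(x))))
    term_q = np.log(np.mean(np.exp(alpha * h(y))))
    return alpha / (alpha - 1) * term_p - term_q

mu, sigma, alpha, n = 1.0, 1.0, 2.0, 10_000
rng = np.random.default_rng(0)
x = rng.normal(0.0, sigma, n)   # samples from P
y = rng.normal(mu, sigma, n)    # samples from Q
# Optimal witness h = log dP/dQ, known in closed form for these Gaussians.
h = lambda t: (mu**2 - 2 * mu * t) / (2 * sigma**2)
print(renyi_gaussian_closed_form(mu, sigma, alpha))  # 1.0
print(variational_estimate(x, y, h, alpha))          # close to 1.0
```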
http://arxiv.org/abs/2307.03908v1
20230708054722
Incorporating Deep Q -- Network with Multiclass Classification Algorithms
[ "Noopur Zambare", "Ravindranath Sawane" ]
cs.LG
[ "cs.LG" ]
1 Indian Institute of Technology, Jodhpur, India 2 Western University, Ontario, Canada In this study, we explore how Deep Q-Network (DQN) might improve the functionality of multiclass classification algorithms. We will use a benchmark dataset from Kaggle to create a framework incorporating DQN with existing supervised multiclass classification algorithms. The findings of this study will bring insight into how deep reinforcement learning strategies may be used to increase multiclass classification accuracy. They have been used in a number of fields, including image recognition, natural language processing, and bioinformatics. This study is focused on the prediction of financial distress in companies in addition to the wider application of Deep Q-Network in multiclass classification. Identifying businesses that are likely to experience financial distress is a crucial task in the fields of finance and risk management. Whenever a business experiences serious challenges keeping its operations going and meeting its financial responsibilities, it is said to be in financial distress. It commonly happens when a company has a sharp and sustained recession in profitability, cash flow issues, or an unsustainable level of debt. DQN (Deep Q - Network)Deep Reinforcement Learning Financial Distress Multiclass Classification, Decision Tree Classifier Naive Bayes, Random Forest Classifier § INTRODUCTION §.§ Background The goal of Reinforcement Learning (RL), a branch of machine learning, is to train agents how to make decisions sequentially in an environment that optimises a reward signal. By interacting with the environment, getting feedback in the form of rewards or penalties, and adapting their behaviour in response, RL algorithms learn through trial and error. The Deep Q-Network (DQN) is a deep reinforcement learning method that combines the Q-learning algorithm and the capability of deep neural networks. Financial distress refers to a state in which a company faces considerable challenges in meeting its financial obligations. Early indications of financial problems might help proactive actions like restructuring, obtaining more finance, or putting cost-cutting measures into place. Machine learning has made breakthroughs in recent years when it comes to applying reinforcement learning algorithms, particularly DQN, to different problem domains. We use a wide range of supervised learning algorithms, such as Decision Tree, Random Forest Classifier, and Naive Bayes, to create the DQN framework. The DQN ensemble's underlying models are represented by these algorithms. We intend to study the potential advantages and performance enhancements that can be achieved by combining supervised learning with the reinforcement learning approach of DQN using supervised learning algorithms as the foundation models. The use of DQN for multiclass classification to forecast financial difficulties in businesses is explored in this study. §.§ Problem Statement The goal of this paper is to investigate the use of Deep Q-Network in multiclass classification problems. We intend to adapt and use DQN's skills for resolving multiclass classification issues despite the fact that its typical application is mostly in the field of reinforcement learning. The subject of interest is the application of DQN for multiclass classification to predict financial distress in businesses. By effectively resolving this problem, we want to open up the possibility of applying reinforcement learning principles to a variety of classification problems. 
§ STATE OF THE ART In DQN, our goal is to train an action-value function Q(s, a) that calculates the predicted cumulative reward for performing action 'a' in state 's'. The Bellman Equation or Q-Learning update equation is defined as follows: Q(s,a) = (1 - ϵ) Q(s,a) + α[ r + γ max Q(s^', a^') - Q(s,a) ] where, Q(s, a) = Current estimate of the predicted future benefits of action 'a' in state 's' ϵ = exploration-exploitation trade-off α = learning rate r = immediate received reward γ = discount factor A variation of the Q-learning process called Deep Q-Network makes use of neural networks to make approximations of the Q-value function. The expected reward for performing a specific action in a given condition is provided by the Q-value function. The Q-value function is represented as a table in conventional Q-learning but as a neural network in DQN. Experience replay and a technique called fixed Q-targets are both used by the DQN algorithm to stabilise the learning process. Experience replay involves sampling small batches of experiences for training and storing observed transitions (s, a, r, and s') in a replay buffer. Using a target network with set parameters for a predetermined number of iterations before updating it with the parameters of the online network is recognised as leveraging fixed Q-targets. § METHODOLOGY §.§ Dataset The study involves the use of a dataset gathered by Kaggle that includes different financial parameters and company characteristics. The dataset, which is accessible in CSV format, includes statistics on the company's performance as well as relevant contextual information. Using methods like label encoding, a preprocessing step is implemented to handle missing data, normalise features, and transform categorical variables. Then, training and testing sets are created from the preprocessed dataset. §.§ Baseline Multiclass Classification Algorithms §.§.§ Decision Tree In this algorithm, the space of features is recursively divided according to a set of criteria in order to generate a decision tree. Information gain or Gini impurity is the most widely used criterion. They can handle categorical and numerical features, as well as non-linear relationships, and they can capture both. Decision trees, show a tendency to overfit the training set if they are not appropriately regularised or pruned. Overfitting can be reduced using strategies like pruning, establishing a minimum number of samples needed to split a node, or using ensemble methods. §.§.§ Random Forest Classifier An ensemble technique called the Random Forest Classifier combines several decision trees to produce predictions. A random subset of features is taken into account at each split of each tree, which is trained on a bootstrap sample of the training data. By combining the predictions of various trees, either through majority voting or averaging, the final prediction is obtained. §.§.§ Naive Bayes The Naive Bayes algorithm is a probabilistic classifier that relies on the Bayes theorem and makes the assumption that features are independent of the class. Given the input features, it calculates the probabilities of each class and chooses the class with the highest probability as the prediction. §.§ Multiclass Classification Algorithms with DQN Integration §.§.§ Defining Agent The DQN class is used to represent the agent. Based on the input features given, it acts as the decision-making entity that learns to categorise the different levels of financial distress. 
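For reference, the update rule quoted in the State of the Art section, written in its standard tabular form, and the ϵ-greedy action selection it is paired with can be sketched as below; this is a generic illustration in which actions play the role of class labels, not the paper's ensemble-of-classifiers implementation:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, lr=0.1, gamma=0.99):
    # Standard tabular form: Q(s,a) += lr * (r + gamma * max_a' Q(s',a') - Q(s,a))
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += lr * (td_target - Q[s, a])

def epsilon_greedy(Q, s, eps, rng):
    # Explore a random class label with probability eps, otherwise exploit.
    if rng.random() < eps:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[s]))

rng = np.random.default_rng(0)
Q = np.zeros((10, 3))       # 10 discretized states, 3 class labels as actions
a = epsilon_greedy(Q, s=0, eps=0.1, rng=rng)
q_update(Q, s=0, a=a, r=1.0, s_next=1)  # +1 reward for a correct prediction
```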
The agent employs a method akin to the DQN, using a group of Decision Tree Classifier, Random Forest Classifier and Naive Bayes models as the Q-network. §.§.§ Defining Environment In this case, the environment is the classification problem itself, which involves determining the levels of economic distress based on the given input features. The agent receives rewards from the environment as feedback, which helps it improve its classification performance. §.§.§ State Representation The input features that were utilised to train the agent define the state representation. In this instance, the features Company, Time, x1, x2, x3, and x4 serve as representations of the state. These features are taken out of the data frame and sent to the classification agent as input. §.§.§ Setting Reward Function The act() method of the DQN class contains a definition of the reward. If any of the true class labels in the y variable match the predicted action (class label), the agent is rewarded with a value of 1. If not, it is rewarded with -1. The goal of the reward system is to encourage the agent to forecast classes correctly. §.§.§ Selection of Action The action selection method makes sure that the model chooses the best class label depending on the situation at hand and previously learnt information. The class labels that are available in this situation make up the action space. To determine the class for a particular input, the agent will select an action (class label) from this collection. The number of classes in the classification problem and the size of the action space are related. §.§.§ Training Iterating through episodes and the stages in each episode are both parts of the training process. In agreement with an epsilon-greedy exploration-exploitation strategy, the agent chooses a course of action (class label). In accordance with the accuracy of its forecast, it is rewarded, and the ensemble of decision tree models is updated. For the specified number of episodes, training is ongoing. §.§.§ Evaluation By comparing the predicted labels with the actual labels using the test data, it is feasible to assess how accurate the agent's predictions were. The calculated accuracy of the base model and the accuracy of the DQN-based agent after training are compared. §.§ Evaluation Metrics The various metrics involved in analysis are accuracy, recall score and precision score. However, the performance of models was also analyzed using a confusion matrix. § RESULTS AND ANALYSIS §.§ Comparison with Baseline Algorithms On the chosen benchmark datasets, the performance of the proposed framework, which incorporates Deep Q-Network with multiclass classification algorithms, is compared with that of the baseline algorithms. 4|c|Comparitive Analysis Model Accuracy Recall Precision Decision Tree 0.98 0.50 0.50 With DQN 0.33 0.28 0.34 Random Forest 0.99 0.50 0.50 With DQN 0.32 0.29 0.34 Naive Bayes 0.99 0.75 0.67 With DQN 0.31 0.28 0.34 §.§ Analysis of Computational Efficiency §.§.§ Decision Tree * In comparison to the DQN-based model, the baseline model (Decision Tree Classifier) often takes less time to train. As they do not require iterative optimisation, decision trees can be trained quickly because they directly learn the decision boundaries and feature splits. In comparison, the DQN-based model employs a more computationally costly training procedure that requires repeatedly training an ensemble of Decision Tree Classifiers. 
* Compared to the DQN-based approach, the baseline model often uses a smaller amount of memory. To store the separate models, the ensemble of Decision Tree Classifiers utilised in the DQN model needs more memory. The baseline approach, in comparison, only has to keep one decision tree, which requires less memory. * Comparing the baseline model to the DQN-based approach, the baseline model is certainly more effective in terms of computation. §.§.§ Random Forest Classifier * In comparison to the DQN-based model, the baseline model (Random Forest Classifier) often requires less training time. Due to its ability to create numerous decision trees at once while using parallel processing, random forests can be trained effectively. A random subset of characteristics and data samples is used to train each decision tree individually. The DQN-based model, on the other hand, requires several pieces of training for an ensemble of Random Forest Classifiers, which can be computationally more taxing. * Usually, the baseline model uses less memory than the DQN-based model. The DQN model's ensemble of Random Forest Classifiers requires extra memory to store each individual model. * In conclusion, compared to the DQN-based model, the baseline model (Random Forest Classifier) is anticipated to be computationally more efficient in terms of training time, inference time, and memory use. §.§.§ Naive Bayes * Due to its simplicity, the basic framework (Gaussian Naive Bayes) is computationally effective for both training and prediction. While the DQN model similarly employs Naive Bayes classifiers, the ensemble technique adds more complexity and increases processing overhead when compared to the base model. * A single Naive Bayes classifier, along with its associated parameters and probability distributions, must be stored in memory by the base model (Gaussian Naive Bayes). In order to store an ensemble of Naive Bayes classifiers, which consists of various models with their unique parameters and probability distributions, the DQN model needs memory. § DISCUSSION §.§ Advantages Multiclass classification methods that incorporate Deep Q-Network (DQN) have various benefits and provide special capabilities to the task. Benefits involve : * Handling Complex Decision-Making * Adaptability to Dynamic Environments * Handling Imbalanced Datasets * Real-time Classification §.§ Limitations §.§.§ Large Memory Requirements Especially when employing experience replay, which includes storing and sampling from a significant replay buffer, DQN often needs a lot of RAM. §.§.§ Curse of Dimensionality Finding the most effective measures and achieving efficient convergence can be more difficult when the DQN training and learning process is impacted by the curse of dimensionality. Consequently, DQN's ability to do multiclass classification well may be constrained by its ability to handle significant feature spaces. §.§.§ Limited Generalization to New Classes It often acquires policies unique to the classes found in the training set. They are efficient at handling well-known classes, but they have a limited ability to generalise to unfamiliar or new classes. In dynamic classification contexts where new classes continually emerge, the technique is less adaptive since incorporating new classes into the model often requires retraining or considerable fine-tuning. §.§ Future scope Future prospects are promising when Deep Q-Network is incorporated into multiclass classification algorithms. 
Such as Transfer Learning and Knowledge Transfer, Real-time Classification, Hierarchical Multiclass Classification, Adaptive Learning and Dynamic Feature Selection, and many others. § CONCLUSION The study uses multiclass classification to show the significance of using DQN for financial distress prediction in businesses. The study's findings may help businesses, investors, and financial institutions make informed decisions and take preventive action to reduce the risks associated with the financial crisis. Possible reasons for less accuracy by the DQN model than the base model : * The classifier for the base model is trained directly on the labelled training data using a traditional supervised learning methodology. In a single step, it learns the probability distributions and class boundaries from the data. While the DQN model iteratively changes its ensemble of classifiers based on the rewards it receives from the environment, it is trained using a reinforcement learning methodology. This recurrent training procedure may generate noise and instability, resulting in less accurate convergence. * While lowering bias and variance can help ensembles perform better, they also add to the complexity and risk of inconsistencies across the various models. Lower accuracy may be the consequence if the ensemble is unable to fully capture the underlying patterns and relationships. § REFERENCES * Melrose Roderick, James MacGlashan, Stefanie Tellex "Implementing the Deep Q-Network" arXiv:1711.07478v1 [cs.LG] 20 Nov 2017 * Jianqing Fan, Zhaoran Wang, Yuchen Xie, Zhuoran Yang Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:486-489, 2020 * Z. Gao, Y. Gao, Y. Hu, Z. Jiang and J. Su, "Application of Deep Q-Network in Portfolio Management," 2020 5th IEEE International Conference on Big Data Analytics (ICBDA), Xiamen, China, 2020, pp. 268-275, doi: 10.1109/ICBDA49040.2020.9101333. * Mills P. Solving for multi-class: a survey and synthesis. arXiv preprint arXiv:1809.05929. 2018 Sep 16. * Wen G, Wu K. Building decision tree for imbalanced classification via deep reinforcement learning. Asian Conference on Machine Learning 2021 Nov 28 (pp. 1645-1659). PMLR. * Fu Q, Li K, Chen J, Wang J, Lu Y, Wang Y. Building energy consumption prediction using a deep-forest-based DQN method. Buildings. 2022 Jan 27;12(2):131. * Reddy EM, Gurrala A, Hasitha VB, Kumar KV. Introduction to Naive Bayes and a Review on Its Subtypes with Applications. Bayesian Reason. Gaussian Process. Mach. Learn. Appl. 2022 Apr 19:1-4. * Whitaker RB. The early stages of financial distress. Journal of Economics and Finance. 1999 Jun;23(2):123-32. * Lau AH. A five-state financial distress prediction model. Journal of Accounting Research. 1987 Apr 1:127-38. * Mselmi N, Lahiani A, Hamza T. Financial distress prediction: The case of French small and medium-sized firms. International Review of Financial Analysis. 2017 Mar 1;50:67-80. * Grandini M, Bagli E, Visani G. Metrics for multi-class classification: an overview. arXiv preprint arXiv:2008.05756. 2020 Aug 13. * Toupas P, Chamou D, Giannoutakis KM, Drosou A, Tzovaras D. An intrusion detection system for multi-class classification based on deep neural networks. In2019 18th IEEE International Conference on machine learning and Applications (ICMLA) 2019 Dec 16 (pp. 1253-1258). IEEE. * Li J, Liu Y, Yin R, Zhang H, Ding L, Wang W. Multi-class learning: From theory to algorithm. Advances in Neural Information Processing Systems. 2018;31.
http://arxiv.org/abs/2307.05142v1
20230711094211
Dissecting the $γ$-ray emissions of the nearby galaxies NGC 1068 and NGC 253
[ "Shunhao Ji", "Zhongxiang Wang", "Yi Xing", "Dahai Yan", "Jintao Zheng" ]
astro-ph.HE
[ "astro-ph.HE" ]
Department of Astronomy, School of Physics and Astronomy, Yunnan University, Kunming 650091, China; [email protected] 0000-0003-1984-3852]Zhongxiang Wang Department of Astronomy, School of Physics and Astronomy, Yunnan University, Kunming 650091, China; [email protected] Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, China Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, China Department of Astronomy, School of Physics and Astronomy, Yunnan University, Kunming 650091, China; [email protected] Department of Astronomy, School of Physics and Astronomy, Yunnan University, Kunming 650091, China; [email protected] Intrigued by recent high-energy study results for nearby galaxies with emission and in particular NGC 1068 that has been detected as a neutrino-emitting source by the IceCube Neutrino Observatory, we conduct detailed analysis of the γ-ray data for the galaxies NGC 1068 and NGC 253, obtained with the Large Area Telescope onboard the Fermi Gamma-ray Space Telescope. By checking for their possible spectral features and then constructing light curves in corresponding energy ranges, we identify flare-like activity from NGC  1068 in ≥2 GeV energy range and significant long-term variations of NGC 253 in ≥5 GeV energy range. In the former, the emission appears harder in the two half-year flare-like events than that in the otherwise `quiescent' state. In the latter, there is a 2-times decrease in the flux before and after MJD 57023, which is clearly revealed by the test-statistic maps we obtain. Considering studies carried out and models proposed for the γ-ray emissions of the two sources, we discuss the implications of our findings. The jet in NGC 1068 may contribute to the emission. The nature of the long-term variations in NGC 253 is not clear, but the variation part of the emission may be connected to the very-high-energy (VHE) emission of the galaxy and could be verified by VHE observations. § INTRODUCTION The launch of the Fermi Gamma-ray Space Telescope (Fermi) and the observations carried out with the large Area Telescope (LAT) onboard it have confirmed the theoretical expectations (e.g., ) that star-forming galaxies can emit significant γ-rays <cit.>. Due to high density of cosmic rays, emanated from copious supernova remnants that are related to the strong star formation (e.g., and references therein), such emission is produced through the proton-proton collisions and/or leptonic processes (i.e., bremsstrahlung or inverse Compton scattering; e.g., ). Thus far, more than 10 star-forming galaxies, within the local group or nearby, have been detected at γ-rays (e.g., ). These galaxies lie along a correlation line of the versus the infrared (or radio) luminosities <cit.>, which is considered to indicate the high cosmic-ray densities related to their star-formation property. However along with this understanding, there are complications. Some of the galaxies contain an active nuclear or other related components (e.g., the jet or outflow), which can contribute emissions. The extreme cases are NGC 3424 and UGC 11041, which made themselves detectable with the LAT by having flare-like events <cit.>. Another interesting case is , as it has been shown that its emission is more intense than that expected from modeling of the starburst and related cosmic-ray density <cit.>. Alternative sources that possibly power the emission have been proposed, such as the jet <cit.> or the outflow <cit.> in it. 
Recently, very-high-energy (VHE) TeV neutrino emission has been detected from this galaxy by the IceCube Neutrino Observatory (IceCube; ), making it the second known extra-galactic neutrino-emitting source. By taking the neutrino emission into consideration, <cit.> have considered a combination of two zones, the starburst region plus the nuclear corona, and <cit.> have suggested the disk winds as the power source. To fully understand the physical properties of NGC 1068 and other galaxies of the same group (e.g., ), we have conducted detailed studies of their emissions, in which we focused on the variability analysis. In this paper, we report our analysis results for NGC 1068 and another nearby starburst galaxy, NGC 253. Spectral and/or flux variations were found, which add more features for helping our understanding of the possible physical processes occurring in them. Below we first briefly summarize the properties of the two galaxies related to this work in Section <ref>. The analysis and results are presented in Section <ref>. In Section <ref>, we discuss the results. §.§ Properties of NGC 1068 and NGC 253 At a distance of ∼14.4 Mpc <cit.>, NGC 1068 is a nearby galaxy that has been extensively studied at multi-wavelengths (e.g., and references therein). It contains an active nucleus in the center, viewed by us with an angle of ∼70° <cit.> and thus classified as type 2. Its GeV emission was detected in early observations with the LAT <cit.>, and no >200 GeV very-high-energy (VHE) emission has been detected <cit.>. Given that it hosts an active galactic nucleus (AGN), its GeV emission was searched for variability in studies <cit.>, but no significant variability was found. NGC 253 is one of the nearest galaxies to us (at a distance of 3.5±0.2 Mpc; ) and has also been extensively studied at multi-wavelengths. It is nearly edge-on with an inclination angle of i ∼ 72° (e.g., ) and appears with an approximate angular size of 0.48°×0.10° in the optical <cit.>. One particular feature of it is its starburst in the nuclear region (e.g., ). Related to the starburst, VHE and GeV emission from it has been detected with the ground-based High Energy Stereoscopic System (HESS; ) and the LAT <cit.>, respectively. Whether this galaxy contains an AGN or not is under investigation (e.g., and references therein). § DATA ANALYSIS AND RESULTS §.§ Fermi-LAT Data and Source Model We selected 0.1–500 GeV LAT events (evclass=128 and evtype=3) from the updated Fermi Pass 8 database in a time range from 2008-08-04 15:43:36 (UTC) to 2023-04-06 00:00:00 (UTC), approximately 14.7 yr. The region of interest (RoI) for each target was set to be 20°×20° centered at the center of the galaxy. For NGC 1068, R.A. = 40.6696° and Decl. = -0.01329° (equinox J2000.0), and for NGC 253, R.A. = 11.8881° and Decl. = -25.2888° (equinox J2000.0). We excluded the events with zenith angles greater than 90° to reduce the contamination from the Earth limb and included those with good time intervals (selected with the expression DATA_QUAL > 0 && LAT_CONFIG = 1). The instrumental response function P8R3_SOURCE_V3 and the software package Fermitools–2.2.0 were used in the analysis. Source models were generated based on the Fermi LAT 12-year source catalog (4FGL-DR3; ) by running the script make4FGLxml.py. They each included all sources in 4FGL-DR3 within a radius of 25° centered at each of the targets. The positions and the spectral parameters of the sources are provided in the catalog.
In our analysis, the spectral parameters of the sources within 5° of a target were set free and the other parameters were fixed at their catalog values. The spectral model gll_iem_v07.fits was used for the Galactic diffuse emission, and the spectral file iso_P8R3_SOURCE_V3_v1.txt for the extragalactic diffuse emission. The normalizations of these two diffuse components were set free in our analysis. For each of the two targets, we set baseline spectral models following those given in the catalog. The counterpart to NGC 1068 was modeled as a point source with a power-law (PL) spectrum, dN/dE = N_0 (E/E_0)^-Γ, where E_0 was fixed at the catalog value 1018.59 MeV, and the counterpart to NGC 253 was modeled as a point source with a log-parabola (LP) spectral form, dN/dE = N_0 (E/E_b)^-[α +βln(E/E_b)], where E_b was fixed at the catalog value 1048.36 MeV. §.§ Analyses for NGC 1068 §.§.§ Likelihood Analysis and Spectrum Extraction Setting a PL model for NGC 1068, we performed the standard binned likelihood analysis of the data in 0.1–500 GeV. The obtained results, given in Table <ref>, are consistent with the catalog values. We then extracted the γ-ray spectrum of NGC 1068 by performing maximum likelihood analysis of the LAT data in 10 energy bins evenly divided in logarithm from 0.1 to 500 GeV. In the extraction, only the spectral normalizations of the sources within 5° of NGC 1068 were set free, and all other spectral parameters of the sources in the source model were fixed at the best-fit values obtained from the likelihood analysis. For the obtained spectral data points, we kept those with TS≥4 and derived the 95% flux upper limits otherwise. The spectrum and the PL model fit are shown in Figure <ref>. We noted that while the spectrum can be relatively well described with the PL model, the spectral data points with energies approximately greater than 2 GeV appear flat, which provided a hint for our following analysis. §.§.§ Light curve analysis We extracted the source's 0.1–500 GeV light curve by setting a 180-day time bin and performing the maximum likelihood analysis on the data of each time bin. In the extraction, only the normalization parameters of the sources within 5° of NGC 1068 were set free and the other parameters were fixed at the best-fit values obtained from the above maximum likelihood analysis. The extracted light curve is shown in the top panels of Figure <ref>. The light curve shows possible variations, but when we used the variability index TS_var <cit.> to evaluate them, no significant variations were found, as TS_var≃37.5 is lower than the threshold value of 49.6 (at a 99% confidence level, for 29 degrees of freedom). Given the spectrum of the source and our suspicion about the spectral flatness above 2 GeV, we also extracted a 2–500 GeV light curve (bottom panels of Figure <ref>) with the same setup as the above. In this high-energy range, two data points (in particular their corresponding TS values), one at the beginning (∼MJD 54770) and one in the middle (∼MJD 57650), possibly indicate two flaring events. However, by calculating TS_var, we noted that the light curve did not show significant variations, as TS_var≃29.5. In any case, we refined the time periods of the two possible events by obtaining a smooth light curve, for which a time bin of 180 days with a moving step of 30 days was used. The obtained smooth light curve and TS curve are shown in Figure <ref>. The two time bins with the highest TS values were thus found to be MJD 54712.7–54892.7 and MJD 57502.7–57682.7. 
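The variability index used above is, in essence, a likelihood-ratio statistic summed over the light-curve bins and compared against a χ^2 threshold (49.6 for 29 degrees of freedom at the 99% confidence level, as quoted above). The following is a minimal Python sketch of that comparison; the per-bin log-likelihood values are hypothetical placeholders, and the systematic-error weighting used in the LAT catalog definition of TS_var is omitted for simplicity.

import numpy as np
from scipy.stats import chi2

def ts_var(logL_free, logL_fixed):
    # Twice the summed log-likelihood difference between per-bin fits with the
    # bin flux free and fits with the flux fixed to the global (constant) value.
    return 2.0 * np.sum(np.asarray(logL_free) - np.asarray(logL_fixed))

n_bins = 30                             # ~14.7 yr divided into 180-day bins
threshold = chi2.ppf(0.99, n_bins - 1)  # ~49.6 for 29 degrees of freedom
rng = np.random.default_rng(1)          # hypothetical per-bin log-likelihoods
logL_fixed = rng.normal(0.0, 1.0, n_bins)
logL_free = logL_fixed + np.abs(rng.normal(0.0, 0.5, n_bins))
print(ts_var(logL_free, logL_fixed), threshold)

A value of TS_var below the threshold, as with the values of ≃37.5 and 29.5 obtained above, indicates no significant variability at the 99% confidence level.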
We defined the first and second time bins as P_1 and P_2, respectively, and the rest of the time periods as the `quiescent' P_q. §.§.§ Likelihood and Spectral Analysis for P_q, P_1, and P_2 We performed likelihood analysis of the data in P_q, P_1, and P_2, while a PL spectral model was still used for NGC 1068. The obtained results are given in Table <ref>. As can be seen, the emissions in P_1 and P_2 (Γ≃2.02 and 1.85, respectively) were harder than that in P_q (Γ≃2.43). The significances of the spectral differences were ≃2.2σ and 3.2σ, respectively. We extracted spectra from the data during the three time periods to check the likelihood results. The same setup and procedure as in Section <ref> were used. We kept the spectral data points with TS ≥ 4 and showed the derived 95% flux upper limits otherwise. The obtained spectra are shown in Figure <ref>, with the spectra of P_1 and P_2 compared to that of P_q. As can be seen, the spectral models from the likelihood analysis can generally describe the spectra, while only a ∼10 GeV spectral data point in P_1 appears to deviate slightly from the model fit. §.§.§ Summary of the results By checking the ≥2 GeV emission of NGC 1068, we found two half-year periods during which the TS values appeared larger than those of the rest of the data. The corresponding spectra were found to be harder above 2 GeV than the spectrum in quiescence, i.e., containing more higher-energy photons, and in one time period the deviation of the spectrum from the latter reached 3.2σ. Thus, likely variations in the ≥2 GeV energy range were found for NGC 1068. §.§ Analyses for NGC 253 §.§.§ Likelihood and spectral analysis Using the source model described in Section <ref>, we performed the standard binned likelihood analysis of the data in 0.1–500 GeV. The obtained results, given in Table <ref>, are consistent with the values given in 4FGL-DR3, while it can be noted that β is quite small, suggesting that a PL model could likely provide an equally good fit. We then extracted the γ-ray spectrum of NGC 253 by performing maximum likelihood analysis of the LAT data in 10 energy bins evenly divided in logarithm from 0.1 to 500 GeV. The same setup and procedure as in Section <ref> were used. For the obtained spectral data points, we kept those with TS≥4 and uncertainty/flux ≤ 50%, and derived the 95% flux upper limits otherwise. The spectrum and the LP model fit are shown in Figure <ref>. We noted that while the spectrum can be relatively well described with the LP model, there is a possible flux drop-off at ≳5 GeV, which provided a hint for our following analyses. §.§.§ Light curve analysis We extracted the source's 0.1–500 GeV light curve using the same setup and procedure described in Section <ref>. The extracted light curve is shown in Figure <ref>. The variability index TS_var <cit.> was determined to be TS_var≃24.9, lower than the threshold value of 49.6 (at a 99% confidence level for 29 degrees of freedom). Thus, no variability was found over the whole data set. Given the spectrum of the source and its possible spectral drop-off above 5 GeV, we also extracted a 5–500 GeV light curve (bottom panels of Figure <ref>) with the same setup as the above. In this high-energy range, a `turn-off' seems to have occurred around ∼MJD 57000, as the TS data points after that date are mostly lower than those before it. However, TS_var≃30.7 was obtained for the light curve, not indicating significant variations. 
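As an aside on the two spectral forms used in this work, the following minimal sketch (the normalization and index are hypothetical; E_b is the catalog value quoted earlier) illustrates why a small β makes the LP and PL fits nearly equivalent: the two shapes coincide at E_b and differ elsewhere only by the factor exp[-β ln^2(E/E_b)].

import numpy as np

def pl(E, N0, E0, gamma):
    # power law: dN/dE = N0 * (E/E0)**(-gamma)
    return N0 * (E / E0) ** (-gamma)

def lp(E, N0, Eb, alpha, beta):
    # log parabola: dN/dE = N0 * (E/Eb)**(-(alpha + beta*ln(E/Eb)))
    return N0 * (E / Eb) ** (-(alpha + beta * np.log(E / Eb)))

E = np.logspace(2.0, np.log10(5.0e5), 50)  # 0.1-500 GeV, in MeV
Eb = 1048.36                               # MeV, catalog value for NGC 253
N0, alpha = 1.0e-11, 2.2                   # hypothetical values
ratio = lp(E, N0, Eb, alpha, beta=0.01) / pl(E, N0, Eb, alpha)
print(ratio.min(), ratio.max())            # equals exp(-beta*log(E/Eb)**2), close to 1 near Eb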
In any case, we defined the two time periods based on the light curve, phase 1 (P1) in MJD 54682.7–57022.7 and phase 2 (P2) in MJD 57022.7–60040.0. §.§.§ Likelihood and spectral analysis for P1 and P2 In order to check possible property differences between the two time periods, we first performed likelihood analysis of their data. For the target, we tested both LP and PL spectral models. The results are summarized in Table <ref>. We found that in both P1 and P2 the PL model was preferred, given that the two models yielded the same likelihood values. The obtained Γ values are consistent with each other within uncertainties. Spectra of P1 and P2 were extracted from the data during the two time periods. The setup and procedure were the same as in Section <ref>. As shown in Figure <ref>, the difference lies in the last two spectral data points (i.e., >5 GeV). We thus calculated the TS maps centered at NGC 253 in the >5 GeV energy range during P1 and during MJD 57022.7–59362.7 (hereafter P2e), where the latter was set such that its length equals that of P1 for direct comparison. The TS maps are shown in Figure <ref>. As can be seen, the TS value is ∼114 in P1 but only ∼26 in P2e, indicating a significant flux drop. We performed the likelihood analysis of the 5–500 GeV data in P1 and P2 (P2e), and obtained fluxes of 11.8±2.4 × 10^-11 and 5.6±1.5 × 10^-11 photon cm^-2 s^-1 (4.1±1.5 × 10^-11 photon cm^-2 s^-1), respectively. The flux values are consistent with the TS values, as the flux in P2 or P2e is approximately half of that in P1. However, because the TS value in P2 is low, the flux has a large uncertainty, making the flux difference not significant when directly comparing the fluxes in P1 and P2 (as similarly indicated by the spectral data points in Figure <ref>). §.§.§ Additional analyses To aid the discussion of the emission of NGC 253, we conducted additional analyses and provide the results in this section. First, we checked whether the emission could be extended. In this analysis, the whole 0.1–500 GeV data set was used. We set a uniform-disk model for the emission in the source model, with its radius ranging from 0.1° to 1.0° in steps of 0.1°. Performing the likelihood analysis, we obtained the likelihood values for the different radii and compared them to that for a point source. The analysis and comparison indicated that the emission is consistent with arising from a point source. Second, we determined the position of the ≥5 GeV emission of NGC 253, to check whether the high-energy emission is consistent with arising from the center of the galaxy. Running gtfindsrc on the data, we obtained R.A. = 11.90° and Decl. = -25.28° (equinox J2000.0), with a 2σ uncertainty of 0.04°. The error circle contains the central position of NGC 253 (Figure <ref>). Note that, given the LAT's ∼0.25° containment angle at 5 GeV[https://www.slac.stanford.edu/exp/glast/groups/canda/lat_Performance.htm], which is smaller than the major axis of the galaxy, the emission should arise from the central region. To further check this, we calculated a ≥10 GeV TS map (Figure <ref>); at these energies the containment angle is ≲0.2°. The TS map shows that the extent of the emission appears smaller than the major axis of the galaxy. §.§.§ Summary of the results By checking the γ-ray emission of NGC 253, we found that its ≥5 GeV part had a flux drop around MJD 57023, with the flux before that date being ∼2 times that after it. This change is clearly revealed by the TS maps calculated for the two time periods. 
Thus, high-energy flux variations on long time scales likely exist in the γ-ray emission of this galaxy. In addition, the emission in 0.1–500 GeV is consistent with that of a point source, and the higher-spatial-resolution TS maps obtained at ≥5 GeV or ≥10 GeV energies indicate that the emission likely arises from the central region of NGC 253. § DISCUSSION For the purpose of fully studying the γ-ray emissions of the nearby galaxies NGC 1068 and NGC 253, which can potentially help improve our understanding of the physical processes in them, we carried out detailed analysis. In their emissions, we found possible spectral or flux variations in the ≥2 GeV or ≥5 GeV high-energy ranges. Below we discuss the possible implications of the findings for the two galaxies. Because of the presence of the variable components, additional processes are likely required to explain the emissions from them. For NGC 1068, we suggest that emission from its jet may contribute to the observed γ-ray emission. The nature of the ≥5 GeV variations seen in NGC 253 is not clear, but they may be related to its potential AGN. §.§ NGC 1068 From the beginning, when γ-ray emission was detected from NGC 1068, it was realized that the emission is more intense than that estimated from typical starburst-induced radiation models <cit.>. Thus alternative possibilities have been proposed. <cit.> used a jet model, as a kpc-scale radio jet is seen as a component of the galaxy <cit.>. <cit.> then considered the AGN-driven outflow <cit.> that induces shocks to produce high-energy particles. Recently, to explain the observed neutrino emission, high-energy particles have been suggested to be produced in the corona around the central supermassive black hole <cit.>. While these particles emit γ-rays in the same processes that produce the neutrinos, the optical depth to the high-energy photons is high (e.g., ) and thus the photons cannot escape the central region until they cascade down to MeV energies. By combining the processes of the corona with those of the starburst, <cit.> provided a fit to both the γ-ray and neutrino spectra, with the γ-ray spectrum dominantly contributed by the starburst component. Alternatively, <cit.> proposed disk winds as the power source for the neutrino and γ-ray emissions, with the latter arising from the interaction of the outgoing wind with the surrounding torus. We note that <cit.> very recently conducted analysis of the γ-ray emission of NGC 1068, mainly by extending the spectrum to energies as low as 20 MeV. They argued that the low-energy, ≤500 MeV part of the emission comes from the corona and the other part from the starburst. Our results also suggest that the γ-ray emission probably contains multiple components. As typical starburst models generally work, as shown by studies of other starburst galaxies (e.g., ), a component from the starburst should be present. Then, at ≳2 GeV energies, our finding of flare-like events suggests that there is another component. Given the variations, it possibly comes from the jet (it is not clear whether the outflow scenario would produce variable emission), since jet emission is well known to be highly variable. In the model of <cit.>, the γ-ray emission is due to the external inverse Compton (EIC) process, from a blob of plasma that has a bulk Doppler factor δ_b and a radius r_b. Taking half a year as the variability time scale Δ t, we may estimate the size of the blob to be ∼ cΔ tδ_b≃ 0.15δ_b pc, where c is the speed of light. 
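For concreteness, the arithmetic behind this light-crossing estimate is the following (only the half-year variability time scale is taken from the modeling above): with Δ t = 0.5 yr ≃ 1.58×10^7 s and 1 pc ≃ 3.09×10^18 cm, one has cΔ tδ_b ≃ (3.0×10^10 cm s^-1)×(1.58×10^7 s)×δ_b ≃ 4.7×10^17 δ_b cm ≃ 0.15 δ_b pc.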
This size is smaller than that (≃6.5 pc) estimated from the spectral-energy-distribution (SED) fitting in <cit.>, but we note that there are large uncertainties in the model parameters and that the size (or the variability time scale) is in a reasonable range for this type of emission region. For the two variation events, we checked multiwavelength data for any possible correlated variations, in particular in X-rays. <cit.> reported a hard X-ray light curve in 14–195 keV obtained with the Burst Alert Telescope (BAT) onboard the Neil Gehrels Swift Observatory (Swift), and the light curve ends at ∼MJD 56500. The Monitor of All-sky X-ray Image (MAXI; ) mission also detects the galaxy and provides a 2–20 keV light curve. However, no correlated variations were found in these two light curves. We note that <cit.> reported a transient flux increase in ≥20 keV hard X-rays, but the event was detected in 2014 August (before the second variation event) and was likely due to a temporary decrease of the column density towards the AGN. We also note that if the variable part of the γ-ray emission comes from the jet, radio monitoring of the jet might be able to provide supporting evidence by detecting similar variations. §.§ NGC 253 As the starburst models can explain the observed GeV and VHE emissions from NGC 253 (e.g., ) and no flux variations would arise from this sort of starburst-induced emission, it is unexpected to find the variation reported in this work. Also, both the VHE detection <cit.> and our analysis results place the emission region at the center of the galaxy. We thus suspect that the variation is possibly related to the potential AGN that has been searched for (e.g., ). From the work reported by <cit.>, we can see that there are four relatively bright X-ray sources in the center, with one of them (N-2003) suggested as an AGN candidate. We checked the hard X-ray light curve of NGC 253 reported from the Swift/BAT Hard X-ray Transient Monitor program <cit.>, but no significant variations could be seen in the light curve. We also checked the X-ray data from targeted observations with the Swift X-ray Telescope (XRT), and no variation patterns were seen in the flux measurements of the central X-ray source (which presumably consists of the four sources) before and after ∼MJD 57023 (2015 January 1). There are also many sets of archival data from Chandra X-ray observations of NGC 253, but upon checking the data for the resolved X-ray sources, no clear correlated variations were found. Finally, we note that the VHE emission may be closely connected with the ≥5 GeV part. In Figure <ref>, we show the HESS VHE data points of the galaxy <cit.>, which were obtained from data collected in 2005 and 2007–2009. The ≥5 GeV spectral data points in P2 appear to be at the same flux level as the VHE data points. If they are closely connected, similar variations might be seen in the VHE emission. Unfortunately, HESS did not conduct any further observations of the galaxy after 2009. Hopefully, with other VHE facilities at work, possible variations of the VHE emission may be detected in the near future. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This research has made use of the MAXI data provided by RIKEN, JAXA and the MAXI team. This research is supported by the Basic Research Program of Yunnan Province (No. 
202201AS070005), the National Natural Science Foundation of China (12273033), and the Original Innovation Program of the Chinese Academy of Sciences (E085021002).
http://arxiv.org/abs/2307.04623v1
20230710150910
The precise form of Ahlfors' second fundamental theorem
[ "GUang-Yuan Zhang" ]
math.CV
[ "math.CV", "[2020] 30D35, 30D45, 52B60" ]
The precise form of Ahlfors' second fundamental theorem]The precise form of Ahlfors' second fundamental theorem Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China. Email: [email protected] Project 12171264 & 10971112 supported by NSFC Let S=ℂ be the unit Riemann sphere. A simply connected covering surface over S is a pair Σ=( f,U) , where U is a Jordan domain in ℂ and f:U→ S is an orientation-preserving, continuous, open and finite-to-one mapping (OPCOFOM). Let 𝐅 be the space of all simply connected covering surfaces over S, for each Σ=( f,U) ∈𝐅 let A(Σ) and L(∂Σ) be the area and boundary length of Σ, weighted according to multiplicity, and for each 𝔞∈ S let n( Σ,𝔞) =#f^-1( 𝔞) ∩ U be the cardinality of the set f^-1( 𝔞) ∩ U. The Second Fundamental Theorem (SFT) of Ahlfors' covering surface theory is that, for any set E_q={𝔞 _1,𝔞_2,…,𝔞_q} of distinct q(≥3) points on S, there exists a positive constant h, which depends only on E_q, such that for any Σ∈𝐅, (q-2)A(Σ)≤4πn(Σ,E_q)+hL(∂Σ), where n(Σ,E_q)=∑_v=1^qn(Σ ,𝔞_v). The goal of this paper is to develop a new method to give the precise bound H_0 of h. We write R(Σ)=R(Σ,E_q) the error term of Ahlfors' SFT, say R(Σ)=( q-2) A(Σ)-4πn( Σ ,E_q) . Our first main result is that for a.e. L∈(0,+∞), there exists an extremal surface Σ_0 in 𝐅( L) ={Σ∈𝐅:L(∂Σ)≤ L}, say H_L=sup{R(Σ)/L(∂Σ):Σ∈𝐅( L) } =R(Σ_0)/L(∂Σ_0). Our second main result is that there exists a subspace 𝒮_0 of 𝐅, consisted of very simple surfaces, and there exists a 4π-extremal surface Σ_1∈𝒮_0, say, H_0=lim_L→∞H_L=sup{R(Σ)/L(∂Σ):Σ∈𝐅} =R(Σ_1)+4π/L(∂Σ_1). Our third main result is that among all 4π-extremal surfaces, there exists one which is the simplest (such surfaces may not be unique). Simplest 4π-extremal surfaces can be used to give the precise bound H_0 as simple as possible. [2020] 30D35, 30D45, 52B60 [ Guang Yuan Zhang August 12, 2023 ===================== § INTRODUCTION We first recall some notations used in <cit.>. The Riemann sphere S is the unit sphere in ℝ^3 centered at the origin which is identified with the extended plane ℂ via stereographic projection P:S→ℂ as in <cit.>. Length and area on S have natural interpretations using the spherical (chordal) metric on ℂ: ds=ρ(z)|dz|=2/1+|z|^2|dz|,z∈ℂ. For a set V on S, ∂ V denotes its boundary and V its closure. We write Δ={z∈ℂ :|z|<1}. Then for the upper and lower open hemispheres S^+ and S^- on S, we have P( S^+) =( ℂ \Δ) ∪{∞} and P(S^-)=Δ. A topological triangle γ on S is a Jordan curve equipped with three vertices lying on γ. The three vertices define three (compact) edges of γ, the two components of S\γ are called topological triangular domains, and the vertices and edges of γ are also called vertices and edges of the topological triangular domains. If each of the three edges of γ is on a great circle on S, then γ is called a triangle and the two components of S\γ are called triangular domains. A (covering) surface Σ (over the sphere S) is defined as in Ahlfors' paper <cit.>: Σ is sewn from a finite number of compact topological triangular domains on S (see Remark <ref> A(3) for details). Equivalently speaking, Σ is defined to be a pair ( f,U) , where U is a domain[A domain means a connected open subset of ℂ.] in ℂ so that U has a finite topological triangulation ∪_j=1^m_0U_j and for each j=1,…,m_0,f|_U_j:U_j→ f(U_j)⊂ S is a homeomorphism. Moreover, f is locally homeomorphic on U, except at a finite number of points, which are some vertices of the triangulation {U_j}. 
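As a quick illustration of the spherical metric introduced above (a routine computation, not part of the original statements), the length of the segment [0,1] and of a full great circle follow by direct integration along the real axis, which passes through 0 and ∞: L([0,1]) = ∫_0^1 2 dt/(1+t^2) = 2 arctan 1 = π/2, while the whole great circle has length ∫_-∞^+∞ 2 dt/(1+t^2) = 2π, in agreement with the elementary facts listed below.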
A mapping from a compact subset K of ℂ into S is called an orientation-preserving, continuous, open and finite-to-one mapping (OPCOFOM) if it can be extended to be an OPCOFOM from a neighborhood of K in ℂ into S. Then a surface Σ=( f,U) in the definition can be regarded as an OPCOFOM from U into S, and vice versa. The term orientation-preserving means that P^-1∘ f is orientation-preserving. In this setting, we call the pair ∂Σ=(f,∂ U) the boundary of the surface Σ, and define A(Σ)=A(f,U) and L(∂Σ)=L(f,∂ U), respectively, area and length, weighted according to multiplicity. For instance, A(g,Δ)=L(g,∂Δ)=6π when g(z)=z^3 ,z∈Δ. Here we list some elementary results on the sphere S as in <cit.>: any great circle on S has length 2π, A(S)=2A(S^+)=2A(S^-)=4π, L([0,+∞])=2L([0,1])=2L([1,+∞ ])=π, the circle on S with spherical diameter [0,1] or [1,+∞], has length √(2)π, and a disk in a hemisphere on S with perimeter L has area 2π-√(4π^2-L^2). All surfaces in this paper are covering surfaces defined above. Let Σ=(f,U) be a surface over S. (1) Σ is called a closed surface, if U=ℂ=S. For a closed surface Σ, we have ∂Σ=∅, and then L(∂Σ)=0. (2) Σ is called a simply-connected surface, if U is a simply connected domain. (3) 𝐅 denotes all surfaces such that for each Σ=( f,U) ∈𝐅, U is a Jordan domain. In <cit.>, it is assumed that U is a Jordan domain, when ( f,U) is a (covering) surface. But in this paper, there is no such restriction. It is permitted that even for a simply connected surface ( f,U), U is not a Jordan domain, it may be the domain between two tangent circles, for example. (A) Let K_1 and K_2 be two domains or two closed domains on S, such that ∂ K_1 and ∂ K_2 are both consisted of a finite number of disjoint Jordan curves. A mapping f:K_1→ K_2 is called a complete covering mapping (CCM), if (a) for each p∈ K_2 there exists a neighborhood V of p in K_2 such that f^-1(V)can be expressed as a union ∪_j∈𝒜U_j of disjoint (relative) open sets of K_1, and (b) f|_U_j:U_j→ V is a homeomorphism for each j∈𝒜. (B) We call f a branched complete covering mapping (BCCM), if all conditions of (A) hold, except that (b) is replaced with (b1) or (b2): (b1) If both K_1 and K_2 are domains, then for each j∈𝒜, U_j∩ f^-1(p) contains only one point a_j of f^-1(p), and there exist two homeomorphisms φ_j:U_j→Δ,ψ_j:V→Δ with φ_j( a_j) =ψ_j( p) =0, such that ψ_j∘ f|_U_j∘φ_j^-1(ζ)=ζ^k_j,ζ∈Δ,where k_j is a positive integer; or (b2) if both K_1 and K_2 are closed domains, then f|_K_1^∘:K_1^∘→ K_2^∘ satisfies (b1) and moreover, f restricted to a neighborhood of ∂ K_1 in K_1 is a CCM onto a neighborhood of ∂ K_2 in K_2. (C) For a surface Σ=( f,U)over S, f is in general not a CCM or BCCM. When f( z) =z^2, both f:Δ→Δand f:Δ→Δ are BCCMs, but when f( z) =z^-1( z-1/2/1-z/2) ^2, f:Δ→ f(Δ) is neither a CCM nor a BCCM. Nevanlinna theory of value distribution of meromorphic functions (<cit.>, <cit.>, <cit.>, <cit.>) and Ahlfors theory of covering surfaces (<cit.>, <cit.>, <cit.>, <cit.>) are two major events in the history of the development of function theory. The most striking result of Nevanlinna theory is the Second Fundamental Theorem. Ahlfors' theory is a geometric version of Nevanlinna's, in which Nevanlinna's Second Fundamental Theorem is reinterpreted as follows. Theorem A. (Ahlfors' Second Fundamental Theorem (SFT) <cit.>). 
For an arbitrarily given set E_q={𝔞_1,𝔞_2,…,𝔞 _q} of distinct q( ≥3) points on S, there exists a positive constant h such that for any surface Σ=(f,U)∈𝐅, (q-2)A(Σ)≤4πn(Σ)+hL(∂Σ), where n(Σ)=n(Σ,E_q)=∑_v=1^qn(Σ,𝔞_v), n(Σ,𝔞 _v)=#f^-1(𝔞_v)∩ U, and # is the cardinality. The goal of this paper is to present a method to identify the precise bound for that h in Ahlfors' SFT for 𝐅. This problem can be traced back to the early 1940s, when J. Dufresnoy first gave a numerical estimate of h in <cit.> as follows. Theorem B. (Dufresnoy <cit.>) For any surface Σ=(f,U)∈𝐅 , (q-2)A(Σ)≤4πn(Σ,E_q)+( q-2) 6π/δ_E_qL(∂Σ), where δ_E_q=min_1≤ i<j≤ qd( 𝔞 _i,𝔞_j) . Here d( 𝔞_i,𝔞_j) is the spherical distance on S, which is the minimum of length of all paths on S from 𝔞_ito 𝔞_j. In 2011, the author have identified the precise bound for h in a special case as follows. Theorem C. (Zhang <cit.>) For any surface Σ=(f,U)∈𝐅 with n( Σ,{0,1,∞}) =∅, we have A(Σ)≤ h_0L(∂Σ), where h_0=max_θ∈[ 0,π/2] h(θ), h(θ)=A( 𝔇( 0,1,θ ,θ) ) +4π/L(∂𝔇( 0,1,θ,θ) )=( π+θ) √(1+sin^2θ)/arctan√(1+sin^2θ)cosθ -sinθ, and 𝔇( 0,1,θ,θ) is the lens on the sphere S enclosed by two symmetric circular arcs with endpoints 0 and 1 and with interior angle 2θ at the cusps (see Definition <ref>). Moreover, the bound h_0 is precise: there exists a sequence Σ_n∈𝐅 with n(Σ_n ,{0,1,∞})=∅ such that A(Σ_n)/L(∂Σ_n)→ h_0 as n→∞. The proof of (<ref>) occupied almost all space of the long paper <cit.>, in which it is pointed out that h_0 can be easily found and (<ref>) can be easily proved, provided that for every given L>0, the extremal surface Σ_L=( f_L,Δ) with L(∂Σ_L)≤ L so that f_L(z)≠0,1,∞ and that A(Σ_L) assumes the maximal value exists. In fact, such extremal surfaces have a lot of good properties, for example, the boundary ∂Σ_L is consisted of circulars arcs with the same curvature and each of these arcs contains two endpoints {0,1} or {1,∞}. From this property, after a little argument as in the introduction section of <cit.> one can compute the precise bound. However, at that time we couldn't prove the existence of the extremal surface Σ_L and left it as a conjecture. We were lucky to find a substitute for the extremal surface, and finally we have successfully proved the optimality of h_0 in <cit.>. But the method in <cit.> is difficult to be applied to the general case, that is, in (<ref>), q≥3, the position of 𝔞_vs in E_q={𝔞_1,𝔞 _2,…,𝔞_q} are arbitrarily given and n (Σ,E_q) is not assumed equal to zero. So, to identify the precise bound of h in (<ref>) in general case, it seems the simplest way is to solve the existence of extremal surfaces. Consider the general case that q≥3, E_q={𝔞_1,𝔞_2,…,𝔞_q} and L>0 are given arbitrarily. We introduce the following notations R(Σ)=R(Σ,E_q)=(q-2)A(Σ)-4πn(Σ,E_q), which is the error term of Ahlfors' SFT, 𝐅(L)={Σ∈𝐅:L(∂Σ)≤ L}, H(Σ)=H(Σ,E_q)=R(Σ,E_q)/L(∂Σ), H_0=H_0(E_q)=sup_Σ∈𝐅H(Σ), and H_L=H_L(E_q)=sup_Σ∈𝐅(𝐋)H(Σ). Then it is clear that H_L increase with respect to L, H_0=lim_L→∞H_L, and Ahlfors' SFT can be restated as H_0=H_0(E_q)<+∞. Then our goal becomes to prove the existence of extremal surfaces in 𝐅( L) and present an achievable method that gives the precise value of H_0. For this purpose, we introduce some more terminology and notations. Let ℒ be the set of continuous points of H_L=H_L(E_q), with respect to L. Since H_L increase with respect to L, it is clear that ( 0,+∞) \ℒ is just a countable set. 
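To make these constants concrete, the following minimal Python sketch (the particular choice of E_q is hypothetical; the chordal-to-arclength conversion is the standard one for the unit sphere under stereographic projection) evaluates the spherical distance d(·,·), the minimal separation δ_E_q, and the coefficient (q-2)6π/δ_E_q appearing in Dufresnoy's bound.

import math
from itertools import combinations

def sph_dist(z1, z2):
    # great-circle distance on the unit sphere S between the stereographic
    # images of z1 and z2; use the string 'inf' for the point at infinity
    if z1 == 'inf' and z2 == 'inf':
        return 0.0
    if z1 == 'inf' or z2 == 'inf':
        z = z2 if z1 == 'inf' else z1
        chord = 2.0 / math.sqrt(1.0 + abs(z) ** 2)
    else:
        chord = 2.0 * abs(z1 - z2) / math.sqrt((1 + abs(z1) ** 2) * (1 + abs(z2) ** 2))
    return 2.0 * math.asin(min(1.0, chord / 2.0))

# sanity checks against the elementary facts listed earlier
assert abs(sph_dist(0, 1) - math.pi / 2) < 1e-12   # L([0,1]) = pi/2
assert abs(sph_dist(0, 'inf') - math.pi) < 1e-12   # antipodal points

E_q = [0, 1, 'inf', 2 + 1j]   # a hypothetical choice of q = 4 points
q = len(E_q)
delta = min(sph_dist(a, b) for a, b in combinations(E_q, 2))
print(delta, (q - 2) * 6 * math.pi / delta)

Such a computation gives, for any prescribed E_q, the explicit constant in Dufresnoy's inequality, and hence an admissible value of h; the question addressed in this paper is the precise (smallest) bound H_0.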
For any two non-antipodal points p and q on S, pq is the geodesic on S from p to q: the shorter of the two arcs with endpoints p and q of the great circle on S passing through p and q. Thus d(p,q)<π and pq is uniquely determined by p and q. An arc of a great circle on S is called a line segment on S, and to emphasize this, we also refer to it as a straight line segment. For the notation pq, when p and q are explicit complex numbers we write p,q, to avoid ambiguity such as 123=12,3 or 1,23. When p and q are two antipodal points on S, pq is not unique and d( p,q) =π. To avoid confusions, when we write pq, or say pq is well defined, we always assume d( p,q) <π. All paths and curves considered in this paper are oriented and any subarc of a path or closed curve inherits this orientation. Sometimes paths and curves will be regarded as sets, but only when we use specific set operations and set relations. For an oriented circular arc c, the circle C containing c and oriented by c is called the circle determined by c. (1) For a Jordan domain D in ℂ, let h be a Möbius transformation with h(D)⊂Δ. Then ∂ D is oriented by h and the anticlockwise orientation of ∂ h(D). The boundary of every Jordan domain on S is oriented in the same way, via stereographic projection. (2) For a Jordan curve C on ℂ or S, the domain T_C bounded by C is called enclosed by C if the boundary orientation of T_C agrees with the orientation of C. (3) A domain D on S is called convex if for any two points q_1 and q_2 in D with d(q_1,q_2)<π, q_1q_2⊂ D; a Jordan curve on S is called convex if it encloses a convex domain on S; a path on S is called convex if it is locally an arc of a convex Jordan curve. (4) Let γ:[a,b]→ S be a path on S and p_0∈(a,b). γ is called convex at p_0, if γ restricted to a neighborhood I_δ=(p_0-δ,p_0+δ) of p_0 in (a,b) is a convex Jordan path, with respect to the orientation of γ when t∈ I_δ increases; and γ is called strictly convex at p_0 if for some δ>0 the restriction γ|_I_δ is convex and γ|_I_δ∩ S_1 =γ|_I_δ\{γ(p_0)} for some open hemisphere S_1 on S. By definition, the disk {z∈ℂ:|z|>2} is viewed as a convex domain on S and its boundary orientation is clockwise, and thus the circle |z|=2 oriented clockwise is also convex on S. Also by definition, the disk {z∈ℂ:|z|<2} is not convex on S and its boundary orientation is anticlockwise, and thus the circle |z|=2 oriented anticlockwise is not convex on S. By convention, an arc of a curve inherits the orientation of this curve. A Jordan curve on S is convex and lies in a hemisphere on S if it is locally convex (see Lemma 4.1 in <cit.> for polygonal Jordan curves on S). A locally convex curve on S is locally simple and always goes straight or turns left when viewing S from its interior, such as with planar convex polygons oriented anticlockwise. For a curve β, in ℂ or S, given by z=z(t),t∈ a,b], β^∘ denotes the interior of β, which is the restriction of β to the open interval (a,b), and ∂β denotes the set of the endpoints of β, which is either the singleton {z(a)} if β is closed, or the two points set {z(a),z(b)}. For a surface Σ=( f,U) , the interior of the surface Σ is the restricted open surface Σ^∘=( f,U) . The notation q∈Σ^∘ means a pair ( f,p) with q=f(p) for some p∈ U. When β is defined by z=e^it,t∈0,4π], for example, β^∘ is still well defined as z=e^it,t∈(0,4π), and thus as sets, β and β^∘ may coincide. 
For a Jordan curve α in ℂ, its partition is a collection {α_j }_j=1^n of its subarcs such that α=∪_j=1^nα_j and α_j^∘ are disjoint and arranged anticlockwise. In this setting we write α=α_1+α_2+⋯+α_n. Here α _j^∘ is the interior of α_j, which is now α_j without endpoints (since α is simple). A partition ∂Σ=γ_1+γ_2+⋯+γ_n of ∂Σ for a surface Σ=(f,U)∈𝐅 is equivalent to a partition ∂ U=α_1+α_2+⋯+α_n of ∂ U such that γ_j=(f,α_j) for j=1,…,n. Now we can introduce the subspace ℱ of 𝐅: We denote by ℱ the subspace of 𝐅 such that for each Σ=( f,U) ∈ℱ, ∂Σ has a partition ∂Σ=c_1+c_2+⋯+c_n of simple convex circular (SCC) arcs. This means that ∂ U has a partition ∂ U=α_1+α_2+⋯+α_n such that f restricted to each α_j is a homeomorphism onto the SCC arc c_j. ℱ( L) is the subspace of ℱ such that for each Σ∈ℱ( L) , L(∂Σ)≤ L. Let T be a Jordan domain on S which is a union of a finite number of disks D_j,j=1,…,m. Then T can be viewed as a surface in ℱ, if all disks D_j are convex on S, say, all disks are of diameter ≤π. For any surface Σ in 𝐅 and any ε>0, to estimate H(Σ) we may assume L(∂Σ)<+∞, for otherwise we have H(Σ)=0. Then for any ε>0, there are standard ways to show that there exists another surface Σ_ε in ℱ such that ∂Σ_ε is consisted of a finite number of line segments, L(∂Σ_ε)≤ L(∂Σ)+ε, A(Σ_ε)≥ A(Σ)-ε and n( Σ_ε) ≤n( Σ) . Then we have H(Σ_ε)≥( q-2) ( A(Σ )-ε) -4πn( Σ) /L(∂Σ)+ε→ H(Σ) as ε→0. Thus, we have H_0=H_0(E_q)=sup_Σ∈𝐅H(Σ)=sup_Σ∈ℱH(Σ). Define H_L=H_L(E_q)=sup_Σ∈ℱ(L)H(Σ). We call Σ_0∈ℱ(L) an extremal surface of ℱ(L), if H(Σ_0)=H_L. If Σ_0 is an extremal surface of ℱ(L) with minimal perimeter among all the extremal surfaces of ℱ(L), we call it a precise extremal surface of ℱ(L). By Remark <ref>, a precise extremal surface of ℱ(L) is also a precise extremal surface of 𝐅(L), if L∈ℒ. Our first main result is the following. For each L∈ℒ with L>0, there exists a precise extremal surface of 𝐅(L), and for every precise extremal surface Σ of 𝐅( L) we have Σ∈ℱ( L), and ∂Σ has a partition ∂Σ=C_1+C_2+⋯+C_N, such that the following holds. (i) All C_j,j=1,…,N, are SCC arcs and have the same curvature. (ii) At most one of C_j,j=1,…,N, is a major circular arc. (iii) Either ∂Σ is a simple circle containing at most one point of E_q, or N>1, #∂ C_j=2, ∂ C_j⊂ E_q and C_j^∘∩ E_q=∅, for j=1,…,N. (iv) For every j=1,…,N,C_j is contained in an open hemisphere S_j on S. Recall that ∂ C_j denotes the endpoints of C_j. A major circular arc means more than half of the circle that contains it, and a closed circular path is regarded major. From the above theorem we will find the simplest models to compute H_0=H_0(E_q). But we need introduce some more notations and terminologies. 𝒮_0=𝒮_0(E_q) is the space of all surfaces Σ=( f,Δ) in ℱ such that the following hold. (1) There exists a positive integer q^'=Q(Σ)∈{2,…,q} and ∂Σ has a partition ∂Σ=C_1( p_1,p_2) +⋯+C_q^'( p_q^',p_1) , such that for each j=1,2,…,q^', C_j=C_j( p_j ,p_j+1) is an SCC arc on S (C_q^'+1=C_1). (2) For each j=1,2,…,q^', the initial point p_j and the terminal point p_j+1 are two distinct points in E_q, and d( p_j,p_j+1) <π, say, p_jp_j+1 is well defined. (3) p_1,…,p_q^' are distinct each other. (4) _maxΣ=max{#f^-1(w)∩Δ:w∈ S\∂Σ}≤2q^'-2. (5) For each j=1,2,…,q^', C_j^∘⊂ S\ E_q. (6) For each j=1,2,…,q^', C_j is contained in an open hemisphere S_j on S. (7) All C_j,j=1,2,…,q, have the same curvature. (8) For each j=1,2,…,q^', C_j is either a minor arc or half of a circle. 
For each surface Σ in 𝒮_0=𝒮_0( E_q) , all the circular arcs of ∂Σ have the same curvature k=k( Σ,E_q) . Now we can state our second main result: (i) There exists a surface Σ_0 in 𝒮_0=𝒮_0(E_q) such that H_0=sup_F∈𝐅R(F)/L(∂ F)=H_𝒮_0 ^4π=max_F∈𝒮_0R(F)+4π/L(∂ F) =R(Σ_0)+4π/L(∂Σ_0). (ii) For each surface of 𝒮_0 satisfying (<ref>) the boundary is consisted of strictly convex circular arcs. (iii) k=k( Σ,E_q) is the same for all surfaces Σ satisfying (<ref>). For each Σ∈𝒮_0, the partition (<ref>), ignoring a permutation of subscripts like i_0+1,i_0 +2,…,i_0+q^' ( modq^') , is unique and so is the number q^' of terms. Then the boundary length of all surfaces in 𝒮_0 has a common upper bound, say, L(∂Σ_0)≤ qπ for all Σ_0∈𝒮_0. Though 𝐅( L) contains extremal surface when L∈ℒ by Theorem <ref> and (<ref>), it is proved in <cit.> that 𝐅 contains no extremal surface and, since H_L=sup{ H( Σ) ∈𝐅:L(∂Σ)≤ L} increases as a function of L∈( 0,∞) , for any sequence F_n∈𝐅,with H(F_n)=R(F_n)/L(∂ F_n)→sup_F∈𝐅 R(F_n)/L(∂ F_n)=H_0, we must have L(∂ F_n)→∞( n→∞) . Thus (<ref>) plays the role converting infinite problems into finite problems. To make (<ref>) easier to use, we will reduce the space 𝒮_0 as much as possible to make (<ref>) simpler: A surface Σ in 𝒮 _0=𝒮_0(E_q) satisfying (<ref>) will be called a 4π-extremal surface of 𝒮_0. A surface Σ of 𝒮_0 is called the simplest 4π-extremal surface of 𝒮_0 if the following hold. (1) Q(Σ)=Q(E_q)=min{Q(Σ^')=#∂Σ^'∩ E_q:Σ^' are 4π-extremal surface of 𝒮_0}. (2) L(∂Σ^')≥ L(∂Σ) for any 4π-extremal surface Σ^' of 𝒮_0 with Q( Σ^') =Q(Σ). (3) _max(Σ)≤_max(Σ^') for any 4π-extremal extremal surface Σ^' of 𝒮_0 with Q( Σ^') =Q(Σ)and L(∂Σ^')=L(∂Σ). 𝒮^∗=𝒮^∗(E_q) is defined to be the space of all the simplest 4π-extremal surfaces of 𝒮_0. Our third main theorem is the following For any set E_q of distinct q points on S,q≥3, 𝒮^∗=𝒮^∗(E_q)≠∅. To present some application of the last two main theorems, we introduce some more notations. For a path Γ on S or ℂ given by z=z(t),t∈ t_1,t_2], -Γ is the opposite path of Γ given by z=z(t_2-t+t_1),t∈ t_1,t_2]. A convex domain enclosed by a convex circular arc c and its chord I is called a lune and is denoted by 𝔇^'( I,c) ,𝔇^'( I,θ(c)) , 𝔇^'( I,L(c)) , or 𝔇^'( I,k(c)) , where θ is the interior angle at the two cusps, k is the curvature of c and I is oriented such that[The initial and terminal points of I and c are the same, respectively, in the notation 𝔇^'(I,θ), in other words, 𝔇^'(I,θ) is on the right hand side of I.] ∂𝔇^'( I,θ) =c-I. For two lunes 𝔇^'( I,θ_1) and 𝔇^'( -I,θ_2) sharing the common chord I we write 𝔇( I,θ_1,θ_2) =𝔇^'( I,θ_1) ∪ I^∘∪𝔇^'( -I,θ_2) and call the Jordan domain 𝔇( I,θ_1,θ _2) a lens. Then the notations 𝔇( I,l_1 ,l_2), 𝔇( I,c_1,c_2)and 𝔇( I,k_1,k_2) are in sense and denote the same lens, when l_j=L(c_j) and k_j=k( c_j) are the length and curvature of c_j, j=1,2, say, 𝔇( I,c_1,c_2) =𝔇( I,l_1,l_2) =𝔇( I,k_1,k_2) =𝔇^'( I,l_1) ∪ I^∘∪𝔇^'( -I,l_2) =𝔇^'( I,c_1) ∪ I^∘∪𝔇^'( -I,c_2) =𝔇^'( I,k_1) ∪ I^∘∪𝔇^'( -I,k_2 ) . For a lune 𝔇^'( I,τ) , whether τ denotes the length l, the angle θ, or the curvature k is always clear from the context, and so is for the lens 𝔇( I,τ _1,τ_2) . By definition we have 0<θ_j≤π for j=1,2, since 𝔇 ^'( I,θ) is convex, but for the domain 𝔇( I,θ_1,θ_2) it is permitted that θ_1 or θ_2 is zero, say 𝔇( I,θ _1,θ_2) reduces to 𝔇^'( I,θ_1) or 𝔇^'( -I,θ_2) . By definition of 𝔇(I,θ,θ) we have 𝔇(I,θ,θ)=𝔇^'( I,θ) ∪𝔇^'( -I,θ) ∪ I^∘, and θ∈(0,π]. 
If I=1,0,-1 and θ=π/2, for example, 𝔇( I,θ,θ) =Δ and 𝔇^'( I,π/2) =Δ^+ is the upper half disk of Δ. As a consequence of the last two main results we have our forth main result: Let Σ=( f,Δ) ∈𝒮^∗(E_q). Then ∂Σ has a partition (<ref>) satisfying Definition <ref> (1)–(8) and one of the following holds. (i) If q^'=Q(Σ)=2, then Σ is a closed lens 𝔇( p_1p_2,θ_0,θ_0) (see Definition <ref>) with {p_1,p_2}⊂ E_q, θ_0∈(0,π/2) and d( p_1,p_2) =δ_E_q=min{ d( a,b) :a∈ E_q,b∈ E_q,a≠ b} . (ii) If q=3 and E_q is on a great circle on S, then q^'=Q(Σ)=2. (iii) If q=q^'=Q(Σ)=3, and the three point set ( ∂Σ) ∩ E_q={p_1,p_2,p_3} is not in a great circle on S, then ∂Δ has a partition ∂Δ =α_1( a_1,a_2) +α_2( a_2,a_3) +α_3( a_3,a_1) and there is a Jordan curve α^'=α_1^'( a_1,a_2) +α _2^'( a_2,a_3) +α_3^'( a_3,a_1) in Δ such that α^' ∩∂Δ={a_1,a_2,a_3}, f restricted to the domain enclosed by α^' is a homeomorphism onto the triangular domain (see Definition <ref>) T enclosed by p_1p_2 p_3p_1, and f restricted to the closed Jordan domain enclosed by α_j-α_j^' is a homeomorphism onto the closed lune 𝔇^'( p_jp_j+1,θ _j) enclosed by C_j( p_j,p_j+1) +p_j+1p_j, for j=1,2,3 (see Definition <ref>). (iv) If q^'=q=3, or if q^'=2 and q≥3, then n( Σ) =n( Σ,E_3) =∅. (1) We call two surfaces Σ _1=( f_1,U_1) and Σ_2=( f_2,U_2) equivalent and write ( f_1,U_1) ∼( f_2,U_2 ) , if there exists an orientation preserving homeomorphism (OPH) φ :U_1→U_2 such that f_1=f_2 ∘φ. If Σ_1 and Σ_2 are equivalent, then it is clear that H(Σ_1)=H(Σ_2), and thus, it is very useful to regard Σ_1 and Σ_2 as the same surface in our paper. Then we can show that 𝒮^∗ contains at most a finite number surfaces, say, equivalent classes. (2) Similar to (1), we call two curves (α_1,[a_1,b_1]) and (α_2,[a_2,b_2]) on Sequivalent and write (α_1,[a_1,b_1])∼(α_2,[a_2,b_2]) if there is an increasing homeomorphism τ:[a_1,b_1]→ a_2,b_2] such that α_2∘τ=α_1. (3) 𝒮^∗ may contain more than one equivalent classes. When q=3, E_3={0,1,∞}, 𝒮^∗ is consisted of two equivalent classes, and when E_3={0,r,∞}, 𝒮^∗ contains only one equivalent class when d( 0,r) <π/2,by Theorem <ref> and Definition of 𝒮 ^∗. Let 𝐅^∗=𝐅^∗( E_q) be the subspace of 𝐅 such that for each Σ∈𝐅^∗( E_q) , n(Σ ,E_q)=0. Define h_0=h_0(E_q):=sup_Σ∈𝐅^∗( E_q) H(Σ). then we have h_0≤ H_0. As a consequence of Theorem <ref>–<ref>, we can easily prove the following at the end of this paper. If q=3, then H_0=H_0(E_3)=h_0(E_3)=h_0; and if q>3, then there exists a set E_q^' of q distinct points such that H_0(E_q^')>h_0(E_q^'). If E_3={0,1,∞}. Then by Theorem <ref>, Σ_0∈𝒮^∗( E_3) =𝒮^∗( { 0,1,∞}) is the closed lens 𝔇(0,1,θ_0,θ_0) or 𝔇(1,∞,θ_0,θ_0) for some θ_0∈0,π/2]. Then H_0=h_0=A( 𝔇(0,1,θ_0,θ _0)) +4π/L(∂𝔇(0,1,θ_0 ,θ_0))=max_θ∈0,π/2]A( 𝔇 (0,1,θ,θ)) +4π/L(∂𝔇 (0,1,θ,θ)) and Theorem C is recovered with more: The inequality (<ref>) of <cit.> still holds and is sharp without the assumption (<ref>). (A) We always assume the triangulation {U_j}={U_i}_i=1^m_0 of U in Definition <ref> satisfy the rules (1)–(3) of triangulation: (1) There is a closed triangular domain K on ℂ, such that for each j, there exists a homeomorphism φ_jfrom U_j to K, the inverse of vertices and edges of K are called vertices and edges of U_j. We will write 𝐔={U_i,e_i1,e_i2,e_i3}_i=1^m_0 and also call 𝐔 a triangulation of U, the vertices and edges of U_j are also called vertices and edges of 𝐔, respectively, and each U_i is called a closed topological triangular domain (CTTD), or a face of 𝐔. 
(2) For each pair of edges e_i and e_j of 𝐔 with e_i≠ e_j, e_i∩ e_j is empty, or a singleton which is a common vertex of e_i and e_j. (3) For each pair of faces U_i and U_j of 𝐔with U_i≠U_j and U_i∩U_j≠∅, U_i∩U_j is either a common vertex, or a common edge, of U_i and U_j. Then the surface Σ=( f,U) can be identified as the disjoint collection 𝐓={T_i,l_i1,l_i2,l_i3}_i=1^m_0={f(U_i ),f(e_i1),f(e_i2),f(e_i3)}_i=1^m_0 of CTTDs and their edges on S with adjacency condition, the equivalent relation 𝐑 of the collection of edges 𝐄={l_ij,i=1,…,m,j=1,2,3} of 𝐓:l_ij∼ l_i_1j_1 iff the two CTTDs U_i and U_i_1 share an common edge e_ij=e_i_1j_1. Thus f( e_ij) =l_ij=l_i_1j_1=f( e_i_1j_1 )(as sets). Note that this relation depends on the order of U_i and e_ij. When such identification is established, we can get rid of U_j and f. That is, we can understand the surface Σ as the collection 𝐓={T_j,l_j1,l_j2,l_j3}_j=1^m_0 with a relation 𝐑={( i,j,i_1,j_1) }such that l_ij∼ l_i_1j_1 iff ( i,j,i_1,j_1) ∈𝐑. Then, for a pair ( i,j) ∈{ 1,2,…,m_0}×{1,2,3}, l_ij is on the boundary of Σ iff there is no pair ( i_1,j_1) such that ( i,j,i_1,j_1) ∈𝐑, and we can understand Σ as Σ=(𝐓,𝐑), and regard ( 𝐓,𝐑) the Riemann surface of Σ, with a finite number of branch points. In this way we can understand the relation Σ_1∼Σ_2 as this: the Riemann surface of Σ_1 and Σ_2 are the same (when we order U_j and its edges properly). (B) If Σ=( f,U) is a surface, where U is a Jordan domain, we should understand the whole boundary ∂Σas a simple curve on the surface. In fact, we can define the positive distance d_f( ·,·) in Definition <ref>. But for simplicity, sometimes we state that ∂Σ contains a proper closed arc γ( p_1,p_1). This only means ∂ U has a partition ∂ U=α( a_1,a_2) +β( a_2,a_1) such that a_1≠ a_2 but f( a_1)=f(a_2) =p_1. That is to say when we project the curve ∂Σ to S, the arc ( f,α_1) is projected onto the closed path γ( p_1,p_1) . (C) For convenience, we make the agreement: For two faces T_j=( f,U_j) of the partition 𝐓 of Σ,j=1,2, they can be regarded to be closed subdomains of both Σ and S. Regarded to be in Σ, T_1 and T_2 can not intersect when U_1∩U_2=∅. But when they are regarded sets on S, T_1 and T_2 intersect when f( U_1 ) ∩ f(U_2)≠∅. For a set K⊂ S, when we write K⊂ T_1∩ T_2,we only regard T_j as the set f(U_j) on S for j=1,2, say, K⊂ T_1∩ T_2 iff K⊂ f(U_1)∩ f(U_2). § ELEMENTARY PROPERTIES OF SURFACES OF ℱ By Stoilow's theorem, every surface ( f,U) is equivalent to a surface whose defining function is holomorphic on the domain of Definition. (i). (Stoilow's Theorem <cit.> pp.120–121) Let U be a domain on ℂ and let f:U→ S be an open, continuous and discrete mapping. Then there exist a domain V on ℂ and a homeomorphism h:V→ U, such that f∘ h:V→ S is a holomorphic mapping. (ii). Let Σ=(f,U) be a surface where U is a domain on ℂ. Then there exists a domain V on ℂ and an OPH h:V→U such that f∘ h:V→ S is a holomorphic mapping. (iii) Let Σ=(f,U)∈𝐅. Then there exists an OPH φ:U→U such that f∘φ is holomorphic on U. What f is discrete means that f^-1(w)∩ K is finite for any compact subset K of U. Let Σ=(f,U) be a surface where U is a domain on ℂ. Then f:U→ S is the restriction of an OPCOFOM g defined in a neighborhood U_1 of U, and thus by Stoilow's theorem, there exists a domain V_1 on ℂ and an OPH h:V_1 → U_1 such that g∘ h is holomorphic on V_1 and then for V=h^-1(U), f∘ h is holomorphic on V, and thus (ii) holds. Continue the above discussion and assume U is a Jordan domain. 
Then V is also a Jordan domain and by Riemann mapping theorem there exists a conformal mapping h_1 from U onto V and by Caratheodory's extension theorem h_1 can be extended to be homeomorphic from U onto V, and thus the extension of h∘ h_1 is the desired mapping φ in (iii). Since equivalent surfaces have the same area, boundary length and Ahlfors error term (<ref>), when we study a surface Σ=(f,U) in 𝐅 in which U is not specifically given, we can always assume that f is holomorphic on U. We shall denote by D(a,δ) the disk on S with center a and spherical radius δ. Then Δ⊂ S is the disk D( 0,π/2) . Let Σ=( f,U) ∈ℱ and let p∈∂ U. If f is injective near p, then f is homeomorphic in a closed Jordan neighborhood N_p of p in U, and then f(N_p) is a closed Jordan domain on S whose boundary near f(p) is an SCC arc, or two SCC arcs joint at f(p), and thus the interior angle of f(N_p) at f(p) is well defined, called the interior angle of Σ at p and denoted by ∠( Σ,p) . In general, we can draw some paths {β_j}_j=1^k in U with ∪_j=1^kβ_j\{p}⊂ U and β_j∩β_i={p}if i≠ j, such that each ( f,β_j) is a simple line segment on S, ∪_j=1^kβ_j divides a closed Jordan neighborhood N_p of p in U into k+1 closed Jordan domains U_jwith p∈U_j,j=1,…,k+1, and U_i∩ U_j=∅if i≠ j, and f restricted to U_j is a homeomorphism with ( f,U_j) ∈ℱ for each j. Then the interior angle of Σ at p is defined by ∠( Σ,p) =∑_j=1^k+1∠( ( f,U_j) ,p) . The existences of {β_j} _j=1^k will be given later in Corollary <ref> (v). This definition is independent of coordinate transform of U, and thus one can understood it with the assumption that f is holomorphic on U. The following result is a consequence of the previous theorem. Let (f,U)be a surface, U be a domain on ℂ bounded by a finite number of Jordan curves and ( f,∂ U) is consisted of a finite number of simple circular arcs and let q∈ f(U). Then, for sufficiently small disk D(q,δ) on Swith δ<π/2, f^-1(D(q,δ ))∩U is a finite union of disjoint sets {U_j }_1^n in U, where each U_j is a Jordan domain in U, such that for each j, U_j∩ f^-1(q) contains exactly one point x_j and (A) or (B) holds: (A) x_j∈ U_j⊂U_j⊂ U and f:U_j →D(q,δ) is a BCCM such that x_j is the only possible branch point. (B) x_j∈∂ U, f is locally homeomorphic on U_j \{x_j}, and when ( f,U) ∈ℱ, the following conclusions (B1)–(B3) hold: (B1) The Jordan curve ∂ U_j has a partition α_1( p_1,x_j) +α_2( x_j,p_2) +α_3( p_2,p_1) such that α_1+α_2=( ∂ U) ∩∂ U_j is an arc of ∂ U, α_3^∘⊂ U, c_j=( f,α_j) is an SCC arc for j=1,2, and c_3=( f,α_3) is a locally SCC[The condition δ<π/2 makes ∂ D( q,δ) strictly convex, and it is possible that ( f,α_3^∘) may describes ∂ D( q,δ) more than one round, and in this case ( f,α_3^∘) is just locally SCC.] arc in ∂ D( q,δ) from q_2=f( p_2) to q_1=f( p_1). Moreover, f is homeomorphic in a neighborhood of α_j\{x_j}, for j=1,2, in U and ∂( f,U_j) =( f,∂ U_j) =c_1+c_2+c_3. (B2) The interior angle of ( f,U_j) at p_1 and p_2 are both contained in [7π/16,9π/16]. (B3) There exists a rotation ψ of S with ψ(q)=0 such that the following conclusion (B3.1) or (B3.2) holds: (B3.1) q_1=q_2,( f,α_1) =q_1q =q_2q=-( f,α_2) , say, ( f,α _1+α_2) =q_1q+qq_1, and ( ψ∘ f,U_j) is equivalent to the surface[Here δ z^ω_j is regarded as the mapping z↦δ z^ω_j∈ S,z∈Δ^+, via the stereographic projection P.] ( δ z^ω_j:Δ^+) on S so that ( δ z^ω_j,[-1,1]) =a_δ ,0+0,a_δ, where ω_j is an even positive integer and a_δ∈( 0,1) with d( 0,a_δ) =δ. 
(B3.2) q_1≠ q_2, as sets c_1∩ c_2={q}, and ( ψ∘ f,U_j) is equivalent to the the surface ( F,Δ^+∪𝔇_1^' ∪𝔇_2^') so that the following holds. (B3.2.1) 𝔇_1^'=𝔇^'( -1,0,θ_1)and 𝔇_2^'=𝔇^'( 0,1,θ_2), such that for each j=1,2,θ_j∈0,π/4]. Moreover θ_1=0 (or θ_2=0) when c_1=q_1q (or c_2=qq_2), and in this case 𝔇_1^'=∅ (or 𝔇_2^'=∅). See Definition <ref> for the notation 𝔇^'( ·,·) . (B3.2.2) ( F,Δ^+) is the surface T=( δ z^ω_j,Δ^+), where ω_jis a positive number which is not an even number and even may not be an integer, ( F,𝔇_1^') is the lune ψ( 𝔇^'( q_1q ,c_1) ) and ( F,𝔇_2^' ) is the lune ψ( 𝔇^'( qq_2,c_2) ) . That is to say, ( f,U_j) is obtained by sewing the sector ψ^-1( T) with center angle[This angle maybe larger than 2π as the sector ( z^3,Δ^+) .] ω_jπ, and the closed lunes 𝔇^'( q_1q,c_1) and 𝔇 ^'( qq_2,c_2) along q_1q and qq_2 respectively. (A) follows from Stoilow's theorem directly when x_j∈ U. (B) follows from (A) and the assumption ( f,U) ∈ℱ, by considering the extension of f which is an OPCOFOM in a neighborhood of x_j in ℂ. We list more elementary conclusions deduced from the previous lemma directly and more notations. Let Σ=(f,U) with Σ∈ℱ, q∈ f(U), δ, x_j, U_j and α_1+α_2 be given as in Lemma <ref>. (A) If for some j, x_j∈Δ, then by Lemma <ref> (A), f is a BCCM in the neighborhood U_j of x_j in Δ, and the order v_f(x_j) of f at x_j is well defined, which is a positive integer, and f is a v_f(x_j)-to-1 CCM on U_j\{x_j}. (B) If for some j,x_jis contained in ∂Δ, then, using notations in Lemma <ref> (B), there are two possibilities: (B1) q_1=q_2, the interior angle of Σ at x_j equals ω_jπ, and the order v_f( x_j) is defined to be ω_j/2, which is a positive integer. (B2) q_1≠ q_2,c_1+c_2 is a simple arc from q_1 to q, and then to q_2. In this case the interior angle of Σ at x_j equals ω_jπ+φ_1+φ_2, where φ_1 and φ_2 are the interior angles of 𝔇^'( q_1 q,c_1) and 𝔇^'( qq_2 ,c_2) at the cusps, and we defined the order of f at x_j to be the least integer v_f( x_j) with v_f( x_j) ≥( ω_jπ+φ_1+φ_2) /2π. Since ω_jπ+φ_1+φ_2≥ω_jπ>0, we have v_f( x_j) ≥1 and f is injective on U_j \{ c_1+c_2} iff v_f( x_j) =1. This is also easy to see by Corollary <ref> (v). (C) The number v_f(x_j) can be used to count path lifts with the same initial point x_j: when x_j∈Δ, any sufficiently short line segment on S starting from q=f(x_j)has exactly v_f( x_j) f-lifts starting from x_j and disjoint in Δ\{x_j}; and when x_j∈∂Δ, for each arc β of the two sufficiently short arcs of ∂Δ with initial point x_j, (f,β) is simple and has exactly v_f(x_j)-1 f-lifts {β_j} _j=1^v_f( x_j) -1 with the same initial point x_j, β_j\{x_j}⊂Δ for each j and they are disjoint in Δ. This is also easy to see by Corollary <ref> (v). (D) A point x∈U is called a branch point of f (or Σ) if v_f(x)>1, or otherwise called a regular point if v_f( x) =1. We denote by C_f the set of all branch points of f, and CV_f the set of all branch values of f. For a set A⊂U, we denote by C_f( A) =C_f∩ A the set of branch points of f located in A, and by CV_f(K)=CV_f∩ K the set of branch values of f located in K⊂ S. We will write C_f^∗( A) =C_f( A) \ f^-1 (E_q) and C_f^∗=C_f\ f^-1(E_q)=C_f( U) \ f^-1(E_q). (E) For each x∈U, b_f( x) =v_f( x) -1is called the branch number of f at x, and for a set A⊂U we write B_f( A) =∑_x∈ A b_f( x) . Then we have b_f( x) ≠0 iff C_f( x) ={x}, and B_f( A) =∑_x∈ C_f( A) b_f(x). We also define B_f^∗( A) =B_f( A\ f^-1(E_q)) . Then B_f^∗( A) ≥0, equality holding iff C_f^∗( A) =∅. 
When A=U is the domain of definition of f, we write B_f=B_f( U) and B_f^∗ =B_f^∗( U) . Let Σ=( f,U) be a surface in ℱ, let x∈U, and let V be a (relatively)[This means that V=U∩ V^∗, where V^∗ is an open set on ℂ. Thus when x∈∂ U, V contains the neighborhood V^∗∩∂ U of x in ∂ U.] open subset of U. The pair ( x,V) is called a (relatively) disk of Σ with center x and radius δ, if xand V satisfy all conclusions in Lemma <ref> (A) or (B) as x_j and U_jand δ. If x∈ U and ( x,V) is a disk of Σ=( f,U) with radius δ, then V is open in U and V⊂ U. If x∈∂ U and ( x,V) is a disk of Σ with radius δ, then ∂ V has a partition α_1( x^',x) +α_2( x,x^'') +α _3( x^'',x^') such that α_1 +α_2is the old boundary of V, α_3 is the new boundary of V, V=V^∘∪( α_1+α_2) ^∘, c_1=( f,α_1) and c_2=( f,α_2) are SCC arcs, c_3=( f,α_3) is a locally SCC arc, which maybe more than a circle and which is contained in the circle d( f(x),w) =δ, and the interior angles of ( f,V) at x^' and x^'' are contained in [7π/16,9π/16]. If x is fixed and δ tends to 0, then x^' and x^'' tend to x and the interior angles of ( f,V) at x^' and x^'' both tend to π/2. The paths -α_1 and α_2 are called boundary radii of the disk V. Now we can state a direct Corollary to Lemma <ref>. Let Σ=( f,U) ∈ℱ and let ( x_1,U_1) be a disk of Σ with radius δ_1. Then, the following hold. (i) f is locally homeomorphic on U_1\{x_1}; and if ( x_1,U_1^') is another disk of Σ with radius δ_1^'>δ_1, then U_1⊂ U_1^', whether x_1is in ∂ U or U. (ii) If f is homeomorphic in some neighborhood of x_1 in U (which may be arbitrarily small), or if f locally homeomorphic on U, then the disk ( x_1,U_1) is a one sheeted closed domain of Σ, say, f restricted to U_1 is a homeomorphism onto f( U_1) . (iii) For each x_2∈ U_1\{x_1}, any closed disk ( x_2,U_2) of Σ is a one sheeted closed domain of Σ, moreover, U_2⊂ U_1 when the radius of ( x_2,U_2) is smaller than δ-d( f(x_1 ),f(x_2)) . (iv) If x_1∈∂ U, f is regular at x_1 and ( f,∂ U) is circular near x_1, then ( f,U_1) is a convex and one sheeted closed domain of Σ, which is in fact the closed lens 𝔇( I,c_1,c_1^'), where c_1 and c_1^' are circular subarcs of ∂Σ and the circle ∂ D( f(x_1),δ_1) , I is the common chord, and the three paths c_1,-c_1^',I have the same initial point. Moreover, if Σ is regular at x_1 and ∂Σ is straight near x_1, then f(U_1)=𝔇^'( -I,c_1^') =𝔇^'( -c_1,c_1^')is "half" of the disk D(f(x_1),δ_1) on the left hand side of "diameter" c_1 (see Definition <ref> for lenses and lunes). (v) For any x∈U_1, there exists a path I( x_1,x) in U_1 from x_1 to x such that I( x_1,x) is the unique f-lift of f(x_1 )f(x). That is to say, ( f,U_1) can be foliated by the family of straight line segments {( f,I( x_1,x) ) :x∈∂ U_1} and for each pair { I( x_1,x) ,I( x_1,y) } of the family { I( x_1,x) :x∈∂ U_j} with x≠ y, one has I( x_1,x) ∩ I( x_1,y) ={x_1}. Let Σ=(f,U)∈ℱ and let x∈∂ U. Then ∠(Σ,x)>0. This is clear by Lemma <ref> (B) and Remark <ref> (B). If the assumption Σ=(f,U)∈ℱ is not be satisfied, the conclusion may fail. For example, for the convex closed half disk Δ^+ and the disk B={z∈ℂ:| z-1/2| <1} in ℂ, T=Δ^+\ B can be regarded as a surface on S (via the sterographic projection), whose interior angle at the origin equals 0, and it is clear that T ∉ℱ. In fact the part of ∂ T lying on ∂ B is not convex on S. Lemma <ref> also directly implies the following lemma. Let (f,U)∈ℱ. Then the following hold. (A) For each p∈U,f restricted to some neighborhood of p in U is a homeomorphism if one of the following alternatives holds. 
(A1) p∈ U and p is a regular point of f. (A2) p∈∂ U, p is a regular point of f and (f,∂ U) is simple in a neighborhood of p on ∂ U. (B) For any SCC arc ( f,α) of ∂Σ=( f,∂ U) , f restricted to a neighborhood of α^∘ in U is homeomorphic if and only if h has no branch point on α^∘. (A) follows from the definition of ℱ. (B) follows from (A). The hypothesis in (A2) that (f,∂ U) is simple is necessary: f(z)=z^2,z∈Δ^+, is regular at z=0 but not injective in any neighborhood of 0. (<cit.> p. 32–35) Let Σ=(f,Δ)∈ℱ and let β=β( q_1,q_2) be a path on S from q_1 to q_2. Assume that α=α( p_1,p) is a path in Δ from p_1 to p which is an f-lift of a subarc β( q_1,q) of β from q_1 to q, with α\{p_1}⊂Δ and f( p_1) =q_1. Then α can be extended to an f-lift α^'=α( p_1,p^') of a longer subarc of β with α^'∘⊂Δ, such that either p^'∈∂Δ, or p^'∈Δ and α^' is an f-lift of the whole path β. The following lemma is obvious. Any two distinct great circles on S intersect at exactly two points, which are antipodal points on S. The following result follows from Definition <ref>, which is essentially Lemma 5.2 of <cit.>. Let (f,Δ)∈𝐅 and let D be a Jordan domain on S such that f^-1 has a univalent branch[Univalent branch for the inverse of an OPCOFOM always means an OPH in this paper.] g defined on D. Then g can be extended to a univalent branch of f^-1 defined on D. The following result follows from the Argument principle. Let D_1 and D_2 be Jordan domains on ℂ or S and let f:D_1→D_2 be a mapping such that f:D_1→ f(D_1) is a homeomorphism. If f(∂ D_1)⊂∂ D_2, then f(D_1 )=D_2. The following result is a generalization of the existence of lifts of curves for a CCM. Let U be a domain on S enclosed by a finite number of Jordan curves, f:U→ Sbe a finite-to-one mapping which is locally homeomorphic on U, Γ_n:[0,1]→ S be a sequence of paths on S which converges to a path Γ_0:[0,1]→ S uniformly, and let a∈U. If for each n,Γ_n has an f-lift I_n :[0,1]→Ufrom a, say, I_n is a path with f( I_n( s) ) =Γ_n( s) for all s∈0,1], and I_n( 0) =a. Then I_n( s) uniformly converges to a path I_0( s) ,s∈0,1], in U, such that I_0 is an f-lift of Γ_0 with I_0( a) =a. We first show the following. For any s_0∈0,1], if lim_n→∞I_n( s_0) → a_0, then s_0 has a neighborhood N_s_0( ε) =[s_0-ε ,s_0+ε]∩0,1] in [0,1], such that I_n(s) converges to an arc I_0( s) uniformly on N_s_0 with I_0 (s_0)=a_0 and f(I_0( s) )=Γ_0(s) for all s on N_s_0. Let w_0=Γ_0( s_0). Then w_0=f( a_0) and f^-1(w_0)={a_j}_j=0^m is a finite set. Since f is locally homeomorphic, each a_j has a connected and (relatively) open neighborhood U_j in U such that U_i∩ U_j=∅ when i≠ j and f:U_j→ f(U_j) is a homeomorphism for each j=0,1,2,…,m. Then we have w_0 is outside the compact subset f( U\∪_j=0^mU_j)of S, say, there exists a disk D( w_0,δ) on S such that f^-1(D( w_0,δ) )⊂∪_j=0^mU_j. s_0 has a connected neighborhood N_s_0 in [0,1] so that Γ_0( N_s_0) ⊂ D( w_0,δ/2) and thus Γ_n( N_s_0) ⊂ D( w_0 ,δ) for all n>n_0 for some n_0>0. Then I_n( N_s_0) ⊂∪_j=0^mU_j and, since I_n(N_s_0) is connected, we have I_n( N_s_0) ⊂ U_0 for all n>n_0. Therefore, Γ_n( N_s_0) ⊂ f(U_0) for n>n_0. It is clear that Γ_0( N_s_0) ⊂f(U_0) since Γ_n(N_s_0) converges to Γ_0( N_s_0) . Since f:U_0→ f(U_0) is homeomorphic, we conclude that I_n( s) converges to the path I_0( s) =f^-1(Γ_0( s) )∩U_0 uniformly for s∈ N_s_0with I_0(s_0)=a_0. It is obvious that I_0( s) ,s∈ N_s_0, is an f-lift of Γ_0( s) ,s∈ N_s_0 ,and Claim <ref> is proved. Let Abe the set of t∈0,1] such that lim_n→∞I_n( t) exists. 
Then A is an open subset of [0,1] by Claim <ref>. Let B=[0,1]\ A. We show that B is also an open subset of [0,1]. Let s_1∈ B. Since U is compact, I_n( s_1) has two subsequences I_n_k^j( s_1) such that I_n_k^j( s_1) → a_1j( k→∞) with j=1,2 and a_11≠ a_12. Then Claim <ref> applies to Γ_n_k^j and Γ_0, and s_1 has a neighborhood N_s_1 in [0,1] such that I_n_k^j( s) converges uniformly to a path I_0j( s) ,s∈ N_s_1, and I_0j is an f-lift of Γ_0|_N_s_1, j=1,2. Then both I_0j( s) are continuous with I_01(s_1)=a_11≠ a_12=I_02(s_1), and then I_01∩ I_02=∅ when N_s_1 is chosen small enough. This implies that I_n( s) cannot converge as n→∞ for any s∈ N_s_1, and so B is open. Since 0∈ A, we have A=[0,1] and B=∅. We have proved that I_n converges to a path I_0 uniformly on [0,1], and it is clear that I_0 is the f-lift of Γ_0. § SEWING TWO SURFACES ALONG A COMMON BOUNDARY ARC We now introduce the method used to sew two surfaces sharing a common boundary arc. We let H^+ and H^- be the upper and lower open half planes of ℂ. Then H^+ and H^- can be regarded as open hemispheres on S, and H^+ and H^- can be regarded as the closures of H^+ and H^- on S. For a closed curve γ in ℂ, we write γ^±=H^±∩γ. But recall that Δ^± always denotes H^±∩Δ, not H^±∩Δ. Then ( ∂Δ) ^±=( ∂Δ) ∩H^±={ z=e^± iθ:θ∈[0,π]}. A surface Σ=( f,Δ) can be cut into two subsurfaces Σ_1=( f,Δ^+) and Σ_2=( f,Δ^-) by the diameter [-1,1] of Δ. Conversely, we can recover Σ by sewing Σ_1 and Σ_2 along ( f,[ -1,1] ). The interval [ -1,1] in Δ is called the suture line when we sew Σ_1 and Σ_2. This trivial observation is generalized in Lemma <ref> and Lemma <ref>. Let Σ=( f,Δ) be a surface and B={z∈ℂ:|z-1/2|<1/2}. Then ∂ B cuts the surface Σ into two subsurfaces Σ_1=( f,Δ\ B) and Σ_2=( f,B). Conversely, we can glue Σ_1 and Σ_2 to recover the surface Σ. This trivial observation is generalized in Corollary <ref>. For j=1,2, let Σ_j=(f_j,U_j) be a surface and let α_j=α_j( x_j1,x_j2) be a proper arc of ∂ U_j such that ( f_j,α_j) is a simple arc with distinct endpoints. If γ=(f_1,α_1)∼-(f_2,α_2), then (f_1,U_1) and (f_2,U_2) can be sewn along γ=( f_1,α_1) to become a surface Σ_3=(f_3,Δ) with suture line [-1,1], such that the following hold: (i) There exist orientation-preserving homeomorphisms (OPHs) h_1:U_1→Δ^+ and h_2:U_2→Δ^-, called identification mappings (IMs), such that (h_1,α_1)∼[-1,1]∼-( h_2,α_2) =( h_2,-α_2), f_1∘ h_1^-1( x) =f_2∘ h_2^-1(x),∀ x∈[-1,1], and f_3(z)=f_1∘ h_1^-1(z) for z∈Δ^+, f_3(z)=f_2∘ h_2^-1(z) for z∈Δ^-\[-1,1], is a well defined OPCOFOM, and we have the equivalence relations (f_3,Δ^+)∼(f_1,U_1), (f_3,Δ^-)∼(f_2,U_2), ∂Σ_3=( f_3,( ∂Δ) ^+) +( f_3,( ∂Δ) ^-) ∼( f_1,( ∂ U_1) \α_1^∘) +( f_2,( ∂ U_2) \α_2^∘), and (f_3,[-1,1])∼(f_1,α_1)∼(f_2,-α_2). (ii) L(∂Σ_3)=L(∂Σ_1)+L(∂Σ_2)-2L(γ), A(Σ_3)=A( Σ_1) +A(Σ_2), n( Σ_3) =n( Σ_1) +n( Σ_2) +#( γ^∘∩ E_q), and R(Σ_3)=R(Σ_1)+R(Σ_2)-4π#( γ^∘∩ E_q). (iii) z∈ C_f_3( Δ\{-1,1}) if and only if h_1^-1(z)∈ C_f_1( U_1\∂α_1) or h_2^-1(z)∈ C_f_2( U_2\∂α_2). In particular, if f_1(∂α_1)⊂ E_q, then f_2(∂α_2)⊂ E_q and in addition CV_f_3(S\ E_q)=CV_f_1(S\ E_q)∪ CV_f_2(S\ E_q). Recall that C_f( A) is the set of branch points of f located in A and CV_f( T) is the set of branch values of f located in T (see Remark <ref> (D)), ∂α denotes the set of endpoints of α and α^∘ denotes the interior of α (see Definition <ref>). The condition (f_1,α_1)∼(f_2,-α_2) is crucial (see Remark <ref> for the relation ∼); note that (f_2,-α_2)=-( f_2,α_2) and ( f_2,α_2) are the same path with opposite orientations.
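For instance (a quick numerical check, using the normalization A(S)=4π), in the example below, where the closed hemispheres H^+ and H^- are sewn along the half great circle γ=∞,-1,0, the relations in (ii) of the preceding lemma read L(∂Σ_3)=2π+2π-2·π=2π, which is exactly the total length of the two arcs 0,1,∞ and ∞,1,0 constituting ∂Σ_3, and A(Σ_3)=2π+2π=4π=A(S); moreover, provided γ^∘∩ E_q=∅, we get n( Σ_3) =n( Σ_1) +n( Σ_2) and R(Σ_3)=R(Σ_1)+R(Σ_2).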
Two copies of the hemisphere H^+ on S cannot be sewn along their common boundary section ∞,-1,0⊂ S to become a surface, but H^+ and H^- can be sewn along ∞,-1,0 to become the surface ( f_3,Δ) , where f_3|_Δ^± are homeomorphisms from Δ^± onto H^±, and f_3 maps [-1,1] onto ∞,-1,0, ( ∂Δ) ^+=( ∂Δ) ∩H^+ onto 0,1,∞, and ( ∂Δ) ^-=( ∂Δ) ∩H^- onto ∞,1,0. The conclusion (i) in fact gives a routine how to sew Σ _1 and Σ_2, which is inspired by Example <ref>. By (<ref>), there exists an orientation-preserving homeomorphism (OPH) [Note that -α_2 is the same path with opposite direction, not the set {-y:y∈α_2}.] φ:α_1→ -α_2 such that ( f_1,α_1) =( f_2∘φ,α_1) , that is f_2( φ(x)) ≡ f_1(x),∀ x∈α_1. Let h_1:U_1→Δ^+ be any OPH such that h_1(α_1)=[-1,1]. Then let h_2:U_2 →Δ^- be an OPH such that h_2( y) ≡ h_1( φ^-1(y)) ,∀ y∈α_2. In fact, h_2|_α_2 defined by (<ref>) is an OPH from α_2 onto [1,-1] and can be extended to be an OPH h_2 from U_2 onto Δ^-. The pair of h_1 and h_2 are the desired mappings satisfying (i). Then (ii) is trivial to verify. To prove (iii) we may assume that Σ_1 and Σ_2 are the surfaces Σ_±=(f_±,Δ^±) such that f_± agree on [-1,1], and then f_3 defined by f_± on Δ^±is an OPCOFOM. Then f_± are the restrictions of f_3 to Δ^±, and thus x∈( -1,1) is a branch point of f_3, say x∈ C_f_3∩( -1,1) , iff x is a branch point of f_+ or f_-, say x∈ C_f_1( -1,1) ∪ C_f_2( -1,1) . In consequence we have C_f_3( Δ\{-1,1}) =C_f_1( Δ^+\{-1,1}) ∪ C_f_2( Δ^-\{-1,1}) , and then all conclusions of (iii) follow. The suture line may not be straight: the sewn surface Σ_3=( f_3,Δ) can be reparametrized as Σ^'=( f_3^',U) with f_3^' =f_3∘ψ, where ψ is a OPH from U onto Δ for some Jordan domain U, and for Σ^', the suture line becomes h^-1([-1,1]). In the previous lemma, the sewing process can be understood as an abstract process via equivalent relations. Let U_j ,Σ_j=( f_j,U_j) ,α_j⊂∂ U_j satisfy all assumption of the previous lemma. Then we can define an equivalent relation ∼ on the disjoint union U_1 ⊔U_2. For any pair of points x and y in U_1⊔U_2, x∼ y if and only if one of the three conditions holds: (1)x=y∈U_1, (2) x=y∈U_2, (3) x∈α_1,y∈α_2 and f_1(x)=f_2(y). Since ( f_1,α_1) is simple and ( f_1,α_1) ∼-( f_2,α_2) , ( f_2,α_2)is also simple, and thus for each x∈α_1 that y∈α_2 with f_1(x)=f_2(y) is unique, and vice versa for each y∈α_2. We write [x] the equivalent class of x: x]={y∈U_1⊔U_2:y∼ x}. Then for x∈( U_1\α_1^∘) ⊔( ( U_2\α_2^∘) ) , [x] contains only one point in U_1⊔U_2, and for x∈α_1,[x] contains two points in U_1⊔U_2, say [x]={x,y} with f_1(x)=f_2(y) and y∈α_2. The previous lemma show that the quotient space Q=( U_1⊔U_2 ) /∼, with the quotient topology, is topologically equivalent to the unit disk Δ. Then the sewn surface ( f_3,Δ) can be identified as a representation of the abstract space ( f̃,Q) , where f̃( [x]) =f_1(x) when x∈ x]∩U_1, or f̃( [x]) =f_2( y) when y∈ x]∩U_2 ,∀ x]∈Q, is well defined. In this abstract version, [α_1]=[α_2] is the suture line, the quotient mapping x→ x] is the IM. U_1 may intersects U_2, but for the union U_1⊔U_2 a point in U_1 and a point in U_2 are always regarded as distinct points. In fact we may assume U_1 and U_2 are bounded with positive distance. For the sewn surface ( f_3,Δ) in the lemma there exists a homeomorphism h from Q=( U _1⊔U_2) /∼ onto Δ, such that h( [ α_1] ) =[-1,1], with h([x])={[ h_1(x),x∈U_1,; h_2( x) ,x∈U_2. ]. The topological equivalence h:Q→Δ is also called the IM, when we use Δ to represent Q. 
All surfaces obtained by sewing surfaces along arcs can be interpreted in this way. Though this way is abstract, it keeps more information of the elder surfaces than the new sewn surface as Σ_3 in Lemma <ref>, and moreover, it is easier to state the process than that version which involves the concrete IMs and suture lines . Let γ be a simple arc on S with[Recall that #∂γ=2means γ contains two distinct endpoints.] #∂γ=2 (recall that ∂γ is the endpoints of γ). Let S_γ=( f,U) be a surface in 𝐅 such that f:U→ S\γ is a homeomorphism and ∂ S_γ=( f,∂ U) =γ-γ. Then R(S_γ)=4π#γ^∘∩ E_q+4π#( ∂γ) ∩ E_q-8π≤4π#γ^∘∩ E_q, equality holding if and only if ∂γ⊂ E_q. Since #E_q=q,we have n( S_γ) =#( S\γ) ∩ E_q=q-#γ∩ E_q=q-#γ^∘∩ E_q-#( ∂γ) ∩ E_q =q-2-#γ^∘∩ E_q+2-#( ∂γ) ∩ E_q Since A(S_γ)=4π, (<ref>) follows from R(S_γ) =( q-2) A(S_γ)-4πn( S_γ) =4π#γ^∘∩ E_q-8π+4π#( ∂γ) ∩ E_q ≤4π#γ^∘∩ E_q, with equality if and only if ∂γ⊂ E_q . Here is a useful result directly from Lemma <ref>: Let Σ=( f,Δ) be a surface in 𝐅 and α be a simple arc in Δ such that α^∘⊂Δ,∂α⊂∂Δ and #∂α=2. If γ=( f,α) is simple, then α cuts the disk Δ into two Jordan domains Δ_1 and Δ_2 and for the surfaces Σ_j=( f,Δ_j ) ,j=1,2, we have R(Σ)=R(Σ_1)+R(Σ_2)-4π#γ^∘∩ E_q. Moreover, if ∂α⊂ E_q, then we have CV_f( S\ E_q) =CV_f_1(S\ E_q)∪ CV_f_2( S\ E_q) . In fact Σ can be recovered by gluing the surfaces ( f,Δ_1) and (f,Δ_2)along γ, and so the desired equalities follows from (<ref>) and (<ref>). For two equivalent curves γ_j:α_j→ S, we will write γ_1=γ_2 when there is no confusion. Then Lemma <ref> can be restated as simplified but more useful versions (Lemmas <ref> and <ref>). Let Σ_1 and Σ_2 be two surfaces in 𝐅 such that ∂Σ_1=γ+Γ_1 and ∂Σ_2 =-γ+Γ_2, where γ is a simple and proper arc of ∂Σ_1 which is not closed, say, it has distinct endpoints. Then -γ is a proper subarc of ∂Σ_2, and Σ_1 and Σ_2 can be sewn along γ, resulting a surface Σ_3=( f_3,Δ) such that the following hold. (i). ∂Σ_3=Γ_1+Γ_2, L(∂Σ_3)=L(∂Σ_1)+L(∂Σ_2)-2L(γ), A(Σ_3)=A( Σ_1) +A(Σ_2), n( Σ_3) =n( Σ_1) +n( Σ_2) +#( γ^∘∩ E_q) , R(Σ_3)=R(Σ_1)+R(Σ_2)-4π#( γ^∘∩ E_q) . (ii). If ∂γ⊂ E_q, then CV_f_3(S\ E_q)=CV_f_1(S\ E_q)∪ CV_f_2 (S\ E_q). (iii). If ∂Σ_2=-γ+γ and the interior of Σ_2 is the simple domain S\γ, then ∂Σ_3=∂Σ_1=γ+Γ_1, and if in addition[note that by assumption #∂γ=2, say, γ contains two distinct endpoints.] ∂γ⊂ E_q, then the following two equalities hold: R( Σ_3) =R( Σ_1) , and CV_f_3(S\ E_q)=CV_f_1(S\ E_q). All conclusions, except (<ref>) and (<ref>), follow from Lemma <ref>. Assume ∂Σ_2=-γ+γ, the interior of Σ_2 is the simple domain S\γ, and ∂γ⊂ E_q. Then we have CV_f_2(S\ E_q)=∅, and thus (<ref>) follows from (ii). On the other hand, by Lemma <ref> we have R(Σ_2)=4π#( γ^∘∩ E_q) , which with (<ref>) implies (<ref>). Σ_3 in (iii) can also be obtained by continuously extending Σ_1: let the two endpoints of γ be fixed and let γ continue to move to the right hand side, and return to the initial position of γ after scanning the whole sphere. Let Σ=( f,Δ) be a surface such that ∂Σ=γ+Γ where γ is a simple arc on S with #∂γ=2. Let T be a closed Jordan domain on S with ∂ T=γ-γ^'. Then Σ and T^c=S\ T^∘ can be sewn along γ, resulting a surface Σ^'=( f^',Δ) such that ∂Σ^'=Γ+γ^'. Moreover, in the case ∂γ⊂ E_q we have R( Σ) =R(Σ^')+R(T)-4π#E_q∩γ ^'∘, and CV_f^'( S\ E_q) =CV_f( S\ E_q) . It is trivial to see that ∂ T^c=-γ+γ^', and then by Lemma <ref> (i) we have ∂Σ^'=Γ +γ^'.Assume ∂γ⊂ E_q. Then by Lemma <ref> (ii) we have CV_f^'(S\ E_q)=CV_f (S\ E_q)∪ CV_id(T_c\ E_q )=CV_f(S\ E_q). 
The surface Σ^' can also be obtained in this way: first sew Σ and the surface whose interior is S\γ and boundary is γ-γ, obtaining a surface Σ^''=( f^'',Δ) , and then cut from the new surface Σ^'' the the domain T^∘, together with the open boundary γ^∘, along γ^', obtaining the surface Σ^'=( f^',Δ) . Then we have by Lemma <ref> (iii) that R(Σ^'')=R(Σ). And by Lemma <ref> we have R(Σ)=R(Σ^'')=R(Σ^')+R(T)-4π#E_q ∩γ^'∘. Lemma <ref> can be extended to the case that α_2 is the whole boundary ∂ U_2 but α_1 is a proper arc of ∂ U_1, as the reverse process of Example <ref>. Let Σ_1=(f_1,U) be a surface in 𝐅 and assume ∂Σ_1 has a partition ∂Σ_1=γ+Γ such that γ=( f,α) is a Jordan curve on S, where α=α( x_1,x_2) is a proper arc of ∂ U. Let T_γ be the closed Jordan domain enclosed by γ and let T^c=S\ T_γ^∘. Then Σ_1 and T^c can be sewn along γ becoming a surface Σ_2=( f_2,Δ)such that for the disk B={z∈ℂ:|z-1/2|<1/2} the following holds: (i) There exist an OPCOFOM h_1:U→Δ\ B and an OPH f_1^':B→ T^c such that h_1:U\{x_1,x_2}→( Δ\ B) \{1} is an OPH, (h_1,∂ U\α^∘)=∂Δ, (h_1,α)=-∂ B, h_1( x_1) =h_1( x_2) =1, f_1^'( y) =f_1∘ h_1^-1( y) , y∈∂ B, and f_2( z) ={[ f_1∘ h_1^-1( z) ,z∈Δ\ B,; f_1^',z∈ B. ]. . (ii) p∈Δ\∂ B is a branch point of f_2 if and only if h_1^-1(p)∈ C_f_1( U\α) . (iii) p∈( ∂ B) \{1} is a branch point of f_2 if and only if h_1^-1(p)∈ C_f_1( α^∘) . (iv) ∂Σ_2=Γ, when Γ is viewed as a closed curve on S, and moreover L(∂Σ_1)=L(∂Σ_2)+L(γ), A(Σ_1)=A(Σ_2)+A(T_γ)-4π, n( Σ_1) =n( Σ_2) +n( T_γ) -q+χ_E_q(f( x_1) ), where χ_E_q( w) ={[ 0,w∉ E_q,; 1,w∈ E_q. ]. R(Σ_1)=R(Σ_2)+R(T_γ)+8π-4πχ_E_q(f_1 (x_1))≥ R(Σ_2)+R(T_γ)+4π, equality holding if and only if f( x_1) =f( x_2) ∈ E_q; and when (<ref>) holds, we have CV_f_1( S\ E_q) =CV_f_2( S\ E_q) . (i) in fact gives the method how to sew Σ_1 and T^c along γ, all of (i) is easy to see. Then (ii) and (iii) follows from (i). The relation ∂Σ_2=Γ and (<ref>)are trivial to see. (<ref>) follows from the equalities A(Σ_1)=A(Σ_2 )-A(T^c)and A(T^c)=A(S\ T_γ)=4π-A(T_γ). It is clear that[Note that by definition of n, n( T^c) =n( S\ T_γ) =#( S\ T_γ) ∩ E_q, say, any point of E_q on the boundary is not counted for n.] n( T^c) =q-#γ∩ E_q-n( T_γ) , and that n( Σ_1) =n( Σ_2) -n( T^c) -#γ∩ E_q+χ_E_q(f( x_1) ), where χ_E_q(f( x_1) ) appears because after the sewing, the endpoints of α are sewn to one boundary point of Δ. In consequence we have (<ref>). (<ref>) follows from (<ref>) and (<ref>). When (<ref>) holds, (<ref>) follows from (ii) and (iii) directly. Therefore all conclusions in (iv) hold. Assume that the arc α=α( x_1,x_2) in Corollary <ref> is the whole boundary ∂ U, say, x_2=x_1, and γ=( f,∂ U) =∂Σ is still simple, say, γ=∂Σ is a Jordan curve. Then we can sew Σ_1 and the closed domain T_c=S\ T_γ^∘ so that the result surface Σ_2 is a closed surface, say, Σ_2=( f_2,S) , where S is the sphere. Then above equations in the proof all hold when we replace χ_E_q(f(x_1)) by 0, even if f(x_1)∈ E_q. This can be explained in another way: we may choose the end point x_2=x_1 of α not contained in f_1^-1(E_q), then χ_E_q(f(x_1))=0 and thus the above argument works. This means that we have Let Σ_1=( f_1,Δ) be a surface such that ∂Σ is a Jordan curve on S and let T_∂Σ be the closed domain enclosed by ∂Σ on S. Then for the closed surface Σ_2=( f_2,S) over S which is obtained by sewing Σ and T^c=S\ T_∂Σ^∘ along ∂Σ, we have R(Σ)=R(Σ_2)+R(T_∂Σ)+8π. Moreover, we have CV_f_2=CV_f_1. The equivalent class argument for two surface in Remark <ref> can be used for the above method. 
We left this to the reader. For a surface Σ=( f,Δ) ∈ℱ, we can cut Δ by the radius [0,1] to obtain a surface Σ_1 whose interior is ( f,Δ\0,1]) and boundary is ∂Σ_1+( f,[ 1,0] ) +( f,[0,1]) . Σ_1 can be expressed as ( f_1,Δ^+) which split the arc ( f,[0,1]) into two boundary arcs of Σ_1, where f_1( z) =f( z^2). Conversely, we can sew Σ_1 along ( f_1,[0,1]) to recover the surface Σ. This trivial observation is generalized in Lemma <ref>. Let Σ=(f,U )be a surface in 𝐅 such that ∂ U has a partition ∂ U=α_1( x_1,x_2) +α_2( x_2,x_3) +A( x_3,x_1) , ∂Σ has the corresponding partition ∂Σ=γ-γ+Γ=( f,α_1) +( f,α_2) +( f,A) , and γ=( f,α_1) =-( f,α_2) is a simple arc with distinct endpoints. Then Σ can be sewn along γ becoming a surface Σ_1 such that the following hold: (i) If Γ is a just a point, say ∂Σ=γ-γ, then Σ_1 is a closed surface Σ_1=( f,S) such that R(Σ)=R(Σ_1)+4π#γ∩ E_q, and CV_f=CV_f_1. (ii) If Γ is not a point, then Σ_1 is a surface Σ _1=( f_1,Δ) , such that ∂Σ_1=Γ A(Σ_1)=A( Σ) ,L(∂Σ_1)=L(∂Σ)-2L(γ), CV_f_1( Δ) =CV_f( Δ) or CV_f_1( Δ) =CV_f( Δ) ∪{f( x_1) }, R(Σ)=R(Σ_1)+4π#( [ γ\{f( x_1) }] ∩ E_q) , and if f(x_1)∈ E_q, then the following two equalities hold: CV_f_1(S\ E_q)=CV_f( S\ E_q) , and R(Σ)=R(Σ_1)+4π#γ∩ E_q-4π. Assume Γ is a point. Then we may understand the surface Σ is obtained by a closed surface Σ _1=( f_1,S) so that f_1 maps [0,1] homeomorphically onto γ and that the interior of Σ is the open surface f_1:S\0,1]→ S and the boundary of ∂Σ is ( f_1,[0,1]) +(f_1,[1,0])=γ-γ. From this we have the result (i). Assume Γ is not a point, say, α_1+α_2 is a proper arc of ∂ U, then (f,U) can be sewn along γ to become a surface Σ_1=(f_1,Δ): There exists an OPCOFOM h:U→Δ such that h|_U\α_1+α_2→Δ\0,1] is an OPH, ( h,α_1) =-[0,1]=[1,0],( h,α_2) =[0,1], f(h^-1( y) ∩α_1)=f(h^-1( y) ∩α_2),y∈0,1], and f_1( z) =f∘ h^-1( z) ,z∈Δ. It is clear that Σ_1=( f_1,Δ) is a well defined surface and (<ref>)–(<ref>) hold. Moreover we have n( Σ) =n( Σ_1) -#E_q∩[ γ\{f( x_1) }] , which with the first equality in (<ref>) implies (<ref>). Then, under the special case f( x_1) ∈ E_q, (<ref>) and (<ref>) are just speecial case (<ref>) and (<ref>). § THE SPHERICAL ISOPERIMETRIC INEQUALITIES In this section we list some results follows from Bernstein's isoperimetric inequalities. Let Γ be a closed curve on S. (i) (Bernstein <cit.>) If L=L(Γ)≤2π, Γ is simple and contained in some hemisphere S_1 on S, then the area A of the Jordan domain of S_1 bounded by Γ satisfies A≤2π-√(4π^2-L^2), with equality if and only if Γ is a circle. (ii) (Radó <cit.>) If L(Γ)<2π and Γ is simple, then Γ lies in some open hemisphere on S. (iii) If L(Γ)<2π and Γ consists of finitely many circular arcs on S, then Γ also lies in some open hemisphere on S. (i) and (ii) are known, and (iii) can be proved by (i) and (ii) as in <cit.>. Let f:Δ→ Sbe an OPCOFOM. If f(Δ)⊂ S\ E_q and f(∂Δ) lies in some open disk D in S\ E_q, then f(Δ)⊂ D. By the assumption, f(∂Δ)∩( S\ D) =∅. If f(Δ)∩( S\ D) ≠∅, then by the argument principle we have f(Δ)⊃ S\ D⊃ E_q, contradicting the assumption f(Δ)⊂ S\ E_q. The following lemma is Lemma 3.5 in <cit.>. For each k=1,…,n, let F_k≠∅be a domain in a hemisphere S_k on S which is enclosed by a finite number of Jordan curves and let l_k=L(∂ F_k). If l=∑_k=1^nl_k<2πand D_l is a disk in some hemisphere on S with L(∂ D_l)=l, then A(F_1)+⋯+A(F_n)≤ A(D_l)=2π-√(4π^2-l^2), with equality if and only if n=1 and F_1 is also a disk. The following result is a consequence of Theorem 3.6 in <cit.>. Let Σ=(f,U)∈ℱ. 
Assume f(U) is contained in some open hemisphere and L(f,∂ U)<2π. Then A(f,U)≤ A(T)<L(f,∂ U), where T is a disk in some open hemisphere on S with L(∂ T)=L(f,∂ U). But the author of <cit.> did not prove that the equality in (<ref>) holds only if Σ is a simple disk with perimeter L(f,∂ U). We will improve this and give a self-contained proof after we introduce an area formula. For a rectifiable Jordan curve γ in ℂ which is oriented anticlockwise, the spherical area A(γ) of the domain D_γ in ℂ enclosed by γ can be defined by A(γ)=∬_D_γ 4dx∧ dy/(1+zz̅)^2 =(2/i)∫_γ z̅dz/(1+zz̅), since the exterior differential of z̅dz/(1+zz̅) equals d( z̅dz/(1+zz̅)) =dz̅∧ dz/(1+zz̅)-zz̅ dz̅∧ dz/( 1+zz̅) ^2=dz̅∧ dz/( 1+zz̅) ^2=2idx∧ dy/( 1+zz̅) ^2. Note that for γ⊂ℂ the value A(γ) depends on the orientation of γ: it is positive when γ is anticlockwise and negative when γ is clockwise. If Σ=(f,U) is a surface over S, then the area may not be determined by the boundary ∂Σ=( f,∂ U) alone. But if ∞∉∂Σ, then A(Σ) is determined by ∂Σ and the covering number deg_f(∞) of f, where deg_f(∞) is defined to be lim_q_n→∞ #f^-1(q_n) in which q_n is a sequence of regular values of f converging to ∞. That is to say, we have the following. Let Σ=( f,U) be a surface in which U is a domain enclosed by a finite number of piecewise smooth Jordan curves and ∂Σ=( f,∂ U) is also piecewise smooth. Assume that ∞∉∂Σ. Then A(Σ)=4π deg_f(∞)+A(∂Σ), where A(∂Σ)=(2/i)∫_∂Σ w̅ dw/(1+|w|^2)=(2/i)∫_∂ U f̅(z) df(z)/(1+|f(z)|^2). This essentially follows from the argument principle for meromorphic functions. By Theorem <ref> (ii), we may assume f is meromorphic on U. Let p_1,…,p_k be all the poles of f, which are exactly the distinct points of f^-1(∞), with multiplicities m_1,…,m_k. Then {p_j}⊂ U and ∑_j=1^km_j=deg_f(∞). For each j=1,…,k, let C_j,ε be a small circle centered at p_j with radius ε. Then we have (2/i)∫_∂ U-C_1,ε-⋯-C_k,ε f̅(z)f^'(z)dz/(1+|f(z)|^2)= ∬_U\∪_j=1^kD( p_j,ε) 4| f^'(z)| ^2dx∧ dy/( 1+|f(z)|^2) ^2→ A(Σ) as ε→0. For each j=1,…,k, we have by the argument principle that lim_ε→0(2/i)∫_-C_j,ε f̅(z)f^'(z)dz/(1+|f(z)|^2)=lim_ε→0(2/i)∫_-C_j,ε| f(z)| ^2dlog f(z)/(1+|f(z)|^2) =lim_ε→0(2/i)∫_-C_j,εdlog f(z)=4π m_j. Then the conclusion follows from ∑ m_j=deg_f(∞). Assume Σ_j=( f_j,Δ) ,j=1,2, are two surfaces with ∂Σ_1=∂Σ_2 and ∂Σ_1 is piecewise smooth. Then A(Σ_1)-A(Σ_2)=4π n_0, where n_0 is an integer. When ∞∉∂Σ_1, this follows from the previous lemma directly. When ∞∈∂Σ_1 we can consider the surfaces Σ_j^'=( φ∘ f_j,Δ) ,j=1,2, where φ is a rotation of S such that ∞∉∂Σ_j^',j=1,2. It is clear that A(Σ_j)=A(Σ_j^'), and then we can apply the previous lemma to Σ_j^',j=1,2, to obtain the conclusion. Let Γ:∂Δ→ S\{∞} be a closed curve in ℂ consisting of finitely many simple circular arcs, and let φ_t, t∈[0,1], be a family of rotations on S which is continuous with respect to t and satisfies φ_t( Γ) ∩{∞}=∅ for all t∈[0,1]. Then A(φ_t∘Γ)=A(Γ). For each a∈ℂ\Γ let n( Γ,a) =(1/2π i)∫_Γdz/(z-a). Then for each component V of ℂ\Γ, n( Γ,a) is a constant integer n_V=n( Γ,a) for all a∈ V, and then we can write n_V=n( Γ,V). We call n_V=n( Γ,V) the index of V with respect to Γ. It is clear that the index of the unbounded component is zero. Let V_1,…,V_m be all the distinct bounded components of ℂ\Γ. Then we have A(Γ)=∑_j=1^mn_V_jA(V_j). By the assumption it is clear that for each t∈[0,1], V_t,j =φ_t( V_j) ,j=1,…,m, are all the distinct bounded components of ℂ\φ_t(Γ), and we have n_V_t,j=n( φ_t( Γ) ,V_t,j) =n_V_j, A(V_t,j)=A(V_j), t∈[0,1]. Thus A(φ_t( Γ) )=∑_j=1^mn_V_t,jA(V_t,j )=∑_j=1^mn_V_jA(V_j)=A(Γ), t∈[0,1].
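For instance (a quick check of the area formula, with the concrete choice f(z)=1/z, which is only an illustration), consider Σ=( f,Δ) with f(z)=1/z. Then deg_f(∞)=1, and ∂Σ is the unit circle traversed clockwise, so A(∂Σ)=(2/i)∫_0^2π e^iθ( -ie^-iθ) dθ/(1+1)=-2π, and the formula gives A(Σ)=4π-2π=2π, which is indeed the spherical area of f(Δ)={|w|≥1}. For the identity map on Δ one has deg_f(∞)=0 and A(Σ)=A(∂Σ)=2π, the area of a hemisphere.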
For any piecewise smooth curve Γ=( f,∂Δ) with ∞∉Γ and any rotation φ of the sphere S with ∞∉φ( Γ) , A(Γ) need not equals A(( φ∘ f,∂Δ) ). For example, for the family of congruent circles C_x=∂ D( x,π/4) in S\{∞} oriented anticlockwise, we have A(C_x)=A(C_0) when d( x,∞) >π/4, but A(C_x)=4π -A(C_0) when d( x,∞) <π/4. In general we have Let Γ be a piecewise smooth Jordan curve in ℂ oriented anticlockwise. Then we have (i) 0<A(Γ)=A(D)<4π, where D is the Jordan domain in ℂ bounded by Γ. (ii) For any rotation φ of S with ∞∉φ( Γ), A(φ(Γ))={[ A(Γ), if ∞∉φ( D); A(Γ)-4π, if ∞∈φ( D) ]. Γ divides ℂ into two components D and D_∞, where D_∞ contains ∞. It is clear that A( φ(D)) =A(D) and A(φ( D_∞) )=A(D_∞). If φ( D) does not contains ∞, then n_φ(D)=n( φ( Γ) ,φ( D) ) =n( Γ,D) =n_D=1 and A(φ(Γ))=A(φ(D))=A(D)=A(Γ). If ∞∈φ( D) , then ∞∉φ( D_∞) and n( φ( Γ) ,φ( D_∞) ) =-1. Thus we have A(φ( Γ) )=-A( φ( D_∞) ) =-A(D_∞)=A(D)-4π=A(Γ)-4π. Now we prove the following lemma. For j=0,1, let Γ_j=( f_j,∂Δ) be a closed curve on S consisted of a finite number of simple circular arcs, such that Γ_j is contained in some open hemisphere S_j on S with S_j⊂ℂand there exists a rotation φ of S such that φ∘ f_0=f_1. Then A(Γ_1)=A(Γ_0). It is clear that the circle Γ_1=∂ D( 0,π/4) and Γ_2=-∂ D( ∞,π/4) are both in ℂ and oriented anticlockwise, but they do not satisfy the hypothesis of the lemma, since Γ_2 can not be contained in any open hemisphere which does not contain ∞. By the assumption, there exists a family φ_t,t∈0,1], of rotations of S so that φ_t( z) is continuous for ( z,t) ∈ S×0,1], φ_0=id and φ_1=φ, and the family Γ_t( z) =φ_t∘Γ_1(z),z∈∂Δ, never meet ∞. This implies that A( Γ_t) is locally invariant for all t∈0,1]. Thus we have A(Γ_0)=A(Γ_1). Now we can enhance Lemma <ref> as follows. Let Γ=( f,∂Δ)be a closed curve consisted of finitely many simple circular arcs such that ∂Σ is contained in some open hemisphere S_1 in ℂ and L(Γ)<2π. Then the following hold. (i) A(Γ)≤ A(T), where T is a disk in some hemisphere on S with perimeter L, with equality holding if and only if Γ is a circle oriented anticlockwise (say, Γ is a convex circle). (ii) If, in addition, Γ is the boundary of a surface Σ=( f,Δ) ∈ℱ with f(Δ)⊂ S_1⊂ℂ, then A(Σ)≤ A(T)<L(f,∂ U), with equality holding if and only if Σ is a simple disk. Since Γ⊂ S_1⊂ℂ, Γ is an anticlockwise circle if and only it is a convex circle on S. If Γ is of the form Γ =I_1-I_1. Then A(Γ)=0 and (i) holds. If Γ is a Jordan curve and D is the domain bounded by Γ in S_1⊂ℂ, then D⊂ S_1 and |A(Γ)|=A(D) and (i) follows from Lemma <ref> (i). In general, ∂Δ has a partition {{α _ij} _j=1^k_i} _i=1^n of a finite number of subarcs with α_ij^∘∩α_i_1j_1^∘=∅ when i≠ i_1 or j≠ j_1 such that ( f,α_ij) are all simple circular arcs and for each i,α_i1+α_i2 +…+α_ik_i may not be a subarc of ∂Δ, but γ_i=(f,α_i1)+( f,α_i2) +…+( f,α_ik_i) is either a Jordan curve on S or is of the form γ_i=I_i-I_i, where I_i is a simple arc on S. Then we have A(Γ)=∑_i=1^nA(γ_i), and by Lemma <ref> (i) | A(γ_i)| ={[ 0, if γ_i is of the form I_i-I_i; A(D_i), if γ_i is a Jordan curve bounding a domain D_i in S_1 ]. ≤ A(T_i), where T_i is a disk in ℂ with perimeter L(γ_i), with equality holding if and only if D_j is a disk. Then we have, by Lemma <ref>, A(Γ)≤∑_j=1^k| A(γ_j)|≤∑ _j=1^kA(T_j)≤ A(T), where T is a disk in S_1 with L(∂ T)=L(Γ), and A(Γ)=A(T) if and only if k=1 and Γ is a circle oriented anticlockwise. (i) is proved. Assume Γ=( f,∂Δ) is the boundary of a surface Σ=( f,Δ)with f(Δ)⊂ S_1⊂ℂ. 
Then by Lemma <ref> and Lemma <ref> (i), we have A(Σ)=A(Γ)≤ A(T), equality holding if and only if Σ is a simple disk. By Lemma <ref>, we also have A(T)<L(f,∂Δ) and (ii) is proved. Let ∂Δ=α( a_1,a_2) +β( a_2,a_1) be a partition of ∂Δ, and for each j=1,2, let Γ _j=( f_j,∂Δ) be a closed curve consisted of a finite number of circular arcs such that Γ_j is contained in an open hemisphere S_j⊂ℂ, and assume ( f_1,α) =( f_2,α) , L( f_1,β) =L( f_2,β) , L(f_1,β)+q_1q_2<2π, and ( f_2,β) is an SCC arc, where q_j=f(a_j) for j=1,2. Then A( Γ_1) ≤ A(Γ_2), equality holding if and only if (f_1,β)=( f_2,β)when q_1≠ q_2 or (f_1,β)is a convex circle when q_1=q_2. It is permitted that q_1=q_2, and in this case, ( f_j,α) and ( f_j,β) are both closed curves for j=1,2, and the conclusion follows from Lemma <ref> directly. Note that when q_1=q_2, L( f,β_1) =L( f,β_2) <2π. Any closed hemisphere on S can not contain ∞ if it contains 0 in its interior. Thus there exists a rotation φ of S, such that φ( q_1) =0and ∞∉φ(S_j), and thus φ( S_j) ⊂ℂ. Then ( φ∘ f_j,∂Δ) is contained in the open hemisphere φ( S_j) ⊂φ( S_j) ⊂ℂ, and thus by Lemma <ref>, for j=1,2, we have A( ( φ∘ f_j,∂Δ) ) =A( (f_j,∂Δ)) =A(Γ_j). So we may assume q_1=0.If c=( f_2,β) is the line segment q_2q_1=q_2,0, then ( f_1 ,β) is also the line segment q_2,0, and then ∂Γ_1∼∂Γ_2 and the conclusion of the lemma holds with A( Γ_1) =A(Γ_2). Thus we may assume that c is strictly convex. Let C be the circle on S determined by c, and let D be the disk enclosed by C. Then C is strictly convex and D is contained in an open hemisphere S_C on S. Since q_1=f_1(a_1)=f_2 (a_1)=0, we may take S_C so that S_C⊂ℂ. Define Γ_1^'=( f_1^',∂Δ) =( f_1^',α) +( f_1^',β) =C\ c^∘+( f_1,β) . Then L(Γ_1^')=L(C\ c)+L( f_1,β) =L(C)<2π, which together with Lemma <ref> (ii), implies that Γ_1^' is contained in some open hemisphere S_1^' on S. Since 0∈Γ_1^', we have S_1^' ⊂ S\{∞}=ℂ. Therefore Lemma <ref> (i) implies A(Γ_1^')≤ A( D) =A(C), equality holding if and only if Γ_1^' is an anticlockwise circle in S_1^', say, ( f_1,β) =c if c≠ C or Γ_1^'=( f_1,β) is an anticlockwise circle in S_1^' if c=C. Then we have A(Γ_1) =A(( f_1,α) +( f_1,β)) =A(( f_1,α) -C\ c^∘)+A(Γ _1^') ≤ A(( f_1,α) -C\ c^∘)+A(C) =A(( f_1,α) +c)=A( Γ_2) . say A(Γ_1)≤ A(Γ_2), equality holding if and only if (f_1,β) is the arc cor (f_1,β) is a convex circle. Let Σ=(f,Δ)∈ℱ. Assume that the restriction I=-( f,[ -1,1] ) is a simple line segment Ion S, Σ is contained in some open hemisphere on S and L(f,∂Δ)≤2πsinL(I)/2. Then the following hold. (i) There exists θ_1∈(0,π) such that L(f,∂Δ)=L(∂𝔇(I,θ_1,θ_1 )) and A(Σ)≤ A(𝔇(I,θ_1,θ_1)). (ii) A(Σ)=A(𝔇(I,θ_1,θ_1)) if and only if Σ is a simple closed domain congruent to 𝔇 (I,θ_1,θ_1). Since Σ is contained in some open hemisphere on S, we have L(I)<π, and the condition (<ref>) implies that L(f,∂Δ) is not larger than the perimeter of the circle ∂ D( 0,L(I)/2) with diameter L(I), and so L(f,∂Δ)<2π. See Definition <ref> for the notation 𝔇 (I,θ,θ). Geometrically, it is clear that L(θ)=L(∂( 𝔇(I,θ,θ) ) is strictly increasing on [0,π/2] as a continuous function of the angle θ, and thus θ_1 is uniquely determined by L(θ_1)=L(f,∂Δ). (i) is essentially implied in the proof of Theorem 3.8 in <cit.>, based on Theorem 3.6 in <cit.>, though in Theorem 3.8 of <cit.>, I is replaced by the special segment 1,0 and the condition (<ref>) is replaced by L(f,∂Δ)≤2πsind(0,1)/2=2πsinπ/4=√(2)π. But if we use the same method of <cit.> based on Lemma <ref>, we can prove (ii) by the way. 
Recall that for a circular arc c, k( c) ≥0 denotes the curvature. Since f(Δ) is contained in a hemisphere on S, L(f,∂Δ)=2L(I) implies f(Δ)=I, which contradicts Σ∈ℱ. Thus 2L(I)<L(f,∂Δ)<2π. We now use Lemma <ref> to give a new proof. Without loss of generality, assume q_1=f(1)=0. Let q_2=f(-1), γ_1=( f,( ∂Δ) ^+) ,γ_2=( f,( ∂Δ) ^-) , c_1 and c_1^' be the SCC arcs from q_1 to q_2 with L(c_1)=L(f,( ∂Δ) ^+) and L(c_1^')=1/2L(f,∂Δ), and let c_2 and c_2^' be the SCC arcs from q_2 to q_1 with L(c_2)=L( f,( ∂Δ) ^-) and L(c_2^')=1/2L( f,∂Δ) . Then we have L(c_1+c_2)=L(c_1^'+c_2^')=L(γ_1+c_2)=L( c_1+γ_2) =L( f,∂Δ) <2π, and thus Σ and the close curves c_1+c_2,c_1^' +c_2^',γ_1+c_2,c_1+γ_2 are contained in five open hemispheres on S in ℂ=S\{∞} respectively. By Lemmas <ref> and <ref> we have A(Σ)=A(γ_1+γ_2)≤ A(c_1+γ_2)≤ A(c_1 +c_2), with the second equality holding iff γ_1=c_1 and the last equality holding iff γ_2=c_2. We will show that A(c_1+c_2)≤ A(c_1^'+c_2^'), with equality if and only if c_1=c_1^' and c_2=c_2^'. It is clear that c_1+c_2 encloses the lens D=𝔇( I,k_1,k_2) , where k_j is the curvature k( c_j) of c_j,j=1,2; and c_1^'+c_2^' encloses the lens D^'=𝔇( I,k_1^',k_1^') , where k_1^'=k( c_1^') =k( c_2^') . D and D^' are contained in two open hemispheres S_1⊂ℂ and S_1^'⊂ℂof the five. Thus we have A(D)=A(c_1+c_2) and A(D^')=A(c_1^'+c_2^'). We show that A(D)≤ A(D^'), equality holding only if c_1 =c_1^' and c_2=c_2^'. Let C_1^' be the convex circle determined by c_1^'. Then C_1^' is contained in some open hemisphere S_1^''⊂ℂ on S, and c_1^' is the arc of C_1^' from q_1 to q_2. By (<ref>) c_1^' is at most half of C_1^', and then C_1^'\ c_1^'∘ contains the arc 𝔠_2^'=𝔠_2^'( q_2 ,q_3) with d(q_2,q_3)=d( q_1,q_2), and write 𝔠_3=C_1^'\( c_1^'+𝔠_2^') ^∘. When c_1^' is half of C_1^', 𝔠_3={q_3}={q_1}. Then 𝔠_2^' is congruent to c_2^' and we have a partition C_1^'=c_1^'+𝔠_2^'+𝔠_3. When we replace c_1^' by c_1, 𝔠_2^' by the convex arc 𝔠_2=𝔠_2( q_2,q_3) congruent to c_2=c_2( q_2,q_1) , the convex circle C_1^' becomes the Jordan curve C_1=c_1+𝔠_2+𝔠_3, with L(C_1)=L(C_1^')=L(f,∂Δ)+L(𝔠_3)<2π. Then,by Lemma <ref> (i), for the Jordan domain D_1 enclosed by C_1 and the disk D_1^' enclosed by C_1^', we have A(D_1)≤ A(D_1^'), equality holding if and only if C_1 is a circle, say C_1=C_1^', c_1=c_1^', 𝔠 _2=𝔠_2^', which implies c_2=c_2^'. We have the disjoint unions D_1=D_c_1∪q_1q_2^∘∪ D_𝔠_2 ∪q_2q_3^∘∪ D^'', D_1^'=D_c_1^'∪q_1q_2^∘∪ D_𝔠_2^'∪q_2q_3^∘∪ D^'', where D_c_1 and D_𝔠_2 are the disjoint lunes in D_1 of circular arcs c_1( q_1,q_2) and 𝔠 _2( q_2,q_3) , and D_c_1^' and D_𝔠_2^' are the disjoint lunes in D_1^' of circular arcs c_1^'( q_1,q_2) and 𝔠 _2^'( q_2,q_3) . Then we have A( D_c_1∪ D_𝔠_2) =A(D)≤ A(D_c_1 ^'∪ D_𝔠_2^')=A(D^'), equality holding only if c_1^'=c_1 and c_2=c_2^'. Let l be a positive number and for j=1,2, let I_j=q_j1q_j2 be a line segment with L(I_j)<πand let 𝔇_j^'(l_j)be the lune such that ∂𝔇_j^'( l_j) =c_j (l_j)-I_j, where c_j is a convex circular arc from q_j1 to q_j2 with L(c_j(l_j))=l_j. Assume l_1+l_2=l,I_1 and I_2 are fixed, but l_1 and l_2 vary, and for j=1,2,𝔇_j^'(l_j) is contained in some open hemisphere S_j on S. Then we have the following. (A) If the curvature k( c_1(l_1)) of c_1(l_1) is larger than k( c_2(l_2)) when l_1=l_1^0,l_2 =l_2^0=l-l_2^0. Then there exists a δ>0, such that A( 𝔇_1^'(l_1)) +A( 𝔇 _2^'(l_2)) =A( 𝔇_1^' (l_1)) +A( 𝔇_2^'(l-l_1)) is a strictly decreasing function of l_1∈(l_1^0-δ,l_1 ^0+δ). 
(B) If k( c_1(l_1)) =k( c_2(l_2)) when l_1=l_1^0 and both c_1(l_1) and c_2(l_2) are major circular arcs, say, the interior angle of 𝔇_j^'(l_j) at the cusps >π/2, then there exists a δ>0 such that A( 𝔇_1^'(l_1)) +A( 𝔇 _2^'(l-l_1)) >A( 𝔇_1^'(l_1 ^0)) +A( 𝔇_2^'(l-l_1^0)) , when 0<| l_1-l_1^0| <δ. (C) (A) and (B) still hold when q_11=q_12, or q_21=q_22, or both, holds (when q_j1=q_j2, 𝔇 _j^'(l_j) becomes into a disk, say, c_j(l_j) is a circle). Assume k( c_1(l_1^0)) >k( c_2(l_2^0)) =k( c_2(l-l_1^0)) . For sufficiently small ε>0, we may take small arcs c_j,ε=c_j( q_j1,ε,q_j2,ε) in c_j( l_j^0) so that the center points of c_j,ε and c_j( l_j^0) are the same and the chard q_j1,εq_j2,ε has the same length ε for j=1,2, and let 𝔠_j,ε =𝔠_j,ε( q_j1,ε,q_j2,ε) be the convex circular arc from q_j1,εto q_j2,ε so that L(𝔠_j,ε)=1/2( L(c_1,ε) +L(c_2,ε)),j=1,2. When ε is small enough by (<ref>) we have 𝔠 _1,ε^∘⊂𝔇_1^'( l_1 ^0) and 𝔠_2,ε^∘ is on the right side of c_2,ε. Let D_j be the lune enclosed by c_j,ε-q_j1,εq_j2,ε, and let D_j^' be the lune enclosed by 𝔠_j,ε-q_j1,εq_j2,ε, j=1,2. Then by Lemma <ref> we have A(D_1)+A(D_2)<A(D_1^')+A(D_2^'), and when ε is small enough, the domain T_j=( 𝔇_j^'( l_j^0) \ D_j) ∪ D_j^'is a Jordan domain for j=1,2, and we have by (<ref>) A( 𝔇_1^'( c_1( l_1^0) ) ) +A( 𝔇_2^'( c_2( l_2^0) ) ) <A( T_1) +A(T_2). It is clear that ∂ T_j=( [ c_j( l_j^0) \ c_j,ε] ∪𝔠_j,ε) -I_j. Now we replace [ c_j( l_j^0) \ c_j,ε] ∪𝔠_j,ε with the whole SCC arc c_j( l_j( ε) ) from q_j1 to q_j2 with l_j( ε) =L(c_j( l_j( ε) ) )=L( [ c_j( l_j^0) \ c_j,ε] ∪𝔠_j,ε) . Then l_1( ε) <l_1^0and l_2( ε) =l-l_1( ε) >l_2^0. By Lemma <ref>, we have A(𝔇_j^'( c_j( l_j( ε) ) ) >A(T_j),j=1,2, which, with (<ref>) implies ∑_j=1^2A(𝔇_j^'( c_j( l_j( ε) ) ) >∑_j=1^2A(𝔇 _j^'( l_j^0) ). Since l_1( ε) <l_1^0 and l_1( ε) depends on ε continuously when ε>0 is small enough, (<ref>) implies that ∑_j=1^2A(𝔇_j^'( c_j( l_j) ) >∑_j=1^2A(𝔇_j^'( l_j^0) ), when l_1<l_1^0 and l_1^0-l_1 is small enough. Then we have proved that there exists a small enough δ>0 such that (A) holds for l_1∈(l_1^0-δ,l_1^0). Now, assume k(c_1( l_1^0) )=k( c_2( l_2^0) ) and both c_1( l_1^0) and c_2( l_2^0) are major arcs. Then on (l_1^0 -δ,l_1^0], k(c_1( l_1) ) strictly decreases but k( c_2( l-l_1) ) strictly increases when δ is small enough. Thus by (A) for sufficient small δ >0,∑_j=1^2A(𝔇_j^'( c_j( l_j) ) , as a function of l_1, strictly decreases on (l_1^0 -δ,l_1^0) and strictly increase on (l_1^0,l_1^0+δ). On the other hand, it is clear that ∑_j=1^2A(𝔇_j^'( c_j( l_j) ) is a continuous function of l_1 when | l_1-l_1^0| small enough. Hence (B) holds. The proof of (C) is the same as (A) and (B). Let I=pq be a line segment with L(I)<π, l∈(2πsinL(I)/2,2π), and let 𝔇( pq,x,l-x) be the lens enclosed by SCC arcs c_x =c_x( p,q) and c_l-x^'=c_l-x^'( q,p) with L(c_x)=x and L(c_l-x^')=l-x. Assume x_0 is the number such that 𝔇( pq,x_0 ,l-x_0) is a disk and x_0>l-x_0. Then the following hold. (i) If x>l-x and ∠( 𝔇( pq ,x,l-x) ,p) >π, then x∈(l/2,x_0). (ii) The function A( 𝔇( pq,x,l-x) ) is strictly increases for x∈l/2,x_0]. If x>l-x, then x>l/2. If x>x_0, we must have ∠( 𝔇( pq,x,l-x) ,p) <π. Thus by assumption of (i) we have x∈(l/2,x_0), and thus (i) holds true. By Lemma <ref> and Corollary <ref> (B), A( 𝔇( pq,x,l-x) ) assumes the minimum when x=l/2 and the maximum when x=x_0. Either x=l/2 or x=x_0 is the condition such that c_x and c_l-x^' have the same curvature, say, when x≠ l/2,x_0, the two circular arc of 𝔇 ( pq,x,l-x) have distinct curvature. 
On the other hand the curvature continuously depends on xand when x increase from l/2 a little, the curvature of c_x decrease a little (note that c_l/2 is a major circular arc and so is c_x for x>l/2). Thus when x increases in (l/2,x_0), the curvature k( c_x)of c_x decreases and the curvature of c_l-x^' increases. Then (ii) follows from Corollary <ref> (A). The following lemma is a direct consequence of Lemma <ref>. Let Σ=( f,Δ^+) ∈ℱ be a surface such that I=( f,[-1,1]) is a line segment on S and Σ is contained in some open hemisphere S^' on S. Assume L( ∂Σ) <2π. Then A(Σ)≤ A(𝔇^'(I,θ)) for the unique θ with L(∂𝔇^'(I,θ ))=L(∂Σ), with equality if and only if Σis a simple closed domain congruent to 𝔇^'(I,θ ). Let L<2π be a positive number, x∈0,L] and T_x and T_L-x be two disks in some open hemisphere on S with L(∂ T_x)=x and L(∂ T_L-x)=L-x. Then 2A(T_L/2)≤ A(T_x)+A(T_L-x)≤ T_L and A(T_x)+A(T_L-x) strictly decreases on [0,L/2]. This follows from Corollary <ref> (C). But here we can give a simpler proof. Let f(x)=A(T_x)+A(T_L-x). Then we have f(x)=4π-√(4π^2-x^2)-√(4π^2-( L-x) ^2), and f^'(x) =x/√(4π^2-x^2)+( x-L) /√(4π^2-( L-x) ^2) =x√(4π^2-( L-x) ^2)+( x-L) √(4π^2-x^2)/√(4π^2-x^2)√(4π^2-( L-x) ^2). Thus we have f^'( x) <0 on [0,L/2), and so f(x) strictly decreases on x∈0,L/2]. Let c_n=c_n( q_n^',q_n^'') be a sequence of SCC arcs convergent to an SCC arc c_0 =c_0( q_0^',q_0^'') with q_0^'≠ q_0^''. Assume that either c_0 is straight with L(c_0)>π or c_0 is strictly convex. Then c_0+q_0^''q_0^' is a convex Jordan curve and, for sufficiently large n, c_n+q_n^'' q_n^' is also a convex Jordan curve and converges to c_0 +q_0^''q_0^'. Thus for sufficiently large n, the lune enclosed by c_n+q_n^''q_n^' is convex and converges to the lune enclosed by c_0+q_0^''q_0^'. This is clear when c_0 is strictly convex. Assume c_0 is straight and L(c_0)>π. Then d( q_0^',q_0^'') <π and thus for sufficiently large n, d( q_n^',q_n^'') <π and q_n^'q_n^'' converges to q_0^'q_0^''. It is clear that c_0+q_0^''q_0^' is a great circle and thus, by the assumption of convergence, c_n+q_n^''q_n^'converges to c_0+q_0^''q_0^'. § THE MONOTONICITY OF THE FUNCTION H_1(Δ) For a line segment I on S with L(I)=δ<π, and a positive number θ∈0,π] define A(δ,θ,θ)=A(𝔇( I,θ,θ) ), and L(δ,θ,θ)=L(∂𝔇( I,θ,θ) ), where 𝔇( I,θ,θ) is the lens defined in Definition <ref>. Then for a constant A_0define h(A_0,δ,θ,θ)=A_0+( q-2) A(δ ,θ,θ)/L(δ,θ,θ)=( q-2) A_0/q-2+A(δ,θ,θ)/L(δ,θ,θ). For δ∈(0,π) and θ∈0,π/2] we have L(δ,θ,θ)=4tanδ/2/√(sin^2θ +tan^2δ/2)[ arctan√(sin^2θ+tan ^2δ/2)/cosθ] , A(δ,θ,θ)=4θ-L(δ,θ,θ)sinθ/tanδ/2, and h(A_0,δ,θ,θ)=( A_0/4+( q-2) θ) √(sin^2θ+tan^2δ/2) /tanδ/2[ arctan√(sin^2θ+tan^2 δ/2)/cosθ] -( q-2) sinθ/tanδ/2. Let a_δ be the positive number such that δ=∫_0^a_δ2dx/1+x^2=2arctan a_δ, Then a_δ=tanδ/2, and we have A(δ,θ,θ)=A(𝔇(0,a_δ,θ,θ) and L(δ,θ,θ)=L(∂𝔇(0,a_δ ,θ,θ). We assume θ∈(0,π/2]. Then, the line segment 0,a_δ on the sphere S divides 𝔇(δ,θ,θ) into two symmetric lunes 𝔇 ^'(0,a_δ,θ) and 𝔇^'(a_δ,0,θ), and the upper lune is 𝔇 ^'(a_δ,0,θ), as in Figure <ref> which is in the plane ℂ. As shown in the plane figure, let α be the circular arc on ℂ of ∂𝔇^' (a_δ,0,θ) from a_δ to 0, let c_θ with Imc_θ≤0 be the center of the circle on ℂ containing α, and let c_θ^'=2c_θ. Then |c_θ^'-c_θ|=|0-c_θ|=|c_θ-a_δ|, and thus c_θ^' is on the circle containing α, and the triangle in ℂ with vertices 0,a_δ and c_θ^' is a right triangle whose interior angle at c_θ^' equals θ. Thus we have |c_θ^'|=a_δ/sinθ. 
On the other hand, for any point z∈α, the triangle with vertices 0,c_θ^' and z is also a right triangle, as in the figure, whose angle at c_θ^' has value θ- z. Thus |z|=sin(θ-t)|c_θ^'|=a_δsin(θ-t)/sinθ, where t= z. Then, we obtain a parametric expression of the circular path α: α(t)=a_δsin(θ-t)/sinθe^it,t∈0,θ], and |dα(t)|=a_δ|-e^itcos(θ-t)+ie^itsin(θ -t)|dt/sinθ=a_δdt/sinθ,t∈0,θ]. Therefore, we have L(δ,θ,θ) =2L(α)=2∫_α2|dz|/1+|z|^2 =∫_0^θ4|dα(t)|/1+|α(t)|^2 =4∫_0^θa_δsinθ/sin^2θ+a_δ^2sin^2(θ-t)dt =4∫_0^θa_δsinθ/sin^2θ+a_δ^2sin^2xdx, and so (<ref>) follows from L(δ,θ,θ) =-4a_δ/√(sin^2θ +a_δ^2)∫_0^θdsinθ x/√(sin^2θ+a_δ^2)/[ sinθ x/√(sin^2θ+a_δ^2)] ^2+1 =. -4a_δ/√(sin^2θ+a_δ^2) arctansinθ x/√(sin^2θ+a_δ^2) | _0^θ =2π a_δ/√(sin^2θ+a_δ^2) -4a_δarctancosθ/√(sin^2θ+a_δ ^2)/√(sin^2θ+a_δ^2) =4a_δarctan√(sin^2θ+a_δ^2)/cosθ/√(sin^2θ+a_δ^2) =4tanδ/2/√(sin^2θ+tan^2δ/2)arctan√(sin^2θ+tan^2δ/2)/cosθ It is then clear that (<ref>) will follow from (<ref>), the first line of (<ref>) and A(δ,θ,θ) =2A(𝔇^'(0,a_δ ,θ))=2 ∬_𝔇^'(0,a_δ ,θ) 4dxdy/( 1+|z|^2) ^2 =2∫_0^θdt∫_0^|α_θ(t)|4rdr/( 1+r^2) ^2=2∫_0^θ( 2-2/1+|α_θ(t)|^2) dt =4θ-4∫_0^θdt/1+|α_θ(t)|^2 =4θ-sinθ/a_δ∫_0^θ4|dα _θ(t)|/1+|α_θ(t)|^2 =4θ-sinθ/tanδ/2L(δ,θ,θ). Then h(A_0,δ,θ,θ) =A_0+( q-2) A(δ,θ)/L(δ,θ) =A_0+4θ( q-2) -( q-2) sinθ/tanδ/2L(δ,θ,θ)/L(δ ,θ,θ) =A_0+4θ( q-2) /L(δ,θ,θ) -( q-2) sinθ/tanδ/2 =( A_0+4θ( q-2) ) √(sin ^2θ+tan^2δ/2)/4tanδ/2arctan√(sin^2θ+tan^2δ/2)/cosθ-( q-2) sinθ/tanδ/2 =( A_0/4+θ( q-2) ) √(sin^2θ+tan^2δ/2)/tanδ/2 arctan√(sin^2θ+tan^2δ/2)/cosθ -( q-2) sinθ/tanδ/2. It is clear that when the boundary length of 𝔇( I,θ,θ) is fixed while θ increases, δ=L(I) has to decrease. If 0<δ_2<δ_1<π, θ_1<θ_2≤π/2 and L( δ_1,θ_1,θ_1) =L(δ_2,θ_2,θ_2), then A(δ_1,θ_1,θ_1)<A(δ_2,θ_2,θ_2). We fix θ_1 and L(δ_1,θ_1,θ_1)=L(δ _2,θ_2,θ_2)=L, and show that the conclusion holds when θ_2-θ_1 is small enough. Assume 𝔇(AB,θ_1,θ_1) with L(AB)=δ_1 is given as in Figure <ref>, in which A and B are the two cusps. For a pair of points C and C^' on ∂𝔇(AB ,θ_1,θ_1) which are symmetric to AB, we let γ_CC^' be the circular arc with chord CC^' and length equal to the part C^'AC of ∂𝔇(AB,θ_1,θ_1). It is clear that for sufficiently small ε, there exists a pair of C and C^' such that γ_CC^' intersects AB at A^' with d(A^',B)=δ_1-ε as in Figure <ref> (2). Let D_CBC^'A^'C be the domain enclosed by the Jordan curve consisted of γ_CC^' and the part of ∂𝔇 (AB,θ_1,θ_1) under the segment CC^'. Then by Lemma <ref> we have A(δ_1,θ_1,θ_1)≤ A(D_CBC^'A^'C), equality holding only if C^'AC is a circular arc. But by the hypothesis θ_1<θ_2≤π/2, C^'AC can not be circular, and so the equality can not hold. It is clear that the boundary length of the domain D_CBC^'A^'C equals L, and by Lemma <ref> the domain 𝔇(A^' B,θ_2,θ_2) with L(𝔇(A^'B ,θ_2,θ_2))=L has larger area than D_CBC^'A^' C. Thus we have A(δ_1,θ_1,θ_1)<A(D_CBC^'A^'C )<A(δ_1-ε,θ_2,θ_2). Since ε is arbitrary, the result follows. (i) For δ∈δ_E_q,π), let D(δ) be a closed disk on S with diameter δ and write A(δ)=A(D(δ )), L(δ)=L(∂ D(δ)). Then . c]ll h( δ) =R(D(δ))+4πL(δ) =4π-4πn( D(δ) )+( q-2) A(δ)L(δ) =h(4π( 1-n( D(δ) ) ),δ,π/2,π/2), . where h( A_0,δ,θ,θ) is defined in (<ref>). (ii) For any open interval I^∘ in [δ_E_q,π), if n( D( δ) ) is a constant for each δ∈ I^∘, then h( δ) =R(D(δ))+4π/L(δ) is real analytic on I^∘ with h^'( δ) =-( q-2n( D(δ) )) cosδ/2+q-2/2sin^2δ/2, and the following hold. (ii1) If n( D( δ) ) =0 on I^∘ and δ_E_q≥2arccosq-2/q, then h( δ) strictly increases on I^∘∩δ_E_q ,π). 
(ii2) If n( D( δ) ) =0 on I^∘ and δ_E_q<2arccosq-2/q, then h( δ) strictly decreases on I^∘∩δ_E_q ,2arccosq-2/q] and strictly increases on I^∘∩2arccosq-2/q,π]. (ii3) If n( D( δ) ) ≥1 on I^∘, then h( δ) strictly increases on I^∘. (ii4) h( δ) cannot assume the maximum value for every δ∈ I^∘. By definition, (<ref>) holds trivially. It is clear that h(δ) =R(D(δ))+4π/L(δ)=h( 4π-4πn( D( δ) ) ,δ,π/2,π/2) =4π-4πn( D(δ) )+( q-2) A(δ)/L(δ) =4π-4πn( D(δ) )+2π( q-2) (1-cosδ/2)/2πsinδ/2 =q-2n( D(δ) )-( q-2) cosδ/2/sinδ/2. Thus we have (<ref>). If h(δ) assume a maximum value at some point δ∈ I^∘, then we must have -( q-2n( D(δ) )) cosδ/2+q-2=0. Therefore, (ii1)–(ii3) follow from (<ref>) directly. Now, (ii4) follows from (ii1)–(ii3). For any pair of constants δ∈(0,π) and A_0, let h( A_0,δ ,θ,θ) be given by (<ref>) with expression (<ref>). Then h(θ)=h( A_0,δ,θ,θ) strictly increases in an interval [0,θ_0] for some θ_0>0. Since δ∈(0,π), we have tanδ/2>0, and thus it is clear that the functions h( θ) is real analytic in a neighborhood I_0 of 0in (-1,1). It is clear that d/dθ√(sin^2θ+tan^2δ/2)/tanδ/2[ arctan√(sin^2θ+tan^2 δ/2)/cosθ] =0 at θ=0. Since δ∈(0,π), we have by (<ref>), h^'( 0) =q-2/δ/2-q-2/tanδ/2>0, and so the existence of θ_0 follows. For δ∈( 0,π) and for a constant A_0, let h(δ)=max_θ∈0,π/2]h(A_0,δ ,θ,θ), where h(A_0,δ,θ,θ) is given by (<ref>) with expression (<ref>). Let δ_0∈(0,π) and assume h( δ_0) =h( A_0,δ_0,θ _0,θ_0) for some θ_0∈( 0,π/2). Then for each δ<δ_0 which is sufficiently close to δ_0, we have h( A_0,δ_0,θ_0,θ_0) <h( A_0 ,δ,θ_δ,θ_δ) , where θ_δ is determined by L(∂𝔇( δ_0,θ_0,θ_0) )=L(∂𝔇( δ,θ_δ,θ_δ) ), and thus h( δ) =max_θ∈0,π/2]h(A_0 ,δ,θ,θ)>h( δ_0) . It is clear that for sufficiently small h>0, there exists θ _1^'∈(θ_0,π/2) such that L(δ_0-h,θ_1^',θ_1^')=L(δ_0 ,θ_0,θ_0), and then by Lemma <ref> we have h(A_0,δ_0,θ_0,θ_0) =A_0/L(δ _0,θ_0,θ_0)+( q-2) A(δ_0,θ _0,θ_0)/L(δ_0,θ_0,θ_0) <A_0/L(δ_0-h,θ_1^',θ_1^' )+( q-2) A(δ_0-h,θ_1^',θ _1^')/L(δ_0-h,θ_1^',θ_1^') =A_0+( q-2) A(δ_0-h,θ_1^' ,θ_1^')/L(δ_0-h,θ_1^',θ_1^') ≤max_θ∈0,π/2]A_0+( q-2) A(δ_0-h,θ,θ)/L(δ_0-h,θ,θ) =h(δ_0-h). Let δ_E_q=min{ d(𝔞_i,𝔞 _j):𝔞_i∈ E_q,𝔞_j∈ E_q,𝔞_i ≠𝔞_j} . Then δ_E_q≤2π/3, equality holding only if q=3 and 𝔞_1,𝔞 _2,𝔞_3 are on a great circle on S and d(𝔞 _i,𝔞_j)=2π/3 for each pair of i and j with i≠ j. Assume δ_E_q>2π/3. Then we may assume 𝔞 _2,𝔞_3,…,𝔞_q are contained in the open disk D complementary to the disk d(z,𝔞_1)≤2π/3. But it is clear that the diameter of D is 2π/3 and since q≥3 we have δ_E_q<2π/3. This is a contradiction. Assume δ_E_q=2π/3. Then we may assume d( 𝔞_1,𝔞_2) =2π/3 and consider the great circle C on S passing through 𝔞_1 and 𝔞 _2. If some 𝔞_j of 𝔞_3,…,𝔞_q is contained in S\ C, then we have d( 𝔞 _j,{𝔞_1,𝔞_2}) <2π/3, contradicting to δ_E_q=2π/3. Thus {𝔞_3,…,𝔞_q}⊂ C, q=3, and 𝔞_1,𝔞_2,𝔞_3 divide C equally. For the number δ_E_q, there exists a disk T_0⊂ S with perimeter 3δ_E_q and T_0∩ E_q=∅. Let 𝔞_1 and 𝔞_2 be two points of E_q such that d( 𝔞_1,𝔞_2) =δ_E_q. Then there exists a convex disk T_0 on S whose boundary contains 𝔞_1, 𝔞_2 and some a∈ S such that 𝔞_1,𝔞_2,a divide ∂ T_0 equally. It is clear that T_0 is the desired disk. Let T be a convex disk on S with T∩ E_q =∅. Then H( T) is strictly increase as a function of L=L(∂ T)∈(0,min{L_0,2π})and H( T) ≤( q-2) , where L_0=max{ L: [ there is a convex open disk T on S with; T∩ E_q=∅ and L(∂ T)=L} ]}≥3δ_E_q. 
This follows from Lemma <ref>, n( T) =0, and that, as a function of L=L(∂ T), H(T) =( q-2) A(T)/L=( q-2) ( 2π-√(4π^2-L^2))) /L =( q-2) L/( 2π+√(4π^2-L^2))) strictly increase for L∈(0,min{L_0,2π})] and H( T) ≤( q-2) on (0,min{L_0,2π}]. § THE RIEMANN HURWITZ FORMULA AND BRANCH POINTS The simplest version of Riemann Hurwitz formula is that for any BCCM f:S→ S with degree d B_f=∑_z∈ Sb_f( z) =∑_z∈ S( v_f( z) -1) =2d-2. Recall that B_f,b_f( z)and v_f( z) are defined in Remark <ref> (D) and (E). The formula implies that f has exactly 2d-2 branch points on S. This formula implies the following directly. Let Σ=(f,S) be a surface, say, f:S→ S is a d to 1 BCCM. Then ∑_v=1^qn̅(Σ,𝔞_v)=qd-∑_z∈ E_q b_f(z)≥ qd-∑_z∈ Sb_f(z)=( q-2) d+2, and R(Σ)=( q-2) A(Σ)-4π∑_v=1^qn (Σ,𝔞_v)≤-8π. both equality holding iff C_f^∗=C_f( S\ f^-1 (E_q)) =∅, say, CV_f⊂ E_q (see Remark <ref> (D) for the notations C_f^∗ and CV_f). In fact, the first two parts of (<ref>) are trivial, and the third part of (<ref>) follows from the equation (<ref>). Then (<ref>), together with the area formula A(Σ)=4π d, implies (<ref>). Let Σ=( f,Δ) ∈ℱ such that ∂Σ is a Jordan curve and let T be the closed domain enclosed by ∂Σ on S. Then for the closed surface Σ _0=( f_0,S) which is obtained by sewing Σ and the domain S\ T along ∂Σ, we have R(Σ)=R(Σ_0)+R(T)+8π≤ R(T), with equality holding if ond only if Σ∈ℱ_r, where ℱ_r={( f,U) ∈ℱ:C_f^∗=C_f( U\ f^-1(E_q)) =∅}. The first equality of (<ref>) follows from Corollary <ref>. By Lemma <ref>, R(Σ_0)+8π≤0, equality holding iff CV_f_0 ⊂ E_q. On the other hand, by Corollary <ref> we have CV_f_0=CV_f. Thus CV_f_0⊂ E_q iff CV_f⊂ E_q, say, Σ∈ℱ_r. Let K be a Jordan domain on S, let Σ=( f,U)be a surface in 𝐅 such that f|_∂ U:∂ U→∂ K is an orientation preserving CCM with degree d and that f covers K by d_0 times[This only means that each point of K has d_0 inversers, counted with multiplicity.]. Then the following hold. (i) In the case E_q⊂ K, #f^-1(E_q)≥( q-2) d_0+d+1. (ii) In the case E_q∩ K=q-1 and E_q∩∂ K=∅ #f^-1(E_q)≥( q-2) d_0+1. By the convention, ∂ K is oriented in the way that K is on the left of ∂ K, and so d≤ d_0, with equality only if f( U) ⊂ K. This is a consequence of Riemann-Hurwitz formula (<ref>). We may extend f to be a BCCM F:S=ℂ→ S so that F restricted to ℂ\U is a BCCM onto S\ K and that F contains no branch point in ℂ \U when d=1, or contains only one branch point p in ℂ\U with v_f(p)=d>1. (i) If E_q⊂ K, then #f^-1(E_q) =#F^-1(E_q)=d_0q-∑_z∈ F^-1(E_q)( v_F(z)-1) ≥ d_0q-∑_z∈ U( v_F(z)-1) =d_0q-[ ∑_z∈ℂ( v_F(z)-1) -∑_z∈ℂ\U( v_F (z)-1) ] =d_0q-[ ( 2d_0-2) -( d-1) ] =( q-2) d_0+d+1. (ii) If E_q∩ K=q-1, then we may construct F so that F(p)∈ E_q and v_F(p)=d for some p∈ℂ\U, and then we have #f^-1(E_q) =#F^-1(E_q)-1=[ d_0q-∑_z∈ F^-1 (E_q)( v_F(z)-1) ] -1 ≥[ d_0q-( 2d_0-2) ] -1=( q-2) d_0+1. Now, we apply the previous lemma to prove the following lemma and theorem. Let Σ=( f,Δ) ∈𝐅 and assume that there exists a Jordan curve Γ in S\ E_q such that for the two components K_1 and K_2 of S\Γ, K_1is contained in the lower half sphere D( 0,π/2) on S, #E_q∩ K_2≥ q-1, and ∂Σ⊂ K_1. Then R(Σ)≤( q-2) A( ∂Σ) , and moreover, if in addition n( Σ) ≠0, then R(Σ)≤( q-2) A( ∂Σ) -4π. Since f has a finite number of branch values, we may assume that Γ contains no branch value of f. For otherwise, we can replace Γ with another Jordan curve Γ^' close to Γ enough so that Γ^' contains no branch value of f and satisfies all the hypothesis. Firstly, consider the case ∞∉ f(Δ). 
Then by Lemma <ref> A(Σ)=A( ∂Σ) +4π_f( ∞) =A( ∂Σ) , and then R(Σ)=( q-2) A(∂Σ)-4πn( Σ) , which implies the desired result when ∞∉ f(Δ). Now assume ∞∈ f(Δ) and let _f( ∞) =d_0. Then d_0is a positive integer and f^-1(Γ) is not empty and is consisted of a finite number of disjoint Jordan curves in Δ. Then f^-1(Γ)∩∂Δ=∅, and f^-1(Γ) divides Δ into a finite number of domains, and we let V be the component of Δ\ f^-1(Γ) with ∂Δ⊂∂ V. Then f(V)∩Γ=∅ and f(V)⊂ K_1, and then Δ\V is consisted of a finite number of Jordan domains U_j,j=1,…,k, such that for each j, U_j ⊂Δ, U_i∩U_j=∅ if i≠ j, and f|_∂ U_j is a CCM from ∂ U_j onto Γ. We assume f|_U_j covers K_2 with degree d_j and f|_∂ U_j covers Γ with degree d_j^',j=1,…,k. Then we have d_0=d_1+⋯+d_k. Note that[It is possible that some Jordan curve of f^-1(Γ )encloses another Jordan curve of f^-1(Γ) in Δ. Thus U_1,…,U_k maybe Jordan domains enclosed by just a part of Jordan curves of f^-1(Γ). If some U_j contains some Jordan curve of f^-1(Γ), then d_j>d_j^' and f(U_j)⊃ S.] d_j>d_j^' when f|_U_j⊃ S and d_j =d_j^' when f|_U_j=K_2. We write d_0^'=d_1^'+⋯+d_k^'. By Lemma <ref> we have A(Σ)=A(∂Σ)+4π d_0, where A(∂Σ)=2/i∫_∂Σw dw/1+|w|^2. The restriction (f,U_j),∂ K_2,d_j,d_j^' satisfies the hypothesis of Lemma <ref> (as (f,U),∂ K,d_0,dthere). Thus we have n( U_j) =#f^-1(E_q)∩ U_j≥( q-2) d_j+1, for j=1,…,k. Therefore n( Σ) ≥#( f^-1(E_q)∩∪_j=1^kU_j) =∑_j=1^k#( f^-1(E_q)∩ U_j) ≥∑_j=1^k( ( q-2) d_j+1) =( q-2) d_0+k>( q-2) d_0+1. In consequence we have by (<ref>) R(Σ) =( q-2) A(Σ)-4πn( Σ,E_q) ≤( q-2) [ A( ∂Σ) +4π d_0] -4π[ ( q-2) d_0+1] =( q-2) A( ∂Σ) -4π. This completes the proof. Let L be a positive number with L≤2δ_Eq, let Σ=( f,Δ) ∈𝐅 with L( ∂Σ) ≤ Land let T_0 be a closed disk in some hemisphere on S with perimeter L and T_0∩ E_q=∅. Then H(Σ)≤ H(T_0) with equality holding iff Σ is a simple disk with f(Δ)∩ E_q=∅. By Lemma <ref>, T_0 must exist. By Rado's theorem, ∂Σ is contained in some open hemisphere S^' on S. We may assume S^'=D(0,π/2), otherwise we replace the surface Σ and the set E_q with Σ^'=( φ∘ f,Δ) and E_q^'=φ (E_q)so that ∂Σ^'⊂ D(0,π/2), where φ is a rotation of S so that φ( ∂Σ) ⊂ D(0,π/2). Let A be the convex hull of ∂Σ in D(0,π/2). Then by the assumption we have L(∂ A)≤ L(∂Σ)≤ L and we can conclude that A contains at most two points of E_q. If A contains two points 𝔞 and 𝔟 of E_q, then we have L=2δ_Eq, ∂Σ=𝔞𝔟 +𝔟𝔞 and ( ∂Σ) ∩ E_q={𝔞,𝔟}, and then we can sew Σ along 𝔞𝔟 to obtain a surface Σ _0=( F,S) such that F is a BCCM from S onto S. Then n(Σ_0)=n( Σ) +2, A(Σ_0)=A(Σ) and thus we have by Lemma <ref> R(Σ)=R(Σ_0)+8π≤0<R(T_0)=( q-2) A(T_0), which implies (<ref>). Assume A contains at most one point of E_q. Then there exists a Jordan curve Γ in D( 0,π/2) \ A whose interior domain contains A such that for the doubly connected domain V between Γ and ∂ A in D( 0,π/2) , V\∂ A contains no point of E_q. Let K_1 be the component of S\Γ which contains A and K_2 the other component. Then we have ∞∈ K_2, K_1∪ K_2=S\Γ, and K_2 contains at least q-1 points of E_q. Thus by Lemma <ref> and Lemma <ref> (i) we have R(Σ)≤( q-2) A( ∂Σ) ≤( q-2) A(T_1), where T_1 is a disk in S\ E_q with perimeter L( ∂Σ), which with Lemma <ref>, implies (<ref>). This completes the proof. § THE SPACES ℱ,ℱ(L),ℱ_R,ℱ _R(L),𝒞( L,M) ,𝒞^∗( L,M) ,ℱ(L,M),ℱ_R(L,M) Continuing Definition <ref>, in which ℱ and ℱ( L) have been defined, we introduce some subspaces of ℱ and ℱ( L) . 
(a) 𝒞(L,m) denotes the subspace of ℱ(L) such that Σ=( f,U) ∈𝒞(L,m) if and only if ∂ U and ∂Σ have partitions ∂ U=α_1+α_2+⋯+α_m, ∂Σ=c_1+c_2+⋯+c_m, such that c_j=( f,α_j) is an SCC arc on S for j=1,…,m (it is permitted that some c_j may be a whole circle). In this case, the partitions (<ref>) and (<ref>) are both called 𝒞( L,m)-partitions of ∂Σ. (b) 𝒞^∗(L,m) denotes the subspace of 𝒞(L,m) such that Σ=( f,U) ∈𝒞^∗(L,m) if and only if ∂ U and ∂Σ have partitions (<ref>) and (<ref>) such that c_j=( f,α_j) is an SCC arc on S and f has no branch point in α_j^∘∩ f^-1(E_q), for every j=1,…,m. In this case, the partitions (<ref>) and (<ref>) are both called 𝒞^∗( L,m)-partitions of ∂Σ. (c) ℱ(L,m) denotes the subspace of 𝒞^∗(L,m) such that Σ=( f,U) ∈ℱ(L,m) if and only if ∂ U and ∂Σ have partitions (<ref>) and (<ref>) and the following (i)–(iii) hold. (i) Each c_j=( f,α_j) is an SCC arc. (ii) For each j=1,2,…,m, f has no branch point in α_j^∘. (iii) For each j=1,2,…,m, f restricted to a neighborhood D_j of α_j^∘ in Δ is a homeomorphism onto a one-sided neighborhood T_j∪ c_j^∘ of c_j^∘, where T_j is a Jordan domain enclosed by c_j and another circular arc c_j^', say, ∂ T_j=c_j-c_j^' and c_j^' is a circular arc. The partitions (<ref>) and (<ref>) are both called ℱ(L,m)-partitions of ∂Σ if, in addition, they satisfy (i)–(iii). (d) ℱ_r, as defined in (<ref>), is the subspace of ℱ such that Σ=( f,U) ∈ℱ_r if and only if f has no branch point in U\ f^-1(E_q), and we define ℱ_r(L)=ℱ_r∩ℱ(L), ℱ_r(L,m)=ℱ_r∩ℱ(L,m). Note that 𝒞^∗( L,m)-partitions and ℱ( L,m)-partitions involve information about the interior of the corresponding surfaces. But 𝒞( L,m)-partitions do not involve the interior of the surface, and thus 𝒞( L,m)-partitions can be defined for closed curves which may not be boundary curves of surfaces. The conditions (ii) and (iii) in the definition are equivalent by Lemma <ref> (also see Lemma <ref> later in this section); we list (iii) just for emphasis. On the other hand it is clear that 𝒞( L,m) ⊃𝒞^∗( L,m) ⊃ℱ( L,m) ⊃ℱ_r( L,m), 𝒞( L,m+1) ⊃𝒞( L,m), 𝒞^∗( L,m+1) ⊃𝒞^∗( L,m), ℱ( L,m+1) ⊃ℱ( L,m), ℱ_r( L,m+1) ⊃ℱ_r( L,m), and all these inclusions are strict. Moreover, we have ∪_m=1^∞𝒞( L,m) =∪_m=1^∞𝒞^∗( L,m) =∪_m=1^∞ℱ( L,m) =ℱ( L), since for every Σ=( f,U) ∈ℱ( L), ∂Σ has an ℱ(L,m)-partition for some m∈ℕ. A curve ( f,∂Δ) is said to be parametrized by length if for each θ∈[0,2π] and the arc Θ_θ={e^√(-1)t:t∈[0,θ]}⊂∂Δ, L(f,Θ_θ)=(θ/2π)L(f,∂Δ). When a closed curve Γ=( φ,∂Δ) is parametrized by length and has 𝒞(L,m)-partitions (<ref>) and (<ref>), the partitions are uniquely determined by the initial point of α_1 and the lengths of c_j, j=1,2,...,m, since we have L(α_1):L(α_2):…:L(α_m)=L(c_1):L(c_2):...:L(c_m). Assume Σ=( f,Δ) ∈ℱ( L,m) and ∂Σ is parametrized by length. Then for any ℱ( L,m)-partition ∂Δ=α_1( a_1,a_2) +⋯+α_m( a_m,a_1) of ∂Σ with a_1=1 and the corresponding ℱ( L,m)-partition ∂Σ=c_1( q_1,q_2) +⋯+c_m( q_m,q_1), we have a_j=e^√(-1)θ_j for some θ_j such that 0=θ_1<θ_2<…<θ_m<θ_m+1=2π, and θ_j=2π L(f,Θ_θ_j)/L( ∂Σ), j=1,2,…,m+1. Then for the homeomorphism φ:[1,m+1]→[0,2π] which maps each interval [j,j+1] linearly onto [θ_j,θ_j+1], we have a_j=e^√(-1)φ( j), and for x∈(j,j+1), a_x=e^√(-1)φ( x) is a point in α_j, and q_x=f(a_x) is a point in c_j. For example, a_1+1/2 is the middle point of α_1 and q_1+1/2 is the middle point of c_1, q_m+1/2 is the middle point of c_m, while q_m+1=q_1.
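For instance (a purely illustrative choice of lengths), if m=3 and L(c_1)=L(c_2)=L(∂Σ)/4 and L(c_3)=L(∂Σ)/2, then θ_1=0, θ_2=π/2, θ_3=π, θ_4=2π, so a_2=e^√(-1)π/2, a_3=-1, and the middle point a_1+1/2 of α_1 is e^√(-1)π/4.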
Let Σ=( f,Δ) ∈ℱ( L,m) , {a,b}⊂∂Δ, α be an arc of ∂Δ from a to b, oriented by ∂Δ, β be a simple arc in Δ from a to b, and assume that the following (a)–(c) hold. (a) (<ref>) and (<ref>) are ℱ( L,m)-partitions of ∂Σwith c_j=( f,α_j) ,j=1,…,m. (b) α∩β contains a connected component I=I(a^' ,b^'), oriented by ∂Δ, with b^'≠ a,b, say, b^'∈α^∘∩β^∘. (c) ( f,-β) is an SCC arc. Then the following hold. (i) b^'∈{a_j}_j=1^m. (ii) If a^'≠ a, then a^'∈{a_j}_j=1^m, and I is either a point of {a_j}_j=1^m, or an arc of ∂Δ which can be written as I=α_j_1( a_j_1,a_j_1+1) +α_j_1+1( a_j_1+1,a_j_1+2) +⋯+α_j_1+k( a_j_1 +k,a_j_1+k+1) for some j_1≤ m and some 0≤ k<m (with a_j=a_j-m for each j with j>m). (iii) If b^' is not a branch point of f, then ( f,∂Δ) is not convex at b^'. (iv) If, in addition to (a)–(c), ( f,-β) is strictly convex, then a^'=b^'∈{a_j}_j=1^m, say, I is a point in {a_j}_j=1^m, and moreover β^∘∩∂Δ⊂{a_j}_j=1^m. Let γ be a simple arc on S. Then for any x∈γ^∘ there exist a positive number δ_x,γand a subarc γ^' of γ in D(x,δ_x,γ) such that ∂γ^'⊂∂ D(x,δ_x,γ) and x∈γ ^'∘⊂ D(x,δ_x,γ). Hence γ^' divides D(x,δ_x,γ) into two Jordan domains D^l( x,δ _x,γ) and D^r( x,δ_x,γ) , on the left and right hand side of γ^' (by convention, γ^' inherits the orientation of γ). From this observation, we can introduce the following definition. Let γ_1 and γ_2 be two arcs on S with γ_1^∘∩γ_2^∘≠∅and assume γ_2 is simple. We say that γ_1 is on the left hand side of γ_2, if for each x∈γ_1^∘∩γ_2^∘, x has a neighborhood in γ_1 contained in D^l( x,δ_x,γ_2) \∂ D( x,δ _x,γ_2) . When we replace D^l( x,δ _x,γ_2) with D^r( x,δ_x,γ_2) for every x∈γ_1^∘∩γ_2^∘, we obtain the definition of that γ_1 is on the right hand side of γ_2. By this definition, γ_2 is on the left hand side of itself, and on the right hand side as well. Let γ_1 and γ_2 be two simple arcs on S and assume γ_1^∘∩γ_2^∘≠∅. When γ_1 is on the left (right) hand side of γ_2, and γ _2 is on the right (left) hand side of γ_1, we say that the direction of γ_1 is determined by γ_2. The proof of the previous lemma is similar as the proof of Lemma 5.4 in <cit.>, which is based on an observation stated after that lemma in <cit.>. The proof here is based on a similar observation but is a little more general: Let γ_1 and γ_2 be two simple and convex arcs on S with γ_1^∘∩γ_2^∘≠∅. Assume that γ_1 is on the left hand side of γ_2 and the direction of -γ_1 is determined by γ_2 in the sense of Definition <ref>. Then γ_1∩γ_2 is a simple line segment on S, and each of the two endpoints of γ_1∩γ_2 is an endpoint of γ_1 or γ_2, say, γ_1^∘∩γ_2 ^∘ has no compact connected component. We may write I=I( a^',b^') =α( a^',b^') =β( a^',b^') , which, by convention, means that I is from a^' to b^' and I is a subarc of α (and β) whose direction is determined by α (and β). Since Σ∈ℱ( L,m) and ( f,-β) is simple and circular, α∩β is consisted of a finite number of connected components. Thus α and β have subarcs[By convention, subarcs inherits the oriention.] α_b^'=α( a^'',b^'') and β_b^'=β(A^'',B^'') which are neighborhoods of b^' in α and in β, respectively, such that (d) When b^'≠ a^', we have A^''=a^''∈ I^∘, α_b^'∩β_b^'=α_b^'∩ I=β_b^'∩ I=α( a^'',b^') =β( a^'',b^') ; and when b^'=a^', we have α_b^'∩β_b^'={b^'}⊂α _b^'^∘∩β_b^'^∘. (e) α_b^'\{b^'} and β_b^' \{b^'} contain neither point of {a_j}_j=1^mnor branch point of f. (f) ( f,-β_b^') is an SCC arc. 
(g) If b^' is not a branch point of f and ( f,∂Δ) is not folded at b^', then f is an OPH in a neighborhood[When b^' satisfies the assumption of (g), f is a homeomorphism in a neighborhood of b^' in Δ, and so the conclusion holds when α_b^' and β_b^' is contained in that neighborhood.] of α_b^'∪β_b^' in Δ, γ_1=(f,α_b^' ) is a simple arc on the left hand side of γ_2=( f,-β_b^') , and the direction of γ_1 is determined by -γ_2. To prove (i), assume its opposite b^'∉{a_j}_j=1^m. Then by (e) γ_1=(f,α_b^') is contained in some c_j of the partition (<ref>) and thus is convex. Hence, (f), (g) and Observation <ref> imply that f(b^') is in the interior ( γ _1∩γ_2) ^∘, and thus b^' is an interior point of α_b^'∩β_b^', which contradicts (d). (i) is proved and the proof of (ii) is the same. To prove (iii), assume that b^' is not a branch point of fand the conclusion fails, say, ( f,∂Δ) is convex at b. Then ( f,∂Δ) is not folded at b^' and so all conclusions of (g) hold, and moreover, γ_1=(f,α_b^' ) is convex by (d) and (e). Thus (f), (g) and Observation <ref> again imply that b^' is an interior point of α_b^'∩β_b^', contradicting (d), as in the proof of (i). This proves (iii). Assume ( f,-β) is strictly convex, and I is not a point, say, a^'≠ b^'. Then I has a subarc I^' such that I^'⊂( ∂Δ) \{a_j}_j=1 ^m, and then by (c) and definition of ℱ( L,m) , both ( f,I^') and ( f,-I^') ⊂( f,-β) are convex arcs, which implies that ( f,I^') is a line segment and thus ( f,-β) cannot be strictly convex. This proves (iv). The following result and the method of proof will be used a lot of times. Let { L_j} _j=1^N and {R_j}_j=1^N be sets of positive numbers and assume R_1 /L_1≥R_j/L_j,j=2,…,N. Then R_1/L_1≥∑_j=1^NR_j/∑_j=1^NL_j, with equality holding if and only if R_1/L_1=R_2/L_2=…=R_N/L_N. It follows from ∑_j=1^NR_j=∑_j=1^NR_j/L_jL_j≤∑_j=1 ^NR_1/L_1L_j=R_1/L_1∑_j=1^NL_j. The following lemma is very useful (recall definitions (<ref>) and (<ref>) of R(Σ) and H(Σ)). Let Σ_j,j=0,1,…,n, be surfaces in 𝐅, and let ε_0be a positive number such that R(Σ_0)-ε_0≤∑_j=1^nR(Σ_j) and L(∂Σ_0)+ε_0≥∑_j=1^nL(∂Σ_j). Then H(Σ_0)-ε_0/L(∂Σ_0)/1+ε _0/L(∂Σ_0)≤max_1≤ j≤ nH(Σ_j). This follows from (<ref>), (<ref>) and Lemma <ref> directly. We define H_L,m=H_L,m( E_q) =sup_Σ∈ℱ(L,m) H(Σ). Then by Definition <ref> we have H_L=lim_n→+∞H_L,m. If L≤2δ_E_q, then H_L=H_L,m=H_L,1=( q-2) A(T)/L=( q-2) 2π-√(4π^2-L^2)/L, H_L increases strictly as a function of L∈(0,2δ_E_q] for any given positive integer m, where T is a disk with perimeter Land diameter less than π. By Theorem <ref>, for any L∈(0,2δ_E_q], we have (<ref>). Thus, by Lemma <ref>, H_L=H_L,m strictly increase on (0,2δ_E_q]. For any L>0, let 𝒯_L be the set of all closed convex[A convex disk on S is contained in some hemisphere on S, and vice versa.] disks T with T^∘⊂ S\ E_q and L(∂ T)≤ L, and let L_0=sup_T∈𝒯_LL(∂ T). Then for any positive integer mand any disk T_L_0 in 𝒯_Lwith L(T_0)=L_0 H_L≥ H_L,m≥ H_L,1≥sup_T∈𝒯_LH(T)=H(T_L_0)≥ H(T_min{L,2δ_E_q})>0. This follows from the relation 𝒯_L⊂ℱ( L,1) ⊂ℱ( L,m) ⊂ℱ( L), T_L_0∈𝒯_L and Lemma <ref>. Let L∈ℒ. Then there exists a positive number δ_L such that for each L^'∈(L-δ_L,L+δ_L), H_L-π/2L<H_L^'<H_L+π/2L, This follows from Definition <ref>. In <cit.> it is proved that for any surface Σ in 𝐅, there exists a surface Σ^' in 𝐅=( f,Δ) with piecewise analytic boundary, such that f has no branch point in Δ\ f^-1(E_q), A(Σ^')≥ A(Σ),L(∂Σ)≥ L(∂Σ^'), and moreover, ∂Σ^' is consisted of subarcs of ∂Σ. 
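Returning to the closed formula for H_L stated in the theorem above (the case L≤2δ_E_q), the following expansion is only a consistency check and introduces no new hypotheses: for small L,
\[
A(T)=2\pi-\sqrt{4\pi^{2}-L^{2}}
=2\pi\Bigl(1-\sqrt{1-\tfrac{L^{2}}{4\pi^{2}}}\Bigr)
=\frac{L^{2}}{4\pi}+O(L^{4}),
\qquad
H_{L}=\frac{(q-2)\,A(T)}{L}=\frac{(q-2)\,L}{4\pi}+O(L^{3}),
\]
so H_L→0 as L→0^+, which is consistent with the strict monotonicity of H_L on (0,2δ_E_q] asserted there.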
In <cit.>, using the similar method and more analysis, the authors proved the following theorem, which is a key step for proving the first main theorem. Let L∈ℒ, m be a positive integer, Σ=( f,Δ) ∈𝒞^∗(L,m), ∂Δ=α_1+⋯+α_m, is a 𝒞^∗( L,m)-partition of ∂Σ and ∂Σ=c_1+⋯+c_m is the corresponding 𝒞^∗( L,m)-partition, and assume that H(Σ)>H_L-π/2L(∂Σ). Then there exists a surface Σ^'=( f^',Δ) such that (i) Σ^'∈ℱ_r(L,m). (ii) H(Σ^')≥ H(Σ) and L(∂Σ^')≤ L(∂Σ). Moreover, at least one of the inequalities is strict if Σ∉ℱ_r(L,m). (iii) When L(∂Σ^')=L(∂Σ) we have ∂Σ^'=∂Σ and (<ref>) and (<ref>) are ℱ( L,m)-partitions of ∂Σ^'. The following Theorem is proved in <cit.>, which is also a key step for proving the first main theorem. There exists an integer d^∗=d^∗(m,E_q) depending only on m and E_q such that for any Σ=( f,Δ) ∈ℱ_r( L,m), there exists a surface Σ_1=( f_1,Δ) in ℱ _r( L,m) such that d_max( Σ_1) =d_max( f_1) =max_w∈ S\∂Σ_1#f_1^-1(w)≤ d^∗, ∂Σ_1=∂Σ, and H(Σ_1)=H(Σ). ℱ_r^'(L,m) is defined to be the subspace of ℱ_r(L,m) such that Σ=( f,U) ∈ℱ_r^'(L,m)iff d_max(Σ)≤ d^∗=d^∗(m,q), where d^∗ is the integer determined in Theorem <ref> . Let L∈ℒ. Then for sufficiently large m, we have H_L,m=sup_Σ∈𝒞^∗(L,m)H(Σ)=sup_Σ∈ℱ(L,m)H(Σ)=sup_Σ∈ℱ_r(L,m) H(Σ)=sup_Σ∈ℱ_r^'(L,m)H(Σ). It suffices to show that, for sufficiently large m, sup_Σ∈𝒞^∗(L,m)H(Σ)≤sup_Σ∈ℱ_r^'(L,m)H(Σ). By Remark <ref>, H_L=lim_m→∞sup_Σ∈𝒞^∗(L,m) H(Σ)>H_L-π/2L. Let m be any large enough integer in ℕ such that sup_Σ∈𝒞^∗(L,m)H(Σ)>H_L-π/2L. Then there exists a sequence Σ_n in 𝒞^∗(L,m) such that lim_n→∞H(Σ_n)=sup_Σ∈𝒞^∗ (L,m)H(Σ) and H(Σ_n)>H_L-π/2L>H_L-π/2L( ∂Σ_n) . Thus by Theorems <ref> and <ref>, there exists a sequence Σ _n^'∈ℱ_r^'(L,m) such that H(Σ_n ^')≥ H(Σ_n). Thus (<ref>) holds. Let Σ=( f,Δ) ∈𝐅 and assume either (1) f^-1(E_q∩Δ)≠∅, or (2) ∂Σ contains a simple arc with distinct end points in E_q. Then R(Σ)+4π/L(∂Σ)≤ H_0, where H_0 is defined by (<ref>). This is Theorem 1.7 in <cit.>. In fact when (1) holds, we may assume 0∈ f^-1(E_q) and let Σ_n=( f_n,Δ) be the surface in 𝐅 with f_n( z) =f( z^n) ,z∈Δ. Then we have R(Σ _n)=( q-2) nA(Σ)-4π( nn( Σ) -( n-1) ) =nR(Σ)+4π( n-1) , and L(∂Σ_n)=nL(∂Σ), and thus H_0≥R(Σ_n)/L(∂Σ_n)=R(Σ )+4π( n-1) /n/L(∂Σ)→R(Σ)+4π/L(∂Σ) as n→∞, which implies (<ref>). When (2) holds, (<ref>) follows from the following Lemma which is also proved in <cit.>. Let Σ=( f,Δ) ∈𝐅 and assume that ∂Σ contains a simple arc γ with distinct end points in E_q, say, (2) of the previous lemma holds. Then there exists a surface Σ_1=( f_1,Δ) ∈𝐅 such that L(∂Σ)=L(∂Σ_1), H(Σ)=H(Σ_1) and f_1^-1( E_q) ∩Δ≠∅. In fact Σ_1 can be obtained by sew Σ and the surface whose interior is S\γ and boundary is γ-γ. Then we have A(Σ_1)=A(Σ)+4π, n( Σ _1) =n( Σ) +q-2 and ∂Σ=∂Σ_1. Therefore H(Σ)=H(Σ_1) and f_1^-1( E_q) ∩Δ≠∅, and then (<ref>) holds by the discussion of (1) of the previous lemma. The above discussion about Lemmas <ref> and <ref> implies the following If Σ∈ℱ satisfies (1) or (2) of Lemma <ref>, then there exists a sequence Σ_n in ℱ such H(Σ _n)=R(Σ_n)/L(∂Σ_n)→R(Σ)+4π/L(∂Σ) as n→∞. Let Σ=( f,Δ) ∈ℱ_r(L), let α=α( b_1 ,b_2) be an arc of ∂Δ such that c=c( p_1,p_2) =( f,α)is a simple arc (it is possible that p_1=p_2 even if b_1≠ b_2). If α^∘ contains no branch point of f, then there exists a closed domain T on S such that (i) ∂ T=c-c^', where c^' is a simple arc from p_1 to p_2 which is consisted of a finite number of line segments on S such that c∩ c^'={p_1,p_2}. (ii) The interior angles of T at p_1 and p_2 are positive. 
(iii) f^-1 has a univalent branch g defined on T\{p_1 ,p_2} with g(c)=α. (iv) If p_1≠ p_2, then T is a closed Jordan domain and the univalent branch g of f^-1 can be extended to a homeomorphism defined on the closed domain T. Since f has no branch point on α^∘ and c is simple, the results follows from Lemmas <ref> and <ref>. Let Σ=( f,Δ) ∈ℱ. (i) Σ=( f,Δ) can be understood as a branched Riemann surface with boundary ∂Σ=( f,∂Δ), such that every point of Σ is in fact a pair (f,p)=( f(p),p) with p∈Δ: Σ can be regarded as a union of a finite number of disks ( x_j,U_j) of Σdefined in Definition <ref>. Then ( f,U_j) plays the role of chards for Riemann surfaces when Σ is regarded as the set of pairs ( f,p) =( f(p),p) ,p∈Δ. (ii) Assume a is a point in Δ and D is a closed domain in Δ which is a neighborhood of a in Δ. If the restriction f:D→ Twith T=f(D) is a homeomorphism, we will call the subsurface K=( f,D) a simple closed domain of Σ determined by a (when f(D) is given), and use the pair ( T,a) or ( T,( f,a) )to denote this closed domain. It is clear that K and D are uniquely determined by T and a. (ii1) Then the term "( W,a) is a closed simple domain of Σ" means that "W is a closed domain on S and f has a univalent branch g defined on W and g(W) is a neighborhood of a in Δ". (ii2) When D is a closed Jordan domain in Δ such that ( ∂ D) ∩∂Δ contains an arc α of ∂Δ and f:D→ T=f(D) is a homeomorphism. Then D, as an f-lift of T, is uniquely determined by the pair (T,( ∂ D) ∩∂Δ), or (T,α), or ( T,( f,α) ) , or ( T,a) , where ais any interior point of α, say a∈α^∘. So we will write ( f,D) =( T,( ∂Δ) ∩∂ D) )=( T,α) =( T,( f,α) ) =( T,a) , call ( f,( ∂Δ) ∩∂ D) the old boundary of ( f,D) , and ( f,Δ∩∂ D) the new boundary of ( f,D) . (iii) Assume Σ,α,c,c^',T,g satisfies all conditions of Lemma <ref>. If p_1≠ p_2, then g can be extended to T so that for D=g(T), f(D) is the closed Jordan domain Tand ( T,a) is a closed simple Jordan domain of Σ, where a∈α^∘. Then ( T,α)and ( T,c) with c=( f,α) both denote ( f,D) . If p_1=p_2, then D is still a Jordan domain, while T is not. In this case we still use ( T,α) , or ( T,a) , where a is an interior point of α, to denote the surface ( f,D) , and call it a simple closed domain as well. In fact, T can be expressed as a union of Jordan curves, each pair of which only intersect at f(a). (iv) Now assume Σ=( f,Δ) ∈ℱ ( L,m) and let ∂Σ =c_1( q_1,q_2) +⋯+c_m( q_m,q_1) =( f,α_1( a_1,a_2) ) +⋯+( f,α_m( a_m,a_1) ) , be an ℱ( L,m)-partition of ∂Σ. For each j, by definition of ℱ( L,m) and ℱ( L,m)-partitions, f has no branch point in α_j^∘ and f is homeomorphism in a neighborhood of α _j^∘ in Δ. Then Lemma <ref> applies to each c_j, and there exist a positive number θ>0 and a closed domain T_j,θ on S enclosed by c_j and c_j^' such that: (iv1) If q_j≠ q_j+1, c_j^' is a circular arc from q_1 to q_2, ∂ T_j,θ=c_j-c_j^' and ∠( T_j,θ,q_j) =∠( T_j,θ ,q_j+1) =θ∈(0,min_i=j+1∠( Σ,a_i) ), and ( T_j,θ,α_j) is a simple Jordan domain of Σ. (iv2) If q_j=q_j+1, c_j^' is consisted of two convex circulars arcs such that ∂ T_j,θ=c_j-c_j^', c_j^' is contained in the disk enclosed by c_j and c_j^'∩ c_j={q_j}, the two interior angles of T_θ at q_j are equal to θ, and there exists a Jordan domain D in Δ such that ∂ D=α_j-α_j^' with α_j^'∘⊂Δ, and f restricted to D\{a_j,a_j+1} is a homeomorphism onto T_j,θ\{q_i}. We then can use ( T_j,θ,α _j) or ( T_j,θ,c_j) , in which c_j=( f,α_j) , to denote the subsurface ( f,D) , and call it a closed simple domain of Σ as in (iii). Let Σ=( f,Δ) be a surface of 𝐅. 
(a) A point a∈Δ is called a simple point of f if f is homeomorphic in a neighborhood of a in Δ. (b) A point ( f,a) of Σ is called a simple point of Σ if a is a simple point of f. (c) For a subset A of Δand a point a∈ A, a is called a simple point of f in A if f is homeomorphic in a neighborhood of a in A, and ( f,a) is called a simple point of Σ in ( f,D) if a is a simple point of f in D. By definition a point ( f,a) of Σ^∘, say, a∈Δ, is a simple point of Σ if and only if a is a regular point of f. A point ( f,a) ∈∂Σ is a simple point of Σ, if and only if a is a regular point of f and ∂Σ is simple in a neighborhood of a in ∂Δ. Note that a simple point of a subsurface of Σ needs not be a simple point of Σ. Consider a surface Σ=( f,Δ) ∈ℱ( L,m) with ℱ( L,m)-partition (<ref>). Assume that for some pair j_1 and j_2 with 1≤ j_1<j_2≤ m, L(c_j_i)<π,c_j_i^∘∩ E_q=∅, and one of the following hold (a) The curvatures k(c_j_i) of c_j_i are distinct for i=1,2. (b) k(c_j_1)=k(c_j_2) and both c_j_1 and c_j_2 are major circular arcs. We will show that we can deform Σ by changing the two arcs c_j_1 and c_j_2 to obtain a surface Σ^'∈ℱ( L,m) such that H(Σ^')>H(Σ),L(∂Σ^')=L(∂Σ). By Corollary <ref>, using the notations ( T_j_i,θ_i,c_j_i) with old boundary c_j_i and new boundary c_j_i^'∘,i=1,2, in Remark <ref> (iv2), we can deform the simple domain ( T_j_i,θ_i ,c_j_i) of Σ, i=1,2, as follows. We replace ( T_j_i,θ_i,c_j_i) with ( T_j_i,θ_i^'^',𝔠 _j_i) , where T_j_i,θ_i^'^' is a domain on S enclosed by 𝔠_j_i-c_j_i^' and 𝔠_j_i is a convex circular are from q_j_i to q_j_i+1, which is a small perturbation of c_j_i with the same endpoints and 𝔠_j_i∩ c_j_i^'={q_j_i ,q_j_i+1}. Then by (a), or (b), and Corollary <ref>, we may choose 𝔠_j_i such that ∑_i=1^2A(T_j_i,θ)<∑_i=1^2A(T_j_i,θ^'^') and ∑_i=1 ^2L(c_j_i)=∑_j=1^2L(𝔠_j_i). After this deformation we obtain a new surface Σ^'∈ℱ( L,m)such that the ℱ( L,m)-partition (<ref>) changes into ℱ( L,m)-partition ∂Σ^'=c_1+⋯+c_j_1-1+𝔠_j_1 +c_j_1+1+⋯+c_j_2-1+𝔠_j_2+c_j_2+1+⋯+c_m of ∂Σ^'. Thus the surface Σ^' with H(Σ^') larger and L(∂Σ^') unchanged exists. Σ^' is in fact obtained by moving c_j_i to its left (or right) hand side a little to the position of 𝔠_j_i,i=1,2. § THE DISTANCES ON SURFACES IN ℱ We first introduce some results for counting terms of partitions. Assume that m≥3, Σ=( f,Δ) ∈ℱ_r( L,m) , (<ref>) and (<ref>) are ℱ( L,m)-partitions of ∂Σwith c_j( q_j,q_j+1) =( f,α_j( a_j ,a_j+1) ) ,j=1,…,m, and that the following (a) and (b) hold. (a) a and b are two points on ∂Δ,a≠ b, γ_1 is an arc on ∂Δ from a to b and γ_0=( ∂Δ) \γ_1^∘, both oriented by ∂Δ. (b) I is a simple arc in Δ from a to b such that I^∘∈Δ, I∩ f^-1(E_q)=∅, and either (b1) ( f,-I) is an SCC arc with L( f,I) ≤ L(f,γ_0), or (b2) ( f,I) is straight with L(f,I)<π. Then the following hold. (i) I divides Δ into two Jordan domains Δ_0 and Δ_1 on the left and right hand side of I, respectively, ∂Δ _0=γ_0+I, and ∂Δ_1=γ_1-I. (ii) The surface Σ_1=( f,Δ_1) is contained in ℱ_r( L,m_1^') with m_1^'=m+2-#[ γ_0∩{a_j}_j=1^m] . (iii) When (b2) holds Σ_0=( f,Δ_0) is also contained in ℱ_r( L,m_0^') with m_0^'=m+2-#[ γ_1∩{a_j}_j=1^m] . By the assumption, (i) is trivial to verify. By (b), f has no branch point in I^∘ and thus we have: (c) f is homeomorphic in a neighborhood of I^∘ in Δ, and thus Σ_1=( f,Δ_1) ∈ℱ_r, and when (b2) holds Σ_0∈ℱ_r as well. It is clear that by (b) L≥ L( f,∂Δ) =L(γ_1+γ_0)≥ L(γ_1)+L(f,I)=L(∂Σ_1). and for the same reason L≥ L(∂Σ_0) when (b2) holds. 
Therefore we have by (b) and (c) that: (d) Σ_1∈ℱ_r( L) ; and if (b2) holds, then Σ_0∈ℱ_r( L) also holds. The endpoints {a,b} gives a refinement of the ℱ( L,m)-partition (<ref>) which contains m+2 terms, among which at most two are just points, and we let 𝐀 be the set of all these terms. It is easy to see that if γ_0 contains s points of {a_j}_j=1^m, then γ_1 is a sum of m+1-s terms of 𝐀, no matter what #{a,b}∩{a_j}_j=1^m is equal to. Thus by (c) and (d) ∂Δ_1 has an ℱ_r( L,m^')-partitionconsisted of the term I and m+1-s terms of 𝐀, for Σ_1, with s=#γ_0 ∩{a_j}_j=1^m. Thus we have Σ_1∈ℱ_r( L,m_1^') and for the same reason, Σ_0∈ℱ_r( L,m_0^') in the case (b2). Assume m>3, Σ=( f,Δ) ∈ℱ_r( L,m), (<ref>) and (<ref>) are ℱ( L,m)-partitions of ∂Σwith c_j( q_j,q_j+1) =( f,α_j( a_j,a_j+1) ) ,j=1,…,m, and the following (a)–(e) hold (see Figure <ref> for k=3). (a) k is a positive integers with 2≤ k<m, b_2 and b_2k+1 are two points on ∂Δ with b_2≠ b_2k+1; γ_0 is the arc of ∂Δ from b_2k+1 to b_2and γ_0^c is the arc ( ∂Δ) \γ_0^∘, both oriented by ∂Δ; and γ_0^'=γ_0^'( a_i_0,a_i_2) is the smallest arc on ∂Δ containing γ_0 such that ∂γ_0^'={a_i_0 ,a_i_2}⊂{a_j}_j=1^m, that is, γ_0^' is the union of all the terms in (<ref>) which intersect γ_0^∘. (b) I=I( b_2,b_2k+1) is a simple arc in Δ from b_2∈∂Δ to b_2k+1∈∂Δ which has the partition I( b_2,b_2k+1) =I_2( b_2,b_3) +I_3( b_3,b_4) +⋯+I_2k( b_2k,b_2k+1 ) , such that for each j=2,…,k,I_2j-1⊂∂Δ, while for each j=1,…,k,∅≠ I_2j^∘⊂Δ. (c) I∩γ_0={b_2,b_2k+1}, say, I_3( b_3 ,b_4) ,…,I_2k-1( b_2k-1,b_2k) are all contained in the open arc ( γ_0^c) ^∘and b_2,b_3,…,b_2k,b_2k+1 are arranged anticlockwise on ∂Δ. (d) ( f,-I)is an SCC arc on S, L(f,-I)≤ L(f,γ _0) and ∪_j=1^kI_2j^∘⊂Δ\ f^-1 (E_q). (e) One of the conditions (e1)–(e3) holds: (e1) γ_0^'∩ I^∘=∅, say, I_3∩γ _0^'=I_2k-1∩γ_0^'=∅, as in Figures <ref> (1) and (2); (e2) γ_0^'∩ I^∘=∅ and γ_0^∘ ∩{a_j}_j=1^m≠∅; (e3) γ_0^'∩ I^∘=∅ and γ_0^'=γ_0, as in Figure <ref> (1). Then the following hold: (i) For each j=2,…,k, the two end points of I_2j-1 are contained in {a_j}_j=1^m, say {b_3,…,b_2k}⊂{a_j}_j=1^m. (ii) I divides Δ into k+1 Jordan domains {Δ _i} _i=0^k such that Δ_0 is on the left hand side of I and Δ_1,…,Δ_k are on the right hand side of I. (iii) For each j=1,2,…,k, one of the following holds. (iii1) Σ_j=( f,Δ_j) ∈ℱ _r( L,m) if (e1) holds. (iii2) Σ_j=( f,Δ_j) ∈ℱ _r( L,m-1)if (e2) or (e3) holds. (iv) min{ L(∂Σ_1),L(∂Σ_k)}≥min{L(f,γ_01),L(f,γ_02)} where γ_0i,i=1,2, are the two components of γ_0^'\γ_0^∘. (i) follows from Lemma <ref>, and (ii) is trivial. Let γ_j be the arc of ∂Δ from b_2j to b_2j+1 for j=1,…,k. Then we may arrange Δ_j so that ∂Δ_j=γ_j-I_2j,j=1,2,…,k. It is clear that γ_j ∩γ_0 contains at most one point for j=1,…,k, and then by (d) we have L(f,∂Δ_j)=L( f,γ_j-I_2j) ≤ L(f,γ_j)+L(f,γ_0)≤ L(f,∂Δ)≤ L, and moreover f restricted to a neighborhood of I_2j^∘ in Δ is a homeomorphism, by (d) and the assumption Σ=( f,Δ) ∈ℱ_r(L,m). We may assume { a_i_0,a_i_2}⊂{a_j}_j=1^m with i_0<i_2<m+i_0( a_m+i=a_i) . Assume (e1) holds. Then {b_3,b_2k}is contained in I^∘ ∩{a_j}_j=1^m by (i), and is outside γ_0^' by (e1), and then a_i_2 and b_3 are distinct and both contained in γ_1∩{a_j}_j=1^m, and for the same reason, a_i_0 and b_2k are distinct and both contained in γ_k∩{a_j}_j=1 ^m. Then it is easy to see s_1=#[ ( ∂Δ) \γ_1^∘] ∩{a_j}_j=1^m≥#γ _k∩{a_j}_j=1^m≥2and Σ_1 is contained in ℱ_r( L,m) , by applying Lemma <ref> to I_2; and so is Σ_k for the same reason. 
It is trivial to see that when k>2, for j=2,3,…,k-1, s_j=#[ ( ∂Δ) \γ_j^∘] ∩{a_j}_j=1^m≥#{a_i_2 ,b_3,b_2k,a_i_0}=4, and thus by Lemma <ref> Σ_j ∈ℱ_r( L,m-2) . Thus (iii1) holds. Assume (e2) holds. Then there exists i_1 with i_0<i_1<i_2 and a_i_1∈γ_0^∘. Consider Δ_1 and Δ_k. It is clear that a_i_0 and a_i_1 are both contained in ( ∂Δ) \γ_1. On the other hand, by (e2) and (i) b_3∈{∂γ_1}∩{a_j}_j=1^m. Thus s_1=#[ ( ∂Δ) \γ _1^∘] ∩{a_j}_j=1^m≥3 and by Lemma <ref> we have Σ_1∈ℱ_r( L,m+2-s) ⊂ℱ_r( L,m-1) . For the same reason Σ_k ∈ℱ_r( L,m-1) . It is trivial to see that when k>2, for j=2,3,…,k-1, s_j=#[ ( ∂Δ) \γ_j^∘] ∩{a_j}_j=1^m≥#{a_i_2 ,b_3,b_2k,a_i_0,a_i_1}=5, and thus we have by Lemma <ref> Σ_j∈ℱ_r( L,m-3) . Assume (e3) holds. Then we still have s_1=#[ ( ∂Δ) \γ_1^∘] ∩{a_j}_j=1 ^m≥3 and thus Σ_1∈ℱ_r( L,m-1) . For the same reason, we also have Σ_k∈ℱ_r( L,m-1). Assume k>2 and let j∈{2,…,k-1}. Then by (i) ∂γ_j are contained in {a_j}_j=1^m, and by the assumptions, a_i_0,a_i_2 are outside γ_j^∘ and thus s_j=#[ ( ∂Δ) \γ_j^∘] ∩{a_j}_j=1^m≥#{ a_i_2,b_3,b_2k ,a_i_0} =4, and then by Lemma <ref> we have Σ_j ∈ℱ_r( L,m+2-4) =ℱ_r(L,m-2). (iii2) has been proved. It is clear that L( ∂Σ_1) ≥ L( γ_01) and L( ∂Σ_2) ≥ L( γ_02) this implies that (iv) holds true. Let Σ=( f,Δ) ∈ℱ. For any two points a and b in Δ, define their d_f-distance d_f( a,b) by d_f(a,b)=inf{L(f,I):I is a curve in Δ with endpoints a and b}; for any two sets A and B in Δ define their d_f-distance by d_f( A,B) =inf{ d_f( a,b) :a∈ A,b∈ B} ; and for any set A in Δ and any ε>0 define the d_f-ε-neighborhood of A (in Δ) by N_f(A,ε)={ x∈Δ:d_f(A,x)<ε} . The distance d_f( a,b) is also called the distance of Σ between the two points ( f,a) and ( f,b) of Σ. Sometimes we will write d_Σ( ( f,a) ,( f,b) ) =d_f( a,b) . Then the notation d_Σ( ( f,A) ,( f,B) ) between two sets of Σ, and N_Σ(( f,A) ,ε) is well defined. When ε is small enough, (a,N_f( a,ε) ) is the disk of Σ with radius ε (see Lemma <ref>, Corollary <ref> and Definition <ref>). On the other hand, Corollary <ref> (iv) directly implies Let Σ=( f,Δ) ∈ℱ(L,m), let (<ref>) and (<ref>) be ℱ( L,m)-partitions of ∂Σwith c_j( q_j ,q_j+1) =( f,α_j( a_j,a_j+1) ) ,j=1,…,m, for any j let a∈α_j^∘, and finally let I be a d_f-shortest path in Δ. Then for any disk ( a,U_δ) of Σ with small enough radius δ<π/2 the following hold: (i) f:U_δ→ f(U_δ) is a homeomorphism and f(U_δ) is a convex lens. If c_j is straight, then f(U_δ) is half of a disk whose diameter is contained in c_j and f(U_δ) is on the left hand side of c_j. (ii) For any two points a_1 and a_2 in U_δ, the d_f-shortest path from a_1 to a_2 exists, which is the unique f-lift of the line segment f(a_1)f(a_2) in U. (iii) If I∩U_δ≠∅, then I∩U_δ is a subarc of I. (iv) If a∈ I^∘∩α_j^∘, then c_j is straight and I^∘∩α_j^∘ is an open neighborhood of a in α_j. Thus I^∘∩α_j^∘=∅ if c_j is strictly convex. For any Σ=( f,Δ) ∈ℱ, d_f(x,y) is a continuous function on Δ×Δ. In other words, d_Σ( · ,·) is a continuous function on Σ×Σ. The proof is simple and standard and left to the reader. Let Σ=( f,Δ) ∈ℱ and let ( x,U) be a disk of Σ with radius δ (see Definition <ref>). Then for any y∈Δ\ U, d_f(x,y)≥δ, and d_f( x,y) =δ for every y∈( ∂ U) \∂Δ. This is trivial by Definition <ref> and Remark <ref>. Let Σ=( f,Δ) ∈ℱ, let x∈Δ, and let ( x,U_x), ( x,U_x^') and ( x,U_x^'') be three disks in Σ with radius δ/4,δ/2, and δ (see Definition <ref>), respectively. 
Then for any two distinct points a and b in U_x, the d_f-shortest path I(a,b) exists; and more precisely, putting A=f(a),B=f(b),X=f(x), one of the following holds. (i) If a=x, then the f-lift I( x,b) of XB is the unique shortest path from x to b. (ii) If A≠ B and AB has an f-lift I=I(a,b) from a to b, then I is the unique d_f-shortest path. (iii) If A=B, or A≠ B but AB has no f-lift from a to b, then the f-lift I=I(a,b) of AXB, from a to x, and then to b, is the unique d_f-shortest path. (i) follows from Corollary <ref> (v). (ii) is trivial. We only prove (iii). By definition there exists a sequence of paths I_n⊂Δ from a to b such that for[If I_n is the path I_n :[0,1]→Δ, the length L( f,I_n) should be understood to be L(f∘ I_n,[0,1]).] s_n=L( f,I_n) lim_n→∞s_n=d_f( a,b) . It is clear that d_f( a,b) ≤ L(AXB)<δ/2 by Corollary <ref> (v), since d( X,A) <δ/4,d( X,B) <δ/4. Thus, for sufficiently large n, we see that I_n⊂ U_x^', for otherwise we have d_f( a,b) ≥δ/2. We may parametrize I_nby length with L( f,I_n|_[0,s]) =s,s∈0,s_n] and s_n=L(f,I_n)→ d_f( a,b) >0. By Aazela-Ascoli theorem, we may assume Γ_n=(f,I_n), as a mapping from [0,s_n] to S, has a subsequence uniformly converging to a path Γ_0:[0,s_0]→ S from A to B and we assume the subsequence is Γ_n itself. Then we have L(Γ_0)≤ d_f( a,b) ≤ L(AXB), and Γ_0⊂D( x,δ/2) . If X∈Γ_0, then we have L(Γ_0)=d_f( a,b) =L(AXB) and by (i) the f-lift I(a,b) of AXB is a d_f-shortest path from a to b. Let I^'( a,b) be another d_f-shortest path. If x∈ I^'( a,b) , then x gives a partition I^'( a,b) =I^'( a,x) +I^'( x,b) , I^'( a,x) and I^'( x,b) have to be the d_f-shortest paths from a to x, and x to b, respectively by (i), and thus I^'( a,x) +I^'( x,b) =I( a,b) by(i). If x∉ I^'( a,b) , we can show, as the following discussion for the case X∉Γ_0, that that I^'( a,b) is the unique f-lift of AB, which implies d_f( a,b) =L(I^'( a,b) )=L( AB) =L(AX)+L(XB)=d_f( a,b) . This is a contradiction, since L( AB) <L(AX)+L(XB) when X∉AB. Assume X∉Γ_0. Then there exists a disk ( x,V_x) of Σ in ( x,U_x) such that I_n⊂ U_x\V_x, f is locally homeomorphic on U_x\ V_x, and Γ_0⊂ f(U_x \ V_x). Then Γ_0 has an f-lift I_0 such that I_n uniformly converges to I_0, by Lemma <ref>. Then I_0 is a d_f-shortest path from a to b. We will show that Γ _0=AB. It is clear that I_0 is simple, for otherwise there is another path I_0^'=I_0^'( a,b) which is obtained from I_0 by omitting a loop of I_0 so that L( f,I_0^') <d_f( a,b) contradicting I_0 being shortest. It is also clear that any subarc of I_0 is a d_f-shortest path. Let y∈ I_0^∘. Then by the assumption we have x∉ I_0 and by Corollary <ref> (i) and (iv), y has a neighborhood I_y =I_y( y^',y^'') in I_0 so that I_y is contained in a disk ( y,U_y) of ( x,U_x^')with x∉ U_y. Then f(U_y) is convex and f:U_y→ f(U_y) is homeomorphic. Thus both f( I_y) and f(y^')f(y^'')can be lift into U_y⊂ U_x^' from y^' to y^''. Then I_y has to be the lift of f(y^')f(y^''). Thus ( f,I_0) is straight everywhere and we have ( f,I_0) =Γ_0=AB. Then in this case, I_0 is the unique d_f-shortest from a to b. Let Σ=( f,U) ∈ℱ and let a_1 and a_2 be two distinct points in Δ. Then the d_f-shortest path I from a_1 to a_2 exists, and for any such path I, ( f,I) is a polygonal path on S from f(a_1) to f(a_2). It is clear that, for some positive integer δ_0, There are 3s_0 disks ( p_s,U_s) , ( p_s,U_s^') , ( p_s,U_s^'') of Σ, with radius δ/4,δ/2,δ for each s=1,…,s_0, such that 𝒪={U_s}_s=1^s_0 is an open covering of U. 
Note that by Lemma <ref> and Definition <ref>, for each s=1,…,s_0, the four relations U_s∩∂ U≠∅ ,U_s^'∩∂ U≠∅,U_s^''∩∂ U≠∅ and p_s∈∂ U are equivalent. Write C_s=U∩∂ U_s, the closure of the part of ∂ U_s located in U, for s=1,2,…,s_0. We assume that no U_s^'' contains U, and then C_s≠∅ and, as α_3 in Lemma <ref> (B) (B1), C_s is connected for all s=1,…,s_0. By definition, there exists a sequence of paths J_n in U from a_1 to a_2 such that lim_n→∞L(f,J_n)=d_f(a_1,a_2). It is clear that a_1∈ U_s_1∈𝒪for some s_1≤ s_0. If C_s_1∩ J_n=∅ for some J_n, then {a_1,a_2}⊂ U_s_1 and the d_f-shortest path I, such that ( f,I) is polygonal, exists by Lemma <ref>. Thus we may assume that for each n,C_s_1∩ J_n≠∅, and let a_n2 be the latest point of J_n contained in J_n∩ C_s_1 .By taking subsequence we may assume that a_n2→ a_02∈ C_s_1, and then for the smaller arc C_s_1^n of C_s_1 between a_02 and a_n2, the length L(f,C_s_1^n) tends to 0, since we assumed f is holomorphic on U(note that C_s_1 may be a circle). By the previous lemma, the d_f-shortest path I_01=I_01( a_01,a_02) =I_01( a_1 ,a_02) , which means by convention that I_01 is an arc from a_01=a_1 to a_02, exists, and ( f,I_01) is polygonal. Let J_n,1=C_s_1^n+J_n( a_n2,a_2) and J_n^1=I_01( a_01,a_02) +J_n,1. Then we still have L(f,J_n^1)→ d_f(a_1,a_2). Let U_s_2 be an element of 𝒪 such that a_02∈ U_s_2 . Then s_2≠ s_1 and when n is large enough C_s_1^n⊂ U_s_2. Thus we have by definition of a_n2, J_n,1⊂∪_s∈{1,2,…,s_0}\{ s_1}U_s. Applying the same argument to J_n,1, we can show that there exist a point a_03∈ C_s_2, a d_f-shortest path I_02( a_02 ,a_03) , a path J_n,2from a_03 to a_2 such that J_n,2⊂∪_s∈{1,2,…,s_0}\{ s_1 ,s_2}U_s, and for the path J_n^2=I_01( a_01,a_02) +I_02( a_02,a_03) +J_n,2 from a_1=a_01 to a_2 L(f,J_n^2)→ d_f(a_1,a_2). Repeating the above method a finite number of times, we can finally prove that there exists a path I=I_01+I_02+⋯+I_0s^∗ with s^∗≤ s_0 such that L(f,I)=d_f(a_1,a_2). The existence of I is proved. For any d_f-shortest path I from a_1 to a_2, we may apply the above argument to I to show that (f,I) is a polygonal. This completes the proof. Let m≥3, Σ=( f,Δ) ∈ℱ_r(L,m) and let b_1 and b be two distinct points on ∂Δ with d_f(b_1,b)<π. Let ∂Δ=α_1( a_1,a_2) +α_2( a_2,a_3) +⋯+α_m( a_m,a_1) be an ℱ(L,m)-partition of ∂Σ and ∂Σ=c_1( q_1,q_2) +c_2( q_2 ,q_3) +⋯+c_m( q_m,q_1) be the corresponding ℱ(L,m)-partition. Then for any d_f-shortest path I=I( b_1,b) from b_1 to b, say, L(f,I)=d_f(b_1,b), the following hold: (i) I is simple. (ii) For each component J of I∩Δ\ f^-1(E_q), ( f,J) is a simple straight arc with L(f,J)<π. (iii) For each j∈{1,…,m}, if c_j is strictly convex, then I∩α_j^∘⊂{b_1,b}, and thus I^∘∩α _j^∘=∅. (iv) For each j∈{1,…,m}, if c_j is straight and I^∘ ∩α_j^∘≠∅, then one of the following holds: (iv1) α_j⊂ Ior I⊂α_j; (iv2) α_j∩ I is an subarc of α_j with α_j∩ I=α_j( a_j,b^') , or α_j∩ I=α_j( b^',a_j+1) , where b^'∈{b_1,b}; (iv3) α_j∩ Iis consisted of two subarcs I^' and I^'' of α_j, with I^'∩ I^''=∅,I^'=α_j( a_j,b_1^'), I^''=α_j( b_2^',a_j+1) , and {b_1^',b_2^'}={b_1,b}. This occurs only if {b_1,b}⊂α_j and the subarc of c_j with endpoints f(b_1) and f(b) has length >π;Thus each compact component of I^∘∩∂Δ is a point of { a_j} _j=1^m, or an arc of ∂Δ with two endpoints in { a_j} _j=1^m. 
(v) I has a partition I=I_1+I_2+⋯+I_2k+1,k≤ m+1, such that for each j=1,2,…,k, I_2j^∘≠∅ and I_2j^∘⊂Δ; and for each j=1,2,…,k+1, I_2j-1⊂∂Δ and either I_2j-1 is a point or ( f,I_2j-1 ) is an arc of ∂Σ which is a polygonal path on S (if I∩Δ=∅, k=0 and I=I_1). (vi) If k≥1, and if for some j=1,2,…,k, I_2j^∘∩ f^-1(E_q)=∅, then ( f,I_2j) is a simple line segment with L( f,I_2j) <πand f restricted to a neighborhood of I_2j^∘ is a homeomorphism. (vii) If k≥1 and I_1 is not a point, then the joint point of I_1 and I_2 is contained in {a_j}_j=1^m, and if I_2k+1 is not a point, then the joint point of I_2k and I_2k+1 is also contained in {a_j}_j=1^m. (viii) If k≥2, then the endpoints of all I_j for j=3,5,…,2k-1, are contained in {a_j}_j=1^m. (ix) Assume k≥1. Then I_2 divides Δ into two Jordan domains Δ_1and Δ_2. Assume, in addition to k≥1, Δ∩ I_2∩ f^-1(E_q)=∅, and one of the following (a)–(d) holds: (a) ( ∂Δ_i) ∩∂Δ contains at least two points of {a_j}_j=1^m for each i=1,2; or (b) k>1, and the two endpoints of I_2are not simultaneously contained in any term α_j in (<ref>), for j=1,…,m; or (c) k=1, the two endpoints of I_2are not simultaneously contained in any term α_j in (<ref>), for j=1,…,m, and either one of I_1 and I_3 is not a point, or one of I_1 and I_3 is a point contained in {a_j}_j=1^m. Then both ( f,Δ_1) and ( f,Δ_2)are contained in ℱ_r(L,m). (x) If b_1∈α_1^∘and L(f,I_1)<d( { q_1,q_2} ,f(b_1)) , then I_1 is a point, and if in addition k≥1, (<ref>) holds, b_2∉( α_m+α_1+α_2) ^∘, and L(f,I_2)<d( { q_1,q_2} ,f(b_1)) , then the conclusion of (ix) holds: I_2 cuts Σ into two surfaces contained in ℱ_r(L,m). (i) trivially holds. For each regular point x∈Δ of f, there exists a disk ( x,U_x) of Σ such that f restricted to U_x is a homeomorphism onto a disk on S. Then for any two points a and b in U_x, the path I(a,b) such that ( f,I(a,b)) is a simple straight path is a d_f-shortest path from a to b. On the other hand all singular points are all contained in f^-1(E_q). Therefore (ii) follows from Lemma <ref> (ii) and (<ref>). (iii) and (iv) follows from Lemma <ref> (iv). It follows from (iii) and (iv) that each component of I∩∂Δ contains at least one point of {a_j}_j=1^m∪{b_1,b_2}. Thus, by (i), we conclude that I∩∂Δ contains at most m+2 components, and then k≤ m+1. (v) is proved. Now, it is clear that (ii) implies (vi), and that (iii) and (iv) imply (vii) and (viii). To prove (ix) assume that k≥1 and (<ref>) holds. Then by (vi) ( f,I_2) is a simple line segment and by the assumption Σ∈ℱ_r(L,m) we have f restricted to a neighborhood of I_2^∘ in Δ is a homeomorphism. We first assume that (a) holds and write I_2=I_2(B_1,B_2). Then Δ\ I_2 is consisted of two Jordan domains Δ _j,j=1,2. By (a) B_1 and B_2 give a refinement of the partition (<ref>) and each of the two arcs on ∂Δ with endpoints B_1 and B_2 contains at most m-1 terms. Then Condition <ref> implies that both Σ_1=( f,Δ_1) and Σ_2=( f,Δ_2) are contained in ℱ_r(L,m) if both L(∂Σ_1)≤ L(∂Σ) and L(∂Σ_2)≤ L(∂Σ)hold. But these two inequalities easily follows from that ( f,I_2) is straight and less than π. We have proved (ix) when (a) holds. To compete the proof of (ix), it suffices to show that (b) or (c) implies (a). When k>1, the terminal point B_2 of I_2 is contained in {a_j}_j=1^m by (viii), and thus the hypothesis (b) implies (a). When k=1, by (c) and (vii) at least one endpoints of I_2 is contained in {a_j}_j=1^mand thus (c) also implies (a). (ix) is proved. Assume b_1∈α_1^∘ and (<ref>) holds. 
Then for δ=d( { q_1,q_2} ,f(b_1)) we have L(f,I_1)<δ≤min{L(c_1( q_1,f(b_1)) ),L(c_1( f(b_1),q_2) )}, and then ( f,I_1) ⊂ c_1^∘, which implies I_1⊂α_1^∘ and thus ∂ I_1 does not intersect {a_j}_j=1^m, contradicting (vii) if I_1 is not a point. Thus I_1 is a point and we have I_1+I_2=I_2( b_1,B_2) =I_2( B_1,B_2) . In addition to (<ref>), assume that (<ref>), (<ref>) and (<ref>) hold. Then Condition <ref> also holds. We show that condition (a) is satisfied. First consider the case k>1. Then by (viii) we have B_2∈{a_j }_j=1^m. If B_2=a_1 or a_2, then we have L(f,I_2 )=L(f,I_2( b_1,B_2) )≥δ since b_1=B_1 ∈α_1^∘, contradicting (<ref>), and so B_2∈{a_j}_j=3^m. Thus (a) holds. Assume k=1. If I_3 is not a point, then by (vii) we have b_3=B_2 ∈{a_j}_j=1^m, and if in addition B_2=a_1 or a_2 we obtain a contradiction by (<ref>) again; thus B_2∈{a_j}_j=3^m and (a) holds again. If I_3 is a point, then I_2=I( b_1,b_2) =I_2( B_1,B_2) (note that k=1) and (a) also holds, by (<ref>). We have proved that (<ref>), (<ref>), (<ref>) imply (a), and then all conclusions of (x) hold, by (ix). The lemma is proved completely. Let Σ=( f,Δ) be a surface in ℱ, let ε be a positive number and let A,B be two compact sets in Δ. Then d_f(N_f(A,ε),N_f(B,ε))≥ d_f(A,N_f(B,ε))-ε≥ d_f(A,B)-2ε. It suffices to prove the second inequality. Let a and b^' be any two points of A and N_f(B,ε), respectively. Then there exists b∈ B such that d_f(b,b^')<ε. Then d_f(a,b^')≥ d_f(a,b)-d_f(b,b^')≥ d_f(a,b)-ε≥ d_f(A,B)-ε. This implies d_f(A,N_f(B,ε))≥ d_f(A,B)-ε. Let L be a positive number in ℒ (see Definition <ref> for ℒ) with L≥2δ_E_q. Then there exists a positive number δ_0 such that d_f(Δ∩ f^-1(E_q),∂Δ)>δ_0 holds for all surfaces Σ=( f,Δ) in ℱ(L) with L(∂Σ)≥δ_E_q, Δ∩ f^-1(E_q)≠∅, and with H(Σ)>H_L-π/2L( ∂Σ) . Since L∈ℒ, by Lemma <ref>, there exists a sequence δ_L,n with 0<δ_L,n<1/n such that H_L+1/n>H_L+δ_L,n>H_L, n=1,2,…. Assume that such a δ_0 does not exist. Then there exists a sequence Σ_n=( f_n,Δ) ∈ℱ(L) such that for every n, L(∂Σ_n)≥δ_E_q, Δ∩ f_n^-1(E_q)≠∅, H(Σ_n)>H_L-π/2L( ∂Σ_n) , and d_f_n(a_n,∂Δ)<δ_L,n/2 for some a_n∈Δ∩ f_n^-1(E_q). Then for each n there is an arc α_n=α( a_n,b_n) in Δ such that b_n∈∂Δ, α_n\{b_n}⊂Δ and f_n restricted to α_n is a homeomorphism onto a polygonal path β_n with L(β_n)<δ_L,n/2, and we then obtain a new surface Σ_n^' by cutting Σ_n along β_n, so that the interior of Σ_n^' is equivalent to ( f_n,Δ\α_n) and ∂Σ_n^'=β_n+∂Σ_n-β_n. Then for all n=1,2,…, R(Σ_n^')=R(Σ_n)+4π, and 2δ_E_q<L(∂Σ_n^')=L(∂Σ_n)+2L(β_n)<L(∂Σ_n)+δ_L,n≤ L+δ_L,n. Thus by (<ref>)–(<ref>), we obtain the contradictory estimate H_L+1/n ≥ H_L+δ_L,n≥ H(Σ_n^' )=R(Σ_n^')/L(∂Σ_n^')≥R(Σ_n)+4π/L(∂Σ_n)+δ_L,n =H(Σ_n)+4π/L(∂Σ_n)/1+δ_L,n/L(∂Σ_n)≥H_L-π/2L( ∂Σ_n) +4π/L(∂Σ_n)/1+δ_L,n/L(∂Σ_n) =H_L+7π/2L( ∂Σ_n) /1+δ_L,n/L(∂Σ_n)≥H_L+7π/2L /1+δ_L,n/δ_E_q → H_L+7π/2L ( as n→∞) , which contradicts Lemma <ref>, since H_L+1/n<H_L+π/2L for all sufficiently large n. § UNDECOMPOSABILITY OF PRECISE EXTREMAL SEQUENCES In this section we study precise extremal sequences of ℱ_r^'(L,m), ℱ_r(L,m), or ℱ(L,m). (1) A surface Σ_L,m=( f,Δ) is called extremal in ℱ_r^'(L,m), if H(Σ_L,m)=H_L,m=sup_Σ∈ℱ_r^'(L,m)H(Σ). (2) If in addition H(Σ_L,m)>H_L-π/2L( ∂Σ_L,m) , and for any other extremal surface Σ in ℱ_r^'(L,m), L(∂Σ_L,m)≤ L(∂Σ), then Σ_L,m is called a precise extremal surface of ℱ_r^'(L,m). Extremal surfaces and precise extremal surfaces of ℱ_r( L,m) (or ℱ( L,m)) are defined when the above ℱ_r^'(L,m) is replaced by ℱ_r( L,m) (or ℱ( L,m)). Recall Definition <ref>, where we defined precise extremal surfaces of ℱ( L) . In that definition, we do not require (<ref>).
But for a precise extremal surface Σ_L,m in ℱ _r^'(L,m),ℱ_r( L,m) or ℱ( L,m), we always assume H(Σ_L,m)is sufficiently close to H_L=sup_Σ∈ℱ(L,m)H(Σ), that is, (<ref>) is satisfied. By Definition <ref>, the equality (<ref>) and Corollary <ref>, if m is large enough and Σ_L,m is extremal in ℱ_r^'(L,m), ℱ_r( L,m) or ℱ( L,m), then H(Σ_L,m)>H_L-π/2L, which implies (<ref>). (1) A sequence Σ_n=( f_n,Δ) ∈ℱ_r^'(L,m) is called an extremal sequence in ℱ_r^'(L,m), if lim_n→∞H(Σ_n)=H_L,m. (2) If in addition H(Σ_n)≥ H_L-2π/L(∂Σ_n),n=1,2,…, lim_n→∞L(∂Σ_n) exists and for any other extremal sequence Σ_n^' in ℱ_r^'(L,m), lim_n→∞inf L(∂Σ_n^')≥lim_n→∞L(∂Σ_n), Σ_n is called a precise extremal sequence in ℱ_r^'(L,m). Extremal sequences and precise extremal sequences of ℱ_r( L,m) (or ℱ( L,m)) are defined, if ℱ_r^'(L,m) is replaced by ℱ _r( L,m) (or ℱ( L,m)). Any precise extremal surface (sequence) of ℱ _r( L,m) is a precise extremal surface (sequence) of ℱ(L,m), and any precise extremal surface (sequence) of ℱ_r^'( L,m) is a precise extremal surface (sequence) of ℱ_r(L,m) and ℱ( L,m) . This follows from Corollary <ref>. Note that surfaces in an extremal sequence need not be extremal, that is, it is possible that H(Σ_n)<H_L,m for some n. Lemma <ref> can be extended to extremal sequences of ℱ( L,m) . Let L≥2δ_E_q be a positive number with L∈ℒ. Then there exists a positive integer m_0 and a positive number δ_0 such that: for any m>m_0 and any sequence Σ _n=( f_n,Δ) ∈ℱ(L,m), if lim_n→∞H(Σ_n)=H_L,m f_n^-1(E_q)∩Δ≠∅, and L(∂Σ_n)≥δ_E_q, then d_f_n(f_n^-1(E_q)∩Δ,∂Δ)≥δ_0, for sufficiently large n. By (<ref>) Σ_n is extremal in ℱ(L,m). Then by Remark <ref>, when m is given large enough, for sufficiently large n, we have (<ref>), and then by Lemma <ref> we have the conclusion. (1) For any fixed L>0 and any sufficiently large positive integer m, there exists a precise extremal sequence in ℱ _r^'(L,m). (2) If L<2δ_E_q, then for any extremal sequence Σ_n of ℱ_r^'(L,m), ℱ_r(L,m) or ℱ (L,m), lim_n→∞inf L(∂Σ_n)=L. (3) If L≥2δ_E_q, then for any extremal sequence Σ_n in ℱ_r^'(L,m), ℱ_r(L,m) or ℱ (L,m), lim_n→∞inf L(∂Σ_n)≥2δ_E_q. (1) By (<ref>), Definition <ref> and Corollary <ref>, we may assume that m is sufficiently large such that H_L,m>H_L-π/2L. Then by Definition <ref>, Corollary <ref> and (<ref>), there exists an extremal sequence in ℱ_r^'(L,m), and for any extremal sequence F_n in ℱ_r^'(L,m), we have H(F_n)→ H_L,m>H_L-π/2L as n→∞. Let L_0≤ L be the infimum of all numbers L^' such that there exists an extremal sequence F_nof ℱ_r^'(L,m) such that lim_n→∞inf L(∂ F_n)=L^'. Then there exists an extremal sequence Σ_n in ℱ _r^'(L,m)such that lim_n→∞L(∂Σ _n)=L_0. By (<ref>), for sufficiently large n_0 and any n≥ n_0, Σ_n satisfies H(Σ_n)>H_L-π/2L≥ H_L-π/2L( ∂Σ_n) , and so Σ_n with n≥ n_0 is a precise extremal sequence in ℱ_r^'(L,m). (2) Assume L<2δ_E_qand Σ_n is an extremal sequence of ℱ_r^'(L,m), ℱ_r(L,m) or ℱ(L,m), with lim_n→∞inf L( ∂Σ_n) =l_0<L. We will deduce a contradiction. Since subsequences of an extremal sequence are also extremal, we may assume lim_n→∞L( ∂Σ_n) =l_0. Let L^',L^''∈ (l_0,L),L^'<L^''. Then it is clear that there exist disks T_L^',T_L^'' and, for each large enough n, a disk T_n on S with T_n⊂ T_L^'⊂ T_L^''⊂ S\ E_q such that L(∂Σ_n)=L(∂ T_n)<L^'=L(∂ T_L^' )<L^''=L(∂ T_L^''). Then by Theorem <ref> and Lemma <ref>, we have a contradiction: H_L,m=lim_n→∞H(Σ_n)≤lim_n→∞H(T_n)≤ H(T_L^')<H(T_L^'')≤ H_L,m, where the last inequality follows from that T_L^'' is a surface of ℱ_r^'(L,m). 
(3) Assume L≥2δ_E_q and let Σ_n be any extremal sequence in ℱ_r^'(L,m), or ℱ_r(L,m), or ℱ(L,m) with l_0=lim_n→∞inf L(∂Σ_n)<2δ_E_q. Then for some L^' and L^'' in (l_0,2δ_E_q) with L^'<L^'', as discussed in (2), we have H_L,m<H(T_L^'')≤ H_L^'',m. But this contradicts L^''<L, which implies T_L^''∈ℱ_r^'(L,m) and hence H_L^'',m≤ H_L,m. Let Σ_L,m be a precise extremal surface in ℱ_r(L,m). Then there exists a precise extremal surface Σ_1 of ℱ_r^'(L,m) such that ∂Σ_1=∂Σ_L,m. By Corollary <ref>, there exists a surface Σ_1 in ℱ_r^'(L,m) such that H(Σ_1)≥ H(Σ_L,m) and L(∂Σ_1)≤ L(∂Σ_L,m). Since ℱ_r^'(L,m)⊂ℱ_r(L,m), Σ_1 is a precise extremal surface of ℱ_r^'(L,m) which is also precise extremal in ℱ_r(L,m). Thus L(∂Σ_1)=L(∂Σ_L,m), and then we have ∂Σ_1=∂Σ_L,m by Theorem <ref>. Let L∈ℒ be a positive number and let Σ_n be an extremal sequence of ℱ(L,m). Σ_n is called decomposable in ℱ(L,m^') if for some positive integer j_0≥2, there exists a subsequence Σ_n_k of Σ_n, a sequence {{Σ_n_kj} _j=1^j_0} _k=1^∞ with {Σ_n_kj} _j=1^j_0⊂ℱ(L,m^') and a sequence ε_k of positive numbers such that lim_k→∞ε_k=0, ∑_j=1^j_0R(Σ_n_kj)≥ R(Σ_n_k)-ε_k, k=1,2,…, ∑_j=1^j_0L(∂Σ_n_kj)≤ L(∂Σ_n_k)+ε_k, and one of the following conditions holds: (a) For each j≤ j_0, lim_k→∞inf L(∂Σ_n_kj)<lim_k→∞inf L(∂Σ_n_k). (b) For at least two distinct subscripts j_1 and j_2 in {1,2,…,j_0}, lim_k→∞inf L(∂Σ_n_kj_i)>0,i=1,2. (c) For some subscript j_1≤ j_0, lim_k→∞inf L(∂Σ_n_kj_1)>0 and lim_k→∞sup H(Σ_n_kj_1)=0. Let L∈ℒ be a positive number and let Σ_n be a precise extremal sequence of ℱ(L,m). Then Σ_n is not decomposable in ℱ(L,m^') for any positive integer m^'≤ m. By Lemma <ref> we have L_1=lim_k→∞inf L(∂Σ_n_k)>0. Assume that the sequence {{Σ_n_kj} _j=1^j_0} _k=1^∞ satisfying (<ref>), (<ref>) and one of (a)–(c) in the definition exists. We may further assume H(Σ_n_k1)=max_1≤ j≤ j_0H(Σ_n_kj). Then for each k, by (<ref>)–(<ref>) and Lemma <ref> H(Σ_n_k1)≥H(Σ_n_k)-ε_k/L(∂Σ_n_k)/1+ε_k/L(∂Σ_n_k), which with (<ref>) implies lim_k→∞inf H(Σ_n_k1)≥lim_k→∞H(Σ_n_k)=H_L,m. Thus Σ_n_k1,k=1,2,…, is an extremal sequence of ℱ(L,m) and in fact the equality holds (note that ℱ( L,m^') ⊂ℱ(L,m) and we assumed Σ_n_kj∈ℱ( L,m^')). Assume (a) or (b) holds. Then, by (<ref>) and (<ref>), we have, for sufficiently large k, L(∂Σ_n_k1)≤ L^' for some L^'<L_1. Since Σ_n_k1 is extremal in ℱ(L,m), (<ref>) and (<ref>) show that Σ_n is not a precise extremal sequence in ℱ(L,m). This contradicts the assumption that Σ_n is precise extremal in ℱ(L,m). Assume (c) holds. Then 1≠ j_1, and then (<ref>) holds for sufficiently large k and we obtain a contradiction again. We have proved that Σ_n is not decomposable in ℱ(L,m^') when m^'≤ m. As a direct consequence of the previous lemma we have the following. Let L be a positive number. Then for any positive integer m^'≤ m, any precise extremal surface Σ of ℱ(L,m) is not decomposable in ℱ(L,m^'). That is to say, for any integer j_0≥2, there do not exist j_0 surfaces Σ_j,j=1,…,j_0, in ℱ(L,m^') such that ∑_j=1^j_0R(Σ_j)≥ R(Σ), ∑_j=1^j_0 L(∂Σ_j)≤ L(∂Σ), and that L(∂Σ_j)>0,j=1,2,…,j_0. The same conclusion holds when ℱ( L,m) and ℱ(L,m^') are both replaced by ℱ(L). § PRECISE EXTREMAL SEQUENCES IN ℱ_R( L,M) WITH CONVERGING BOUNDARY The goal of this section is to prove the following theorem, which plays a key role in the proof of the main theorems. For fixed L∈ℒ with L≥2δ_E_q and large enough m>3, let Σ_n=( f_n,Δ) be a precise extremal sequence in ℱ_r(L,m), and assume that the following conditions (A)–(D) hold.
(A) For each n=1,2,…,Γ_n=∂Σ_n has ℱ (L,m)-partitions ∂Δ=α_n1( a_n1,a_n2) +α_n2 (a_n2,a_n3)+⋯+α_nm(a_nm,a_n1) and Γ_n=∂Σ_n=c_n1( q_n1,q_n2) +c_n2( q_n2,q_n3) +⋯+c_nm( q_nm ,q_n1) with c_nj=( f,α_nj) ,j=1,…,m. (B) Γ_0=( f_0,∂Δ) is a curve on S which has 𝒞(L,m)-partitions ∂Δ=α_01( a_01,a_02) +α_02 (a_02,a_03)+⋯+α_0m(a_0m,a_01), and Γ_0=c_01( q_01,q_02) +c_02( q_02 ,q_03) +⋯+c_0m( q_0m,q_01) , with c_0j=( f_0,α_0j) ,j=1,…,m. (C) Γ_n=∂Σ_n=( f_n,∂Δ) uniformly converges to Γ_0=( f_0,∂Δ) , and moreover, for each j=1,…,m,α_nj uniformly converges to α_0j, and c_nj uniformly converges to c_0j. (D) For every n=0,1,…, Γ_n=( f_n,∂Δ) are parametrized by length and a_n1=1. Then (i) For any pair of distinct two points a and b in ∂Δ, lim_n→∞inf d_f_n(a,b)>0. (ii) For any disjoint compact arcs I and J of ∂Δ, lim_n→∞inf d_f_n(I,J)>0. Some conventions for the proof of Theorem <ref>. We first make some conventions on the assumptions of this theorem. By Lemma <ref> and the assumption of Theorem <ref> we have L(Γ_0)=L( f_0,∂Δ) =lim_n→∞L(f_n,∂Δ)=lim_n→∞L(∂Σ_n )≥2δ_E_q>0, and then by Lemmas <ref>, for the sequence Σ_n in the theorem in proof, there exists a positive number δ_0 such that d_f_n(f_n^-1(E_q)∩Δ,∂Δ)≥δ_0 ,n=1,2,… By the assumption, the restriction f_n|_∂Δ:∂Δ→ S is linear in length, say, L(f_n,e^√(-1)[ 0,θ] )=θ/2π L(f_n,∂Δ),θ∈0,2π],n=0,1,2,…, where e^√(-1)[ 0,θ] denotes the arc {e^√(-1)t:t∈0,θ]} on ∂Δ. It is permitted that some α_0j, as the limit of α_nj, is a point, and by (C), (D) and (<ref>) it is clear that α_0j is a point iff c_0j is a point, for each j=1,…,m. For each j_0=2,…,m and each n=0,1,2,…, let ϕ _n(z)=a_nj_0z be the rotation of ℂ (note that a_nj_0 =e^√(-1)θ_nj_0∈∂Δ for some θ_nj_0 ∈0,2π)). Then the sequence Σ_n^j_0=( f_n ∘ϕ_n,Δ) is also a precise extremal sequence of ℱ_r(L,m) and ∂Δ=α_n1^j_0( a_n1^j_0,a_n2^j_0 ) +α_n2^j_0( a_n2^j_0,a_n3^j_0) +⋯+α_nm^j_0( a_nm^j_0,a_n1^j_0) , with a_nj^j_0=a_n,j_0+j-1/a_nj_0 for j=1,…,m, are ℱ(L,m)-partitions of ∂Σ _n^j_0 with n≥1, or is a 𝒞(L,m)-partition for Γ_0^j_0=( f_0∘ϕ_0,∂Δ) . Since a_n1^j_0=1 for all n=0,1,…, and a_nj_0 converges to a_0j_0 as n→∞, it is clear that the partition (<ref>), the sequence Σ_n^j_0 and the curve Γ _0^j_0 satisfy all hypothesis of the theorem in proof as the partition (<ref>), the sequence Σ_n and the curve Γ_0. On the other hand we have d_f_n∘ϕ_n( ϕ_n^-1(a),ϕ_n^-1(b)) =d_f_n( a,b) ,n=1,2,…, for any pair of two points a and b in Δ, and by (C) at least one arc α_0j is not a point. Therefore, to prove (i) of the theorem, we may always assume that the following conditions hold (by omitting at most a finite number of terms of {Σ_n} _n=1^∞). α_01 is not a point and that point a in (<ref>) is fixed and contained in α_01. If a∈α_01^∘, then we have: (a) there exists a compact arc α_a=α_a( a^',a^'') =α _01( a^',a^'') ⊂α_01^∘ such that α_a is a neighborhood of a in α_01^∘, that for each n∈ℕ^0 and the two points q_n^' =f_n(a^'),q_n^''=f_n(a^''), the arc c_n,a=c_n1( q_n^',q_n^'') =( f_n,α_a) ⊂ c_n1^∘ is a compact neighborhood of f_n(a) in c_n1^∘, and L(c_n,a)>π if L(c_01)>π. (b) there exists a number δ_a∈( 0,δ_0)such that for eachn=0,1,2,…, α_a⊂α_n1\ D( { a_n1,a_n2 } ,4δ_a) and c_n,a⊂ c_n1\ D( { q_n1,q_n2} ,4δ_a) , L(c_n1( q_n1,q_n^') )=L(c_n1( q_n ^'',q_n2) )>4δ_a. Here D( Q,r) =∪_x∈ QD(x,r) is the r-neighborhood of Q on the sphere S and ∂Δ is regarded as a set on S. In fact, Condition <ref> follows from (A)–(D) and Condition <ref>. 
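For completeness, the identity d_f_n∘ϕ_n( ϕ_n^-1(a),ϕ_n^-1(b)) =d_f_n( a,b) displayed above can be checked directly; the verification uses only Definition <ref> of d_f and the fact that the rotation ϕ_n maps Δ homeomorphically onto itself, and no other property of ϕ_n:
\[
d_{f_n\circ\phi_n}\bigl(\phi_n^{-1}(a),\phi_n^{-1}(b)\bigr)
=\inf_{I}L\bigl(f_n\circ\phi_n,I\bigr)
=\inf_{I}L\bigl(f_n,\phi_n(I)\bigr)
=\inf_{J}L\bigl(f_n,J\bigr)
=d_{f_n}(a,b),
\]
where I runs over all curves in Δ joining ϕ_n^{-1}(a) and ϕ_n^{-1}(b), and J=ϕ_n(I) then runs over all curves in Δ joining a and b.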
If a∈α_01^∘, then for every positive number ε≤4δ_a (see Condition <ref>) we introduce: α_01,ε is the largest connected and compact neighborhood of α_01 in ∂Δ such that ( f_0,α_01,ε\α_01) ⊂D( {q_01,q_02},ε) . (i) For any arc γ on ∂Δ with distinct endpoints, we have L(f_n,γ)=L(γ)/2πL(f_n,∂Δ)>0,n=0,1,2,… , and for any two arcs γ_1 and γ_2 on ∂Δ with L(γ_1)≤ L(γ_2) we have L(f_n,γ_1)≤ L(f_n,γ_2),n=0,1,2,…. (ii) If {x_n} and {y_n} are two sequence of points on ∂Δ such that lim_n→∞d( x_n ,y_n) =0, then lim_n→∞d_f_n( x_n,y_n) =0. (iii) If the given point a is contained in α_01^∘, then for the points a^',a^'',q_n^',q_n^'',n=0,1,2,…, in Condition <ref> we have, lim_n→∞L(α_n1( a_n1,a^') ) =L(α_01( a_01,a^') )>4δ_a, lim_n→∞L(α_n1( a^'',a_n2) ) =L(α_01( a^'',a_02) )>4δ_a; lim_n→∞L(c_n1( q_n1,q_n^') ) =L(c_01( q_01,q_0^') )>4δ_a, lim_n→∞L(c_n1( q_n^'',q_n2) ) =L(c_01( q_0^'',q_02) )>4δ_a. (iv) If a∈α_01^∘and {b_n}_n=1^∞is a sequence in (∂Δ)\α_01,3δ_a^∘ with lim_n→∞d_f_n( a,b_n) →0, then for sufficiently large n, d_f_n( a,b_n) <min{δ_0, inf{ L(f_n,γ_0( a,b_n) } _n=0^∞), where γ_0(a,b_n) is the shorter[Since ( f_n,∂Δ) are parametrized by length, this also means L(f_n,γ_0)≤ L(f_n,( ∂Δ) \γ_0).] arc of ∂Δ with endpoints a and b_n. In fact, (i)–(iii) follows from (A)–(D), and Conditions <ref> and <ref>. Note that Condition <ref> (b) holds for n=0,1,…. Assume that the given point a is contained in α_01^∘. Then for the sequence { b_n} _n=1^∞⊂ (∂Δ)\α_01,3δ_a^∘, we have by (<ref>) that L(γ_0( a,b_n) )>4δ_a, and thus by Remark <ref> we have L( f_n,γ_0( a,b_n) ) >4δ_a /2πL(f_n,∂Δ)>0 for n=0,1,…,n. Hence, by (<ref>), we have inf{ L(f_n,γ_0( a,b_n) } _n=0^∞>0, which implies (<ref>) for sufficiently large n. Step 1. Some useful results inspiring Theorem <ref> In this step we will prove (<ref>) in some special cases, and we will deduce some contradictions under the condition that (<ref>) fails. These simple results inspire Theorem <ref>, though the complete proof of Theorem <ref> is very complicated. For any two points x and y on ∂Δ with f_0(x)≠ f_0(y), (<ref>) holds, say, lim_n→∞inf d_f_n(x,y)≥ d(f_0(x),f_0(y))>0. By (B)–(D) we have d_f_n(x,y)≥ d(f_n(x),f_n(y))→ d(f_0(x),f_0(y)), and then we have (<ref>). For the given point a in α_01, if there exists a number δ>0 such that each surface Σ_n contains a disk[See Definition <ref>.] ( a,U_n^δ) of radius δ, then (<ref>) holds for all b∈( ∂Δ) \{a}. This follows from Lemma <ref>. In fact we can write ∂ U_n^δ=α_1,n^δ+α_2,n^δ +α_3,n^δ=α_1,n^δ( a_1,n^δ,a) +α_2,n^δ( a,a_2,n^δ) +α_3,n ^δ( a_2,n^δ,a_1,n^δ) as in Lemma <ref>, so that -α_1,n^δ and α _2,n^δ are the boundary radius and α_3,n^δ is the new boundary (see Definition <ref> and Remark <ref> for the radius, the boundary radius, the new and the old boundary of a disk of a surface). For j=1,2, since the spherical distance of the two endpoints of c_j,n ^δ=( f_n,α_j,n^δ) equals δ, we have, by (B)–(D), that c_j,n^δ=( f_n,α_j,n^δ) converges uniformly to an SCC arc c_j,0^δ, and thus α_j,n^δ converges to an arc α_j,0^δ on ∂Δ, for j=1,2. It is clear that for any r∈(0,1], ( a,U_n^δ) contains the disk ( a,U_n^rδ) of radius rδ of Σ_n, the corresponding α_j,n^rδ are well defined with ∂ U_n^rδ=α_1,n^rδ+α_2,n^rδ +α_3,n^rδ for j=1,2,3 and n=0,1,2,…, and the above argument applies when δ is replaced by rδ. Moreover, α_1,0^rδ+α_2,0^rδ uniformly converges to the point a as r→0. Thus, for any b∈( ∂Δ) \{a}, there exists r∈(0,1) such that b∉α_1,0^rδ+α_2,0^rδ. 
Then for sufficiently large n, b∉α_1,n^rδ+α_2,n^rδ, which implies b∉( a,U_n^rδ) , and thus by Lemma <ref> we have d_f_n( a,b) >rδ>0. Therefore (<ref>) holds for all b∈∂Δ with b≠ a. Lemma <ref> can be used to show Lemma <ref>, which state that (<ref>) holds when a∈α_01^∘ and α_01 either is strictly convex or is straight and L(c_01)>π. Lemma <ref> is the second key ingredient of the proof of Theorem <ref>. By Lemma <ref> and Condition <ref> we have the following lemma. If the given point a is contained in α_01^∘, then for any b∈α_01,4δ_a\{a} lim_n→∞inf d_f_n( a,b) ≥ d( f_0(a),f_0(b)) >0. Let a∈α_01^∘ and let b_n be a sequence in ( ∂Δ) \α_01,3δ_a^∘ which satisfies (<ref>). Then for sufficiently large n and the d_f_n-shortest path I_n=I_n( a,b_n) from a to b_n I_n∩Δ≠∅ but I_n∩Δ∩ f_n ^-1(E_q)=∅. Let γ_0( a,b_n) be the shorter arc of ∂Δ with ∂γ_0={a,b_n}. Then for sufficiently large n, (<ref>) holds. If I_n∩Δ=∅ for some n=n_0 which is so large that (<ref>) holds for this n_0, then we have I_n_0⊂∂Δ and thus d_f_n_0( a,b_n_0) =L( f_n_0,I_n_0 ) ≥ L( f_n_0,γ_0( a,b_n_0) ) ≥min{δ_0, inf{ L(f_n,γ_0( a,b_n) } _n=0^∞), contradicting to (<ref>). Thus, when n is large enough, I_n∩Δ≠∅, and for any x∈ I_n∩Δ, we have d_f_n( x,∂Δ) ≤ L(f_n,I_n)=d_f_n( a,b_n) <δ_0, and so by (<ref>) we have I_n∩Δ∩ f_n^-1(E_q )=∅. (The first key step for the proof of Theorem <ref>) Let Σ_n=( f_n,Δ) be the precise extremal sequence satisfying all conditions (A)–(D) in Theorem <ref>. Assume that the following additional condition hold: (E) a∈α_01^∘, {b_n} is a sequences in ( ∂Δ) \α_0,3δ_a^∘, and for the d_f_n-shortest path I_n=I_n( a,b_n) from a to b_n, lim_n→∞inf d_f_n( a,b_n) =lim _n→∞inf( f_n,I_n) =0. Then Σ_n=( f_n,Δ) contains a subsequence which is still denoted by Σ_n=( f_n,Δ) such that (i) I_n∩Δ≠∅, and I_n has a partition I_n=I_n1( a,b_n1) +I_n2( b_n1,b_n2) +…+I_n,2k+1( b_n,2k,b_n) given by Lemma <ref> (v): I_n,2j-1⊂∂Δ for j=1,…,k+1, and I_n,2j^∘⊂Δfor j=1,…,k. Moreover, k≥1 is independent of n. (ii) I_n,2j^∘∩ f_n^-1( E_q) =∅, f_n restricted to a neighborhood of I_n,2j^∘ is a homeomorphism, ( f_n,I_n,2j) is a straight arc on S,j=1,…,k, and moreover I_n2 divides Δ into two Jordan domains Δ_n1 and Δ_n2, say, I_n2 divides Σ_n into two surfaces F_nj=( f_n,Δ_nj) ,j=1,2. (iii) I_n1 is a point. Thus I_n2=I_n2( b_n1,b_n2) =I_n2( a,b_n2) =I_n( a,b_n2) . (iv) b_n2∈{a_nj}_j=3^m and F_nj are both contained in ℱ_r( L,m) , provided that { b_n,b_n2}∩{a_nj}_j=1^m≠∅. (v) F_nj are both contained in ℱ_r( L,m) , provided that b_n2∉( α_nm+α_n1+α_n2) ^∘ (vi) Σ_n is decomposable in ℱ_r(L,m), provided that (<ref>) or (<ref>) holds for each n=1,2,…. By (<ref>), taking subsequence, we may assume that (<ref>) holds. Then (i) follows from Lemma <ref> and Lemma <ref>. By Lemma <ref>, we have I_n,2j∩ f^-1(E_q)=∅ for sufficiently large n and each j=1,…,2k. Thus (ii) follows from Lemma <ref>, and τ_n=( f_n,I_n2) is a simple and straight arc on S. By (<ref>) and (<ref>), omitting a finite number of terms of the sequence {I_n}, we may assume L(f_n,I_n1( a,b_n1) )<δ_a<min{L( c_n1(q_n1,f_n(a)) ),L( c_n1(f_n(a),q_n2) )}. Then by I_n1⊂∂Δ we have ( f_n,I_n1) ⊂ c_n1^∘and I_n1⊂α_n1^∘, and so I_n1∩{a_nj}_j=1^m=∅. Then by Lemma <ref> (vii) I_n1={a} is a singleton and (iii) is proved. By (<ref>) and (ii) we have lim_n→∞d_f_n( b_n1,b_n2) =lim _n→∞d_f_n( a,b_n2) =lim_n→∞f( a) f( b_n2) =0. Assume b_n2∈{a_nj}_j=1^m. For sufficiently large n, by (<ref>) we have b_n2≠ a_n1,a_n2, since d( f_n( a) ,{ f_n( a_n1) ,f_n( a_n2) }) >4δ_a by (<ref>); and thus b_n2∈{a_nj}_j=3^m. 
Assume b_n ∈{a_nj}_j=1^m. Then we have b_n2∈{a_nj}_j=1^m as well by Lemma <ref>, and then for sufficiently large n we also have b_n2∈{a_nj}_j=3^m, by (<ref>). Now assume (<ref>) holds. Then by the above discussion we may assume b_n2∈{a_nj}_j=3^m, and then both of the two arcs of ∂Δ with common endpoints a and b_n2 contain at least two points of {a_nj}_j=1^m, and thus F_n1 and F_n2 are both contained in ℱ_r(L,m) by Lemma <ref> (ix), and (iv) is proved completely. If (<ref>) holds, then each of the two arcs of ∂Δ with common endpoints a and b_n2 contains at least two points of {a_nj}_j=1^magain. Thus we also have F_nj∈ℱ _r(L,m),j=1,2, by Lemma <ref> (ix), and (v) is proved. Now assume (<ref>) holds. Then F_n1 and F_n2 are both contained in ℱ_r(L,m). On the other hand by (i) and (ii) and (<ref>) we have R(F_n1)+R(F_n2)=R(Σ_n), and L(∂ F_n1)+L(∂ F_n2)=L(∂Σ_n)+2L(τ_n), with lim_n→∞L(τ_n)=lim_n→∞L(f_n ,I_n)=0. Let γ_nj,j=1,2, be the two arcs of ∂Δ with endpoints a and b_n2 such that γ_nj=( ∂Δ_nj) ∩∂Δ. Then by (iv) and (<ref>) we have lim_n→∞L(∂ F_nj) =lim_n→∞( L(f_n,γ_nj)+L(τ_n)) ≥min{L( c_n1(q_n1,f_n(a)) ),L( c_n1 (f_n(a),q_n2) )}≥4δ_a,j=1,2. Therefore, Σ_n is decomposable in ℱ_r(L,m)by Definition <ref> (b). If (<ref>) holds for each large enough n, the above argument can be used to show that Σ_n is also decomposable in ℱ_r(L,m). This completes the proof of Lemma <ref>. For any a∈α_01^∘ and b∈∂Δ\α_0,3δ_a^∘, if the interior I_n ^∘ of the d_f_n-shortest path I_n=I_n( a,b) is contained in Δ and if b∈∂Δ\( α_nm+α_n1+α_n2) ^∘. Then lim _n→∞inf d_f_n( a,b) >0. If lim_n→∞inf d_f_n( a,b) =0, then we have a contradiction by Lemma <ref> and Lemma <ref> (vi) with b_n=b for every n=1,2,… Step 2. Proof of Theorem <ref> (i) in a special case This step is to prove the following Lemma, which is the second key to Prove Theorem <ref> (i). (The second key to the proof of Theorem <ref> (i)) For any j=1,2,…,m, if L(c_0j)>π, or if L(c_0j)>0 and c_j is strictly convex, then lim_n→∞inf d_f_n(x,b)>0 for every x∈α_0j^∘ and every b∈( ∂Δ) \{x}. By Remark <ref> and Lemma <ref>, it suffices to prove Lemma <ref> for j=1, x=a∈α_01^∘ (the fixed point) and each b∈( ∂Δ) \α_01,3δ_a(see Condition <ref>). Let C_nj be the circle determined by c_nj, for j=1,… ,m,n=0,1,…. If c_01 is strictly convex, then we may assume, by taking subsequence of Σ_n, that all c_n1 are strictly convex, and then c_n,a+q_n^''q_n^' encloses a closed domain T_n, for all n=0,1,…, and it is clear that q_n^''q_n^' divides the closed disk enclosed by C_n1 into two lunes, and T_n is the closure of the lune on the right hand side of q_n^'q_n^''(see Condition <ref> for the notation c_n,a=c_n,a( q_n^',q_n ^'')). If c_01 is straight, then by the assumption L(c_01)>π and Condition <ref>, all c_n,a,n=0,1,2,…, have length >π and thus d( q_n^',q_n^'') <π, and then c_n,a+q_n^''q_n^' also encloses a closed and convex domain T_n, and T_n is in fact a closed hemisphere with q_n^'q_n^''⊂ C_n1 when c_n1 and c_01 are straight. For n=0,1,2,…, let θ_n be the interior angle of T_n at the cusps q_n^' and q_n^'' and for each θ∈0,θ_n] let c_n,a,θ be the circular arc in T_n from q_n^' to q_n^'' so that the closed domain T_n,θ enclosed by c_n,a-c_n,a,θ has interior angle θ at the cusps q_n^' and q_n^''. Then T_n,θ⊂ T_n, T_n,0 is just the arc c_n,a=c_n,a,0 and T_n,θ_n=T_n, for n=0,1,2,... It is clear that c_n,a,θ is strictly convex when θ∈(0,θ_n), whether c_01 is straight or not. 
On the other hand by Lemma <ref> we may assume that θ_n≥θ_0/2={[ π/2, if c_01 is straight,; θ_0/2>0, if c_01 is strictly convex, ]. n=1,2,… Note that each c_nj is convex and so it is either straight or strictly convex. Since α_a given in Condition <ref> is a compact subarc of all α_n1^∘,n=0,1,2…, by Definition of ℱ_r(L,m) and Lemma <ref>, for each n=1,2,…, and sufficiently small θ=θ(n)∈(0,θ_n], f_n^-1 has a univalent branch g_n,θ defined on the closed domain T_n,θ such that g_n,θ( c_n,a) =α_a. Let θ_n^∗ be the largest positive number in (0,θ_n] such that g_n,θ is well defined for every θ∈(0,θ_n^∗), say, g_n,θ is a univalent branch of f_n^-1 defined on T_n,θ with g_n,θ( c_n,a,0) =α_a. Then it is clear that g_n,θ^' equals g_n,θ^'' on T_n,θ ^'⊂ T_n,θ^'' for every pair of θ^' and θ^'' with 0<θ^' <θ^''<θ_n^∗, and then f_n^-1 has a univalent branch g_n,θ_n^∗ defined on T_n,θ_n^∗ \ c_n,a,θ_n^∗^∘with g_n,θ_n^∗ ( c_n,a) =α_a. This, together with Lemma <ref>, implies that g_n,θ_n^∗ can extend to a univalent branch of f_n^-1 defined on the closed Jordan domain T_n,θ_n^∗. We still use g_n,θ_n^∗ to denote the extension and let α_a,θ_n^∗=g_n,θ_n^∗ ( c_n,a,θ_n^∗) , which is a simple arc in Δ from a^' to a^''. We summarize this by a claim: f_n:D_n=g_n(T_n,θ_n^∗)→ T_n,θ_n^∗ is a homeomorphism and ∂ D_n=α _a-α_a,θ_n^∗ is a Jordan curve, and thus D_n^∘∪α_a=D_n\α_a,θ_n^∗^∘ contains no branch point of f_n. Since α_a=α(a^',a^'') is a compact subarc of all α_n1^∘,n∈ℕ, by Lemma <ref> and Claim <ref>, we have For each n∈ℕ,α_n1^∘ has a neighborhood N_n in Δ such that f_n:D_n∪ N_n→ T_n,θ_n^∗∪ f_n( N_n) is also a homeomorphism. Thus, if θ_n^∗∈( 0,θ_n) , the part of α_a,θ_n^∗^∘ near its endpoints a^' and a^'' is contained in N_n^∘ ⊂Δ. We will prove θ=lim_n→∞infθ_n^∗>0. We first show that this implies Lemma <ref>. Since θ_n^∗>0 for each n≥1, (<ref>) implies θ_n^∗>φ_0 for some φ_0>0 and all n≥1. By Lemma <ref> T_n,θ _n and T_n,φ_0 converge to T_0,θ_0 and T_0,φ_0. Thus it is clear that there is a positive number δ_1<{δ_0,δ_a} such that for the δ_1-neighborhood V_n=D( f_n (a),δ_1) ∩ T_n,φ_0 of f_n(a) in T_n,φ _0and large enough n, V_n is the part of the disk D( f_n(a),δ_1) on the left hand side of c_n,awith D( f_n(a),δ_1) ∩ c_n,a⊂ V_n and V_n does not intersects c_n,a,φ_0. Thus, when we choose δ_1 small enough, (a,U_n) with U_n=g_n,θ_n^∗(V_n) is a disk of Σ_n with radius δ_1, U_n⊂ D_n and U_n∩∂Δ⊂α_a, but (a,U_n) depends on n (see Definition <ref>). Therefore by Lemma <ref>, (<ref>) holds, and thus (<ref>) implies Lemma <ref>. Now return to prove (<ref>), and assume that it fails. Then, by (<ref>), we may assume θ=lim_n→∞θ_n^∗ =0 and θ_n^∗<inf{θ_n}_n=1^∞ for all n, and then for sufficiently large n and each point y in T_n,θ_n^∗, there exists a line segment I_n,y=yy^∗⊂ T_n,θ_n^∗ of length <θ_n^∗π for some y^∗∈ c_n,a. Thus we have d_f_n( x,∂Δ) ≤ d_f_n( x,α _a) ≤ L(I_n,f_n(x))<θ_n^∗π→0 for all x∈ D_n\α_a and sufficiently large n, and thus by (<ref>) we may assume that D_n∩Δ∩ f_n^-1(E_q)=∅ for all n=1,2,…, and so f_n has no any branch point in D_n∩Δ. Thus α_a,θ_n^∗∩Δ∩ f_n^-1(E_q)=∅ for all n=1,2,…. Since 0<θ_n^∗<θ_n, when α_a,θ_n^∗ ^∘∩∂Δ=∅, f_n is homeomorphic in a neighborhood of α_a,θ_n^∗ in Δ by Claims <ref> and <ref>, and so θ_n^∗ can be enlarge, which contradicts the definition of θ_n^∗. 
Hence α _a,θ_n^∗^∘∩∂Δ is not empty, which together with Lemma <ref> (iv) implies that α_a,θ_n^∗ ^∘∩∂Δ is a finite set contained in {a_j}_j=1 ^m, and thus we have by Claim <ref> α_a,θ_n^∗∩∂Δ={a^',a^''}∪{a_ni_j}_j=1^k-1 in which {a_ni_j }_j=1^k-1=α_a,θ_n^∗^∘∩∂Δ for some k≥2 is a subset of {a_nj}_j=1^m, and a^'',a_ni_1,a_ni_2,…,a_ni_k-1,a^' are arranged on ∂Δ anticlockwise. It is clear that {a_ni_j}_j=1^k-1⊂ D_n since D_n is closed, and thus by (<ref>) d_f_n( a_ni_j,a_α) <θ_n^∗π→0 for every j=1,…,k-1. On the other hand, we have d_f_n( {a_n1,a_n2},α_a) ≥ d( {q_n1,q_n2},c_α) >4δ_a by (<ref>). Thus for large enough n we have a_ni_j∉{a_n1,a_n2} for each j=1,…,k-1. Then α_n,θ_n^∗ cut Δ into k+1 components {Δ_nj} _j=0^k of which Δ_n0 is on the left hand side of -α_n,θ_n^∗ and Δ_nj ,j=1,…,k, are on the right hand side of -α_n,θ_n^∗, and the intersections γ_j=∂Δ_nj∩∂Δ,j=1,…,k, are arcs of ∂Δ arranged on ∂Δ anticlockwise. We may assume k is independent of n. Since c_a,n and c_a,θ_n^∗ are both convex and share the same endpoints {q_n^',q_n^''}, c_a,n is on the convex circle C_n1and c_a,θ_n^∗^∘ is in the domain enclosed by C_n1, we have L(f_n,α_a,θ_n^∗)=L(c_a,θ_n^∗)<L(c_a,n). Then k,α_a,α_n1,-α_a,θ_n^∗,Σ _n,a^'',a^',{{a_i_j}} _j=1 ^k-1satisfy (a) (b) (c) (d) (e1) of Lemma <ref> as k,γ _0,γ_0^',I,Σ,b_2,b_2k+1,{I_2j-1}_j=2^k there, but here all I_2j-1={a_i_j} are points. Then by Lemma <ref>, Σ_nj=( f_n,Δ_nj) ∈ℱ_r( L,m) for j=1,2,…,k. It is clear that by Claim <ref> and (<ref>), ∑_j=1^kR(Σ_nj) =R(Σ_n)-( q-2) A(T_n,θ_n^∗), ∑_j=1^kL(∂Σ_nj) ≤ L(∂Σ)-L(c_a,n )+L(c_a,θ_n^∗)<L(∂Σ), with A(T_n,θ_n^∗)→0 as n→∞. On the other hand it is clear that α_n1( a^'' ,a_n2) ⊂∂Δ_n1 and by (<ref>), L(f_n,α_n1( a^'',a_n2) )=L(c_n1( q_n^'',q_n2) )≥4δ_a. Thus L(∂Σ_n1)>4δ_a and, for the same reason, L(∂Σ_nk)>4δ_a. Hence Σ_n is decomposable by Definition <ref> with (b), which contradicts Lemma <ref>. We have proved (<ref>), and then Lemma <ref> is proved completely. Step 3. Preliminary discussion in the case a∈α_01^∘ The purpose of this and next steps is to prove Assertion <ref> introduced later, which deduce Theorem (i) in the case a∈α_01^∘. By Condition <ref> and Lemma <ref>, to prove Theorem <ref> (i), it suffices to prove the following assertion. For the fixed a∈α_01 and any b∈( ∂Δ) \{a}, we have lim_n→∞inf d_f_n(a,b)>0. We first prove this assertion under the condition a∈α_01^∘. Then by Lemma <ref>, to prove Assertion <ref> under (<ref>), it suffices to prove (<ref>) holds when a∈α_01^∘ and b∈( ∂Δ) \α_01,4δ_a^∘. Assume that (<ref>) holds. Then we may assume that α_n1 ⊂α_01,δ_a for all n (see Notation <ref> for α_01,δ_a), since this is true for sufficiently large n. By Lemma <ref>, for each n, there exists a point b_n in ( ∂Δ) \α_01,3δ_a^∘ such that d_f_n(a,b_n)=min_x∈( ∂Δ) \α_01,3δ_a^∘d_f_n(a,x). By Lemma <ref>, the d_f_n-shortest path I_n=I_n( a,b_n) with b_n∈( ∂Δ) \α_01,3δ_a^∘ exists. Then, to prove Assertion <ref>, it suffices to prove the following assertion. When the fixed a is contained in α_01^∘, lim_n→∞inf L(f_n,I_n( a,b_n) )=lim_n→∞inf d_f_n(a,b_n)>0. To prove Assertion <ref>, we may assume that lim_n→∞b_n=b_0∈( ∂Δ) \α_01,3δ_a^∘, exists and thus, by Condition <ref> (b), we have a∈α_n1^∘∩α_01^∘andb_n ∈( ∂Δ) \α_01,3δ_a^∘, for n=0,1,2,… Assume Assertion <ref> fails. Then by Lemma <ref> (ii) we may assume, by taking subsequence, that lim_n→∞d_f_n( a,b_0) =lim_n→∞d_f_n(a,b_n)=lim_n→∞L(f_n,I_n( a,b_n) )=0. 
If b_0∈α_01,4δ_a, then by (<ref>) we have b_0 ∈α_01,4δ_a\{a}, and then by Lemma <ref> we have lim_n→∞inf d_f_n( a,b_0) >0. This contradicts (<ref>), and so b_0∈( ∂Δ) \α_01,4δ_a, and so, by taking subsequence, (<ref>) can be enhanced to be a∈α_n1^∘∩α_01^∘andb_n ∈( ∂Δ) \α_01,4δ_a , for n=0,1,2,… By (<ref>) and Lemma <ref>, we have I_n∩Δ≠∅, but I_n∩Δ∩ f_n ^-1(E_q)=∅ for n large enough, and then by Lemma <ref>, taking subsequence, we conclude that I_n has a partition I_n=I_n1( a,b_n1) +I_n2( b_n1,b_n2) +I_n3( b_n2,b_n3) +⋯+I_n,2k+1( b_n,2k ,b_n,2k+1) (with b_n,2k+1=b_n) satisfying all conclusions of Lemma <ref> for each n, in which k≥1 is independent of n. Then I_n1is the point a in α_01^∘ ,I_n2^∘⊂Δ, ( f_n,I_n2) is straight, ∂ I_n2=I_n2∩∂Δ, and I_n2^∘∩ f^-1(E_q)=∅,n=1,2,…, It is clear that I_n2 divides Δ into two Jordan domains Δ_1n and Δ_n2 and we let Δ_n1 be the one on the right hand side of I_n2, and write F_nj=( f_n,Δ_nj) ,j=1,2. We may assume lim_n→∞b_n2=b_2 for some b_2∈∂Δ. Now, it is clear that to prove Assertion <ref>, by taking subsequence, we may assume that one of the following holds: Case 1. k=1and b_n2∈{a_nj}_j=1^m for each n=1,2,…, or k≥2. Case 2. k=1and b_n2∉{a_nj}_j=1^m, for each n=1,2,…, but b_2∈{a_0j}_j=1^m. Case 3. k=1and b_n2∉{a_nj}_j=1^m, for each n=1,2,…, and b_2∉{a_0j}_j=1^m. Step 4. Case 1, 2, or 3 implies a contradiction We will deduce a contradiction in each of Cases 1–3. Step 4.1 Discussion of Case 1 Case 1 cannot occur. If k>1 then b_n2∈{a_nj}_j=1^m by Lemma <ref>. Thus {b_n,b_n2} satisfies (<ref>) in Case 1, and so by Lemma <ref> (vi), Σ_n is decomposable in ℱ_r( L,m) , contradicting to Lemma <ref>. Step 4.2 A general discussion of Case 2 and Case 3 Now assume that Case 2 or Case 3 occurs. Then k=1. We may assume I_n3 is a point, for otherwise by Lemma <ref> b_n2∈{a_nj}_j=1^m and Case 1 occurs. Then b_n2=b_n3=b_n and thus, by Claim <ref>, for n=1,2,…, I_n=I_n( a,b_n) =I_n2=I_n2(b_n1,b_n2)=I_n2 (a,b_n), and then by (<ref>) we have b_n2=b_n∈( ∂Δ) \α_01,4δ_a. If b_n2∉( α_nm+α_n1+α_n2) ^∘holds for infinitely many n, then by (<ref>) and Lemma <ref> (v) Σ_n is decomposable in ℱ( L,m) , contradicting Lemma <ref>. We then can assume, for each n,b_n=b_n2is in α_n2^∘\α_01,4δ_a or α_nm^∘\α_01,4δ_a, and thus we can further assume b_n=b_n2∈α_n2^∘\α_01,4δ_a ,n=1,2,…. Then we have, b_0=b_2=lim_n→∞b_n=lim_n→∞b_n2 ∈α_02\α_01,4δ_a^∘,n=1,2,…. By the way we see that α_02, as the limit of α_n2, is not a point, for otherwise α_n2\α_01,4δ_a is empty for all large enough n. By definition of ℱ_r( L,m) ,f_n is homeomorphic in a neighborhood of α_nj^∘ for each j=1,…,m, which with (<ref>) and (<ref>) implies that f_n is homeomorphic in some neighborhoods of the endpoints of I_n=I_n( a,b_n) . On the other hand, for large enough n, f_n is homeomorphic in a neighborhood of I_n2^∘by Lemma <ref> (ii). Therefore by (<ref>) we conclude that f_n is homeomorphic in a neighborhood of I_n2( a,b_n2) =I_n( a,b_n) in Δ and I_n2^∘∩ f_n^-1(E_q)=∅. Step 4.3 Complete the discussion of Case 2 Case 2 implies a contradiction. Now assume Case 2 occurs. Then by the condition of Case 2 and (<ref>), b_n=b_n2→ b_2=a_03∈α_02\α _01,4δ_a^∘ as n→∞, which with the condition lim_n→∞a_n3=a_03 implies, for sufficiently large n, a_n3∈α_n2\α_01,3δ_a^∘. By (<ref>) and the condition lim_n→∞a_n3=a_03 we have lim_n→∞d( b_n,a_n3) =0, which with Lemma <ref> (i) implies lim_n→∞d_f_n( b_n,a_n3) =0. Thus by (<ref>) we have lim_n→∞d_f_n( a,a_n3) ≤lim _n→∞d_f_n( a,b_n) +lim_n→∞d_f_n( b_n,a_n3) =0. 
Hence by (<ref>) and Lemma <ref> (vi), by taking subsequence, we may assume that the d_f_n-shortest path Ĩ_n=Ĩ_n( a,a_n3) from a to a_n3 has a partition Ĩ_n=Ĩ_n1( a,b̃_n1) +Ĩ _n2( b̃_n1,b̃_n2) +Ĩ_n3( b̃_n2,b̃_n3) +⋯+Ĩ_n,2k̃ +1( b̃_n,2k̃,a_n3) . satisfying all conclusions of Lemma <ref> for each n, in which 1≤k̃≤ m+1 is independent of n. Then {a_n3,b̃ _n2}∩{a_nj}_j=1^m≠∅, say (<ref>) holds, and thus by Lemma <ref> (vi) Σ_n is decomposable in ℱ _r( L,m) , contradicting to Lemma <ref> . Step 4.4 Discussion of Case 3: (1) Case 3 implies Condition <ref> Now we assume that Case 3 occurs. Then by Condition of Case 3 and (<ref>) we have b_2∈α_02^∘ and thus by (<ref>) we have b_2=lim_n→∞b_n2∈α_02^∘\α_01,4δ_a, and then by (<ref>) we may assume { b_2,b_n2}⊂[ α_02^∘∩α_n2^∘] \α_01,4δ_a,n=1,2,…. Then a≠ b_2,b_n2 for n=1,2,…. By (<ref>) and (<ref>), we have lim_n→∞d_f_n( a,b_2) =0. Then by Lemma <ref> we have f_0(a)=f_0(b_2), and then f_0( a) =f_0( b_2) ∈ c_01^∘∩ c_02^∘. Thus by (<ref>), (<ref>) and Lemma <ref>, we may assume c_0j=( f_0,α_0j) are both straight and L(c_0j)≤π, for j=1,2. Then Case 3 can be discussed under the following condition. Both c_01 and c_02 are straight, f_0(a)=f_0 ( b_2) ∈ c_01^∘∩ c_02^∘, L(c_01 )≤πand L(c_02)≤π. Step 4.5 Discussion of Case 3: (2) The closed Jordan domains K_n1, K_n2 and K_n,ψ_n^∗ Let C_nj be the circle determined by c_nj for n=0,1,…, and j=1,2. Then C_01 and C_02 are great circles on S. By (<ref>), we have f_n(a)∈ c_n1\ D( { q_n1,q_n2} ,3δ_a) → c_01\ D( { q_01,q_02} ,3δ_a) ∋ f_0(a) as n→∞, which implies that f_0(a) and q_02 are not antipodal on S by Condition <ref>. Then we have {q_02 ,f_0(a)}⊂ C_01∩ C_02, which implies C_01=-C_02, say, c_01+c_02 is folded at q_02. Let A_n=f_n(a)and B_n =f_n(b_n). Then τ_n=( f_n,I_n) =( f_n,I_n2) =A_nB_n→{f_0(a)} by (<ref>), and we see that, by (<ref>), α_n2^∘ \α_01,3δ_a is an open arc of α_n2^∘ containing b_n (see Notation <ref> for α_01,3δ_a). Then by (<ref>) and Condition <ref> we must have τ_n⊥ c_n2 at B_n, where τ_n⊥ c_n2 indicates that τ_n intersects c_n2 at B_n perpendicularly. Since τ_n⊥ c_n2at B_nand τ_nis on the left hand side of c_n1and c_n2, we see that η_n=c_n1( A_n,q_n2) +c_n2( q_n2 ,B_n) -τ_n( A_n,B_n) is a convex topological triangle, enclosed by the two convex circular subarcs of c_n1 and c_n2 and the line segment τ_n. It is clear that η_n converges to the folded arcs f_0( a) f_0( a_02) +f_0( a_02) f_0( a) as n→∞. Then c_n1+c_n2 is strictly convex at q_n2, the interior angle of η_n at A_n is almost π/2, C_n1 and C_n2 enclose a "thin" and convex closed lens K_n which is on the left hand side of c_n1+c_n2, and τ_n divides K_n into two closed Jordan domains K_n1 and K_n2 such that K_n1 is on the right hand side of τ_n, for n=1,2,…. Then ∂ K_n1=η_n and both K_n1 and K_n2 are contained in some open hemispheres, respectively. It is clear that one of the two cusps of K_n is q_n2, and we let q_n2^∗ be the other cusp. Assume the interior center of C_n2, the center on the left hand side of C_n2, is P_nand C_n2=∂ D(P_n,R_n). Since the orientation of C_n2 is given by c_n2 and all c_nj are convex for all n=0,1,…, and all j=1,2,…,m, we have R_n≤π/2. Let R_n,ψ be the radius P_nB_n,ψ of C_n2 such that B_n,ψ is the point on the arc ∂ K_n∩ C_n2 so that the angle between the radius P_nq_n2 and P_nB_n,ψ at P_n equals ψ, A_n,ψ be the intersection of R_n,ψ and c_n1, and write R_n,ψ_n=P_n q_n2^∗and R_n,ψ_n,a=P_nB_n.Then R_n,0=P_nq_2n and 0<ψ_n,a<ψ_n. 
Let τ_n,ψ=τ_n,ψ( A_n,ψ,B_n,ψ) =A_n,ψB_n,ψ=K_n∩ R_n,ψ,ψ∈0,ψ _n], and K_n,ψ=∪_θ∈0,ψ]τ_n,θ. Then τ_n,0={q_n2}, τ_n=τ_n,ψ_n,a=A_n,ψ_n,aB_n,ψ_n,a =A_nB_n, K_n,ψ_n,a=K_n1 and K_n,ψ_n=K_n. By the way we see that I_n is an f_n-lift of τ_n,ψ_n,a=τ_n, and K_n,ψ is a closed Jordan domain for each ψ∈0,ψ_n], with an exception at ψ=0: K_n,0={q_n2}. Since C_n1 and C_n2 tend to the great circles C_01and C_02 with C_02=-C_01, we have lim_n→∞max_ψ∈0,ψ_n]L(τ_n,ψ)→0. Then we may assume K_n is so thin that max_w∈ K_nd(w,∂ K_n)<δ_0. Now we can prove the following For sufficiently large n, the restriction f_n1=f_n |_Δ_n1 is a homeomorphism onto K_n,ψ_a=K_n1, that is to say, F_n1=( f_n,Δ_n1) is the simple closed domain K_n1. By Condition <ref> and Definition of ℱ_r(L,m), f_n restricted to ∂Δ_n1=-I_n2( a,b_n) +α _n1( a,a_n2) +α_2( a_n2,b_n) , with b_n=b_n2, is a homeomorphism onto ∂ K_n1 and f_n has no branch point on ( ∂Δ_n1) \{a_n2}. Recall that we are still in Case 3, and so I_n( a,b_n) =I_2n( a,b_n2) . On the other hand, f_n is locally homeomorphic on ( Δ\ f^-1( E_q) ) ∪( ∂Δ) \{a_nj}_j=1^m. Then f_n1=f_n|_Δ_n1 is homeomorphic in a neighborhood of ( ∂Δ_n1) \{a_n2} in Δ_n1 and ( f_n1,Δ_n1) ∈ℱ_r( L,3), and then f_n1(Δ_n1)⊃ K_n1. The set of branch values of f_n1 in K_n1\{c_n2} is contained in E^'=K_n1^∘∩ E_q. If E^' ≠∅, let ψ∈(0,ψ_n,a] decrease continuously from ψ_n,a to 0 so that τ_n,ψ first meets a value Q∈ E^' for some ψ=ψ^'∈( 0,ψ_n,a) . Then by Lemma <ref> we see that f_n^-1 has a univalent branch defined on the closed Jordan domain K_n1^'=∪_ψ∈ψ^',ψ_n,a]τ_n,ψ and so τ_n,ψ^' has an f_n-lift in Δ_n1 whose endpoints are on α_n1^∘ and α_n2^∘, but interior in Δ _n1, and thus f_n^-1(E_q)∩Δ_n1⊃ f_n^-1 (Q)∩Δ_n1≠∅, which with (<ref>) implies that d_f_n( f_n^-1(E_q) ∩Δ,∂Δ)≤ d_f_n( f_n^-1(Q) ∩Δ_n1,α_n1+α _n2)<L(τ_n,ψ^')<δ_0, for sufficiently large n, contradicting (<ref>). Then for sufficiently large n, f_n has no any branch value in K_n1\{q_n2}, and thus f_n^-1 has a univalent branch defined on K_n1\{q_n2}. Therefore Claim <ref> follows from Lemma <ref>. By Condition <ref> we may discuss Case 3 under the following condition. For each n=1,2,…, there exists φ_n∈(ψ_n,a ,ψ_n), such that for every ψ∈(ψ_n,a,φ_n],τ _n,ψ has a unique f_n-lift I_n,ψ in Δ from a point a_n,ψ∈α_n1^∘ to a point b_n,ψ∈α_n2^∘, I_n,ψ^∘⊂Δ, and K_n,ψ has an f_n-lift D_n,ψ so that D_n,ψ is a closed Jordan domain enclosed by ∂ D_n,ψ=α_n1( a_n,ψ,a_n2) +α _n2( a_n2,b_n,ψ) -I_n,ψ( a_n,ψ ,b_n,ψ) , say, f_n restricted to D_n,ψ is a homeomorphism onto K_n,ψ. For each n=1,2,…, let ψ_n^∗ be the supremun of all φ_n∈(ψ_n,a,ψ_n) satisfying Condition <ref>. Then ψ_n^∗<ψ_n,n=1,2,⋯. Assume ψ_n^∗=ψ_n for some n. Then q_n2^∗∈ c_n1∩ c_n2 and f_n^-1 has a univalent branch g_n defined on K_n,ψ_n^∗\{q_n2^∗}=K_n,ψ_n \{q_n2^∗}=K_n\{q_n2^∗}. This, together with Lemma <ref>, implies that f_n restricted to Δ is a homeomorphism onto K_n and ∂Σ_n=c_n1+c_n2, and then we have m=2. This contradicts that m>3, and then (<ref>) is true. Step 4.6 Discussion of Case 3: (3) The subsurface F_n0^' By Claim <ref> and Condition <ref>, it is clear that f_n^-1 has a univalent branch g_n defined on K_n,ψ_n^∗\τ_n,ψ_n^∗. By Lemma <ref> this branch can be extended to a univalent branch well defined on K_n,ψ_n^∗,but we still denote it by g_n. Let I_n,ψ_n^∗=I_n,ψ_n^∗( a_n,ψ_n^∗ ,b_n,ψ_n^∗) =g_n(τ_n,ψ_n^∗( A_n,ψ_n^∗,B_n,ψ_n^∗) ) (see (<ref>)), and let λ_n0 be the arc of ∂Δ from a_n,ψ_n^∗ to b_n,ψ_n^∗, say, λ_n0=α_n1( a_n,ψ_n^∗,a_n2) +α_n2( a_n2,b_n,ψ_n^∗) . 
Then it is clear that I_n,ψ_n^∗∩ I_n=∅, and -I_n,ψ_n^∗+λ_n0 encloses a domain Δ_n0 in Δ such that f_n:Δ_n0→ K_n,ψ_n^∗ is a homeomorphism. Since ( f_n,I_n,ψ_n^∗) is straight, we have L(f_n,∂Δ_n0)=L(f_n,I_n,ψ_n^∗)+L(f_n ,λ_n0)≤ L(f_n,( ∂Δ) \λ_n0)+L(f_n,λ_n0)≤ L, and then we have f_n restricted to Δ_n0 is a homeomorphism onto K_n,ψ_n^∗, which is the closure of the component of K_n\τ_n,ψ_n^∗ on the right hand side of τ_n,ψ_n^∗. Moreover for sufficiently large n F_n0^'=( f_n,Δ_n0) ∈ℱ_r(L,3)⊂ℱ_r(L,m), and lim_n→∞inf L(∂ F_n0^')≥ L(c_01( A_n,ψ_n^∗,q_n2) >δ_a>0. (note that ( f_n,I_n,ψ_n^∗) is straight and we assumed m>3). It is clear that the closed Jordan domain Δ_n0 is the union ∪_ψ∈0,ψ_n^∗]I_n,ψ with I_n,ψ^∘⊂Δ_n0,∂ I_n,ψ⊂( ∂Δ_n0) ∩∂Δ and τ_n,ψ^∘=( f_n,I_n,ψ^∘) ⊂ K_n,ψ_n^∗^∘ for all ψ∈(0,ψ_n^∗). Then by (<ref>) and (<ref>) we have Δ_n0∩ f_n^-1(E_q)=∅, and then by Claim <ref> we have R(F_n0^')=( q-2) A(F_n0^')<( q-2) A(K_n,ψ_n^∗)<( q-2) A(K_n)→0 as n→∞, and H(F_n0^')≤( q-2) A(K_n)/L(∂ F_n0^')≤( q-2) A(K_n)/δ_a →0( n→∞) . Step 4.7 Discussion of Case 3: (4) Discussion of the d_f_n-shortest path I_n,ψ_n^∗ If there exists a subsequence {n_s}_s=1^∞ of {n} such that I_n_s,ψ_n_s^∗⊂∂Δ, then Δ_n_s 0=Δ, and then F_n_s0^'=Σ_n_s=( f_n_s ,Δ) by Claim <ref>. But this implies A(f_n_s,Δ)=A(K_n_s,ψ_n_s^∗)<A(K_n_s)→0 as s→∞. Then by (<ref>), we have H(Σ_n_s)≤( q-2) A(f_n_s,Δ)/L(∂Σ_n_s)=( q-2) A(f_n_s,Δ )/L(∂ F_n0^')→0, which contradicts that Σ_n is an extremal sequence of ℱ _r( L,m) (and of ℱ( L,m)) by Lemma <ref>. Thus we may assume that I_n,ψ_n^∗∩Δ≠∅,n=1,2,…. By (<ref>) we have lim_n→∞L(f_n,I_n,ψ_n^∗)=lim_n→∞L(τ_n,ψ_n^∗)=0, and so by (<ref>) we may assume that Δ∩ I_n,ψ_n^∗∩ f_n^-1(E_q)=∅,n=1,2,…. Then by Lemma <ref>, as (<ref>) for I_n, we may assume, by taking subsequence, that I_n,ψ_n^∗ has a partition I_n,ψ_n^∗=J_n1( a_n1^',a_n2^') +J_n2( a_n2^',a_n3^') +⋯+J_n,2k^'+1( a_n,2k^'+1^',a_n,2k^'+2^') satisfying all conclusions of Lemma <ref> when we regard k^' as k there, where k^' is independent of n, a_n1^'=a_n,ψ_n^∗ and a_n,2k^'+2^'=b_n,ψ_n ^∗. Then f_n has no branch point in ∪_j=1^k^' J_n,2j^∘⊂Δ∩ I_n,ψ_n^∗ and we have f_n restricted to a neighborhood of J_n,2j^∘ in Δ is a homeomorphismand ( f_n,J_n,2j) is straight for every j=1,…,k^'. Step 4.8 Discussion of Case 3: (5) Complete the discussion of Case 3 Corresponding to (<ref>), we write I_n^∗=J_n2( a_n2^',a_n3^') +⋯+J_n,2k^'( a_n,2k^'^',a_n,2k^'+1^') . Let γ_n0=γ_n0( a_n2^',a_n,2k^' +1^') be the arc on ∂Δ from a_n2^' to a_n,2k^'+1^' and let γ_n0^'=γ _n0^'( a_ni_0,a_ni_2) be the smallest arc of ∂Δ containing γ_n0 such that the endpoints of γ_n0^' are contained in {a_nj}_j=1^m. Then we can write γ_n0^' =γ_n0^∗+γ_n0^∗∗, γ_n0^∗ =γ_n0^∗( a_ni_0,a_n2^') +γ_n0^∗( a_n2^',a_n2) =γ_n0^∗( a_ni_0,a_n2^') +α _n1( a_n2^',a_n2) γ_n0^∗∗ =γ_n0^∗∗( a_n2 ,a_n,2k^'+1^') +γ_n0^∗∗( a_n,2k^'+1,a_ni_2) =α_n2( a_n2 ,a_n,2k^'+1^') +γ_n0^∗∗( a_n,2k^'+1,a_ni_2) . with i_0( modm) <2<i_2. Then a_n2∈γ_n0^∘∩{a_nj}_j=1^m≠∅. If J_n1 is a point, then a_n2^'=a_n1^'∈α_n1 and a_ni_0=a_n1, and thus γ_n0^∗=α_n1 and f_n(I_n^∗∘∩γ_n0^∗)=τ_n,ψ_n^∗ ^∘∩ c_n1( f( a_n2^') ,q_n2) =∅, thus I_n^∗∘∩γ_n0^∗=∅. If J_n1 is not a point, then a_n1^'=a_n1, a_n2^'=a_ni_0 and γ_n0^∗=-J_n1( a_ni_0,a_n1) +α_n1( a_n1,a_n2) . Since f_n maps ∂Δ_n0 homeomorphically onto ∂ K_n,ψ_n^∗, (<ref>) also holds. Thus in both cases (<ref>) holds. For the same reason, we may show that I_n^∗∘∩γ_n0^∗∗=∅. Thus we have I_n^∗∘∩γ_n0^'=∅, and (e2) of Lemma <ref> holds for Σ=Σ_n. 
Now we have proved that -I_n^∗,γ_n0,γ_n0^',Σ_n,k^' satisfy all hypotheses of Lemma <ref> as I,γ_0,γ_0^',Σ,k there. Let {Δ_nj}_j=0^k^' be the k^'+1 components of Δ\ I_n^∗ such that Δ_n0 is the only one on the right hand side of I_n^∗, which we discussed above, and all others are on the left hand side of I_n^∗, and moreover we have F_nj^'=( f_n,Δ_nj) ∈ℱ(L,m-1) for every j=1,…,k^'. Now we deduce a contradiction. It is clear that we have ∑_j=0^k^'R(F_nj^')=R(Σ_n), ε_n=∑_j=0^k^'L(f_n,J_n,2j)→0( n→∞), and ∑_j=0^k^'L(∂ F_nj^')≤ L(∂Σ_n)+2ε_n, equality holding if and only if all J_n,2j+1,j=0,…,k^', are points. Thus we can conclude, by (<ref>) and Claim <ref>, that the sequence Σ_n is decomposable in ℱ( L,m) (by Definition <ref> (c)). This contradicts Lemma <ref>. We have completed the discussion of Case 3 and obtained a contradiction. We have proved that each of the three cases implies a contradiction, and thus (<ref>) fails. Therefore Assertion <ref> holds true, and we have proved Assertion <ref> in the case a∈α_01^∘. Step 5. Discussion in the case a∈∂α_01 Now we prove Assertion <ref> under the condition a=a_01∈∂α_01={a_01,a_02}. The case a=a_02 can be discussed in the same way as the case a=a_01. Now that we have already proved Assertion <ref> with (<ref>), we conclude that, under the condition a=a_01∈∂α_01, Assertion <ref> holds for all b∈( ∂Δ) \{a_0j}_j=1^m. So, to prove Assertion <ref> under (<ref>), it remains to prove the following special case of (<ref>): for each j_0=2,3,…,m, lim_n→∞inf d_f_n(a_01,a_0j_0)>0 if a_0j_0≠ a_01. Note that j_0≠1 may not imply a_0j_0≠ a_01: when α_0m is a point, for example, a_0m=a_01. Since d_f_n(a_nj,a_0j)→0 for each j=1,2,…,m, (<ref>) is equivalent to lim_n→∞inf d_f_n(a_n1,a_nj_0)>0 if a_0j_0≠ a_01. Now we fix a j_0 with a_0j_0≠ a_01. By Lemma <ref>, there exists a d_f_n-shortest path J_n from a_n1 to a_nj_0. To prove (<ref>) we assume the contrary, lim_n→∞inf d_f_n(a_n1,a_nj_0)=0. Then by taking a subsequence, we may assume lim_n→∞d_f_n(a_n1,a_nj_0)=lim_n→∞L(f_n,J_n)=0. If J_n has a subsequence J_n_k such that J_n_k⊂∂Δ, then we have by (<ref>) that J_n_k tends to the point a_01, which implies a_0j_0=a_01, contradicting (<ref>) (note that Γ_n=( f_n,∂Δ) are parameterized by length for all n=0,1,2,…). So we may assume J_n∩Δ≠∅ for all n. Then by (<ref>) we may assume J_n∩ f_n^-1(E_q)∩Δ=∅,n=1,2,…. By Lemma <ref> and taking a subsequence, we can assume that J_n has a partition J_n=J_n1( a_n1^',a_n2^') +J_n2( a_n2^',a_n3^') +J_n3( a_n3^',a_n4^') +⋯+J_n,2k^'+1( a_n,2k^'+1^',a_n,2k^'+2^') with a_n1^'=a_n1, a_n,2k^'+2^'=a_nj_0, a_nj^'→ a_0i_j for some a_0i_j∈{a_0j}_j=1^m for each j=1,2,…,2k^'+2, and k^'<m. We show that there exists j_1∈{1,…,k^'} such that the two endpoints of J_n,2j_1 converge to distinct points, say, a_n,2j_1^'→ a_0,i_2j_1≠ a_0i_2j_1+1← a_n,2j_1+1^' as n→∞. Assume that this fails. Then each pair a_n,2j^',a_n,2j+1^', the endpoints of J_n,2j, converge to the same point a_0i_2j=a_0i_2j+1, for j=1,…,k^'; and, as an arc on ∂Δ, by (<ref>) each J_n,2j-1( a_n,2j-1,a_n,2j) converges to the same point a_n,i_2j-1=a_n,i_2j, for j=1,2,…,k^'+1. Therefore, all a_nj^' converge to the same point, and thus a_01=lim_n→∞a_n1=lim_n→∞a_n1^'=lim_n→∞a_n,2k^'+2^'=lim_n→∞a_nj_0=a_0j_0. This contradicts (<ref>). Thus the segment J_n,2j_1=J_n,2j_1( a_n,2j_1^',a_n,2j_1+1^') of J_n satisfies Lemma <ref> for the case k=1 there, so that the two endpoints a_0i_2j_1 and a_0i_2j_1+1 of J_n,2j_1 belong to {a_nj}_j=1^m.
Thus by Lemma <ref> for the case k=1, J_n,2j_1 divides Σ_n into two subsurfaces Σ_n0 and Σ_n1 contained in ℱ_r(L,m). By (<ref>) we have R(Σ_n0)+R(Σ_n1)=R( Σ_n). On the other hand we also have L(∂Σ_n0)+L(∂Σ_n1)=L(∂Σ_n)+2L(f_n,J_n,2j_1)→ L(f_0,∂Δ)≤ L, and lim_n→∞inf L(∂Σ_nj)≥min{ L(f_0,γ_0),L(f_0,γ_1)} >0,j=1,2, where γ_0 and γ_1 are the two arcs of ∂Δ with the two distinct endpoints a_0i_2j_1 and a_0i_2j_1+1. Then Σ_n is decomposable in ℱ_r(L,m)⊂ℱ(L,m), by Definition <ref> (b). This contradicts the conclusion of Lemma <ref>, and hence (<ref>) cannot hold. We have proved Assertion <ref> completely and so (i) is completely proved. Step 6. Theorem <ref> (i) implies Theorem <ref> (ii) If (ii) is not true, then there exist sequences x_n1∈ I and x_n2∈ J such that x_n1→ x_01∈ I, x_n2→ x_02∈ J, and d_f_n(x_n1,x_n2)→0, as n→∞. On the other hand, for the shorter arc α_j=α_j( x_nj,x_0j) from x_nj to x_0j on ∂Δ,j=1,2, we have by Lemma <ref> (ii) d_f_n( x_nj,x_0j) →0, as n→∞, for j=1,2. Then we have d_f_n(x_01,x_02)≤ d_f_n( x_01,x_n1) +d_f_n( x_n1,x_n2) +d_f_n( x_n2,x_02) →0. But it is clear that x_01≠ x_02, and we obtain a contradiction by (i) of Theorem <ref>, and thus Theorem <ref> (ii) holds true. § EXISTENCE AND PROPERTY OF EXTREMAL SURFACES IN ℱ _R( L,M) The goal of this section is to prove the following theorem. Let L∈ℒ be a positive number and m be a sufficiently large positive integer. Then there exists a unique positive number L_1 with L_1≤ L and a precise extremal surface Σ_L_1 in ℱ_r(L,m) such that L(∂Σ_L_1)=L_1. The proof is divided into 5 steps. The key points of the proof are Theorems <ref> and <ref>, which follow from Theorem <ref>. Step 1. Notations, simple discussions and the idea for the proof of Theorem <ref>. When L≤2δ_E_q, let L_1=L and let Σ_L be a simple disk on S whose interior is outside E_q. Then L_1 and Σ_L_1 are the desired number and surface, by Theorem <ref>. So throughout the proof, we assume L≥2δ_E_q. The proof for this case is complicated. We will first state the idea of the proof, after some preparation. By Corollary <ref>, there exists a precise extremal sequence Σ_n of ℱ_r^'(L,m), such that Σ_n is also a precise extremal sequence of ℱ_r(L,m) and ℱ(L,m), and, moreover, L_1=lim_n→∞inf L(∂Σ_n)≥2δ_E_q. By definition of ℱ_r^'(L,m), for the number d^∗=d^∗( E_q,m), which depends only on m and E_q and is given by Theorem <ref>, we have _maxf_n≤ d^∗ for all n=1,2,… We assume that all ∂Σ_n=( f_n,∂Δ) are ℱ(L,m)-partitioned, and then by definition, for each n, ∂Δ has an ℱ(L,m)-partition ∂Δ=α_n1( a_n1,a_n2) +α_n2( a_n2,a_n3) +⋯+α_nm( a_nm,a_n1) for ∂Σ_n, and ∂Σ_n has the corresponding ℱ( L,m)-partition ∂Σ_n=c_n1( q_n1,q_n2) +c_n2( q_n2,q_n3) +⋯+c_nm( q_nm,q_n1), consisting of m circular arcs as in Definition <ref>. Then f_n restricted to each α_nj is the SCC arc c_nj, and by Remark <ref>, we may assume a_n1=1 for all n=1,2,…. Since for any homeomorphism h of Δ onto itself and Σ_n^h=( f_n∘ h,Δ), Σ_n^h is also a precise extremal sequence of ℱ_r(L,m), we may assume, by taking a subsequence and applying the Arzelà–Ascoli theorem to curves parametrized by length, the following. All Γ_n=∂Σ_n are parametrized by length and converge uniformly to a closed curve Γ_0=(f_0,∂Δ), and for each n≥1 and j∈ M={1,2,…,m}, c_nj=( f_n,α_nj) is an SCC arc. In fact, parametrized by length, ( f_n,∂Δ) is a linear mapping in length and so is (f_0,∂Δ).
Thus f_n restricted to each α_nj is the simple circular arc c_nj for all n∈ℕ^0={0}∪ℕ, where ℕ is the set of positive integers, and j∈ M={1,2,…,m}, and we have by Conditions <ref> and <ref> that: For each j∈ M={1,2,…,m} and n∈ℕ^0, f_n^-1 has a unique univalent branch g̃_nj defined on c_nj^∘ with g̃_nj( c_nj^∘) =α_nj^∘, such that g̃_nj uniformly converges to g̃_0j:c_0j^∘→α_0j^∘, and thus for any interval I_n⊂ c_nj^∘ which converges to I_0⊂ c_0j^∘, g̃_nj(I_n) converges to g̃_0j (I_0). When c_nj is not closed, g̃_nj can be extended to be homeomorphic on c_nj. The idea of proving Theorem <ref>. Though Γ_n=( f_n,∂Δ) converges to Γ_0=(f_0,∂Δ) uniformly, f_n may not converge in Δ. The key of the proof is to construct a surface Σ_L_1=( f_L_1,Δ) ∈ℱ_r^'( L_1,m) such that R(Σ_L_1)=lim R(Σ_n)=H_L,m, f_L_1|_∂Δ=f_0 and L(f_L_1,∂Δ)=Γ_0=L(f_0,∂Δ). This f_L_1 will be obtained by modifying some f_n for sufficiently large n. The key of this modification is that Σ_n=( f_n,Δ) has a subsequence, which will be still denoted by Σ_n, such that the following holds: (a) There exists a closed Jordan domain Δ_n contained in Δsuch that the curves ( f_n,∂Δ _n) contain no point of E_q and are equivalent each other, and R( f_n,Δ_n) are equal to each other. (b) For 𝒜_n=Δ\Δ_n^∘, there exists a sequence of surfaces B_n=( F_n ,𝒜_n) such that f_n|_∂Δ_n =F_n|_∂Δ_n, f_0=F_n|_∂Δ, and R(f_n,A_n)-R(F_n,A_n)→0 as n→∞. (c) For the surface Σ_n^∗=( f_n^∗ ,Δ) , in which f_n^∗ is defined by f_n on Δ_n and by F_n on A_n, H( Σ_n^∗) is the constant H_L,m for all n. The key ingredient of this idea are Theorems <ref> and <ref>, which will be proved later in this section, and which with Theorem <ref> deduce (a)–(c). The proof of these two theorems are applications of Theorem <ref>. By (<ref>), (<ref>) and Conditions <ref> and <ref>, ∂Δ has the 𝒞( L,m)-partition ∂Δ=α_01( a_01,a_02) +α_02( a_02,a_03) +⋯+α_0m( a_m,a_01) , for Γ_0 so that α_01 initiates at a_01=1∈∂Δ, and Γ_0 has the corresponding 𝒞( L,m)-partition Γ_0=c_01( q_01,q_02) +c_02( q_02 ,q_03) +⋯+c_0m( q_0m,q_01) , so that For each j=1,…,m,f_0 restricted to α_0j is the SCC arc c_0j, c_nj converges to c_0j uniformly, and thus c_0j is a point iff α_0j is a point. By assumption we have 2δ_E_q≤ L(Γ_0)=L_1=lim_n→∞L(Γ _n). Since ∂Σ_n and Γ_0 are parametrized by length and a_n1=a_01=1, we have For each j=1,2,…,m, α_nj converges to α_0j as well and thus α_nj converges to a point iff α_0j and c_0j are both point-arcs. Moreover, for any sequence of intervals [θ_n1,θ_n2] of real numbers with [θ_n1,θ _n2]→[ θ_01,θ_02] and for the sequence of arcs I_n={e^√(-1)θ:θ∈θ _n1,θ_n2]} of ∂Δ, L(f_n,I_n)→ L(f_0,I_0). Recall that M={1,…,m}. Then there exists a subset M_0={i_1 ,i_2,…,i_m_0}of M with 1≤ i_1<i_2<…<i_m_0≤ m, such that j∈ M_0 iff α_0j is not a point. Note that α_0j is a point iff c_0j is. The partition (<ref>) has a simplified partition ∂Δ=α_0i_1( a_0i_1,a_0i_1+1) +α_0i_2( a_0i_2,a_0i_2+1) +⋯+α _0m_0( a_i_m_0,a_i_m_0+1) , with m_0≤ m and a_01=⋯=a_0i_1-1=a_0i_1=1, such that all point-arcs in (<ref>) are deleted, and the partition (<ref>) also has a simplified partition: Γ_0=c_0i_1( q_0i_1,q_0i_1+1) +c_0i_2 ( q_0i_2,q_0i_2+1) +⋯+c_0m_0( q_i_m_0,q_i_m_0+1) , such that all point-arcs in (<ref>) are also deleted and that f_0 restricted to each a_0j for j∈ M_0={i_1,…,i_m_0} is the SCC arc c_0j. 
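To illustrate the notation (with hypothetical indices, only for orientation): if, say, only α_01, α_03 and α_04 have positive length while all other α_0j are point-arcs, then M_0={1,3,4}, m_0=3, and the simplified partitions read ∂Δ=α_01( a_01,a_02) +α_03( a_03,a_04) +α_04( a_04,a_05) and Γ_0=c_01( q_01,q_02) +c_03( q_03,q_04) +c_04( q_04,q_05), where a_02=a_03 and a_05=⋯=a_0m=a_01 (and likewise q_02=q_03 and q_05=⋯=q_0m=q_01), since α_02 and α_05,…,α_0m are point-arcs.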
By Lemma <ref> and Theorem <ref>, there exists a δ_1>0 such that for all n, δ_1<d_f_n(f^-1(E_q)∩Δ,∂Δ), and for sufficiently[Don't confuse α with a, although they look similar from a distance, up close they are different! The three distance in the curly brackets are of two arcs, one arc and one point, and two points.] large n, δ_1<min_j∈ M_0{min_{ i,j}∈ M_0 i≠ jd_f_n( α_0i,α_0j) ,min_i∈ M_0 j∈ M,a_0j∉α_0id_f_n (α_0i,a_0j),min_{ i,j}⊂ M a_0i≠ a_0jd_f_n(a_0i,a_0j)} . It is clear that δ_2=1/3min{ d(w_1,w_2):{w_1,w_2}⊂ E_q∪{q_0j}_j=1^m and w_1≠ w_2} >0. On the other hand we have δ_3=min_j∈ M_0L(c_0j)>0. Let δ be a positive number with δ<min{δ_1,δ_2,δ_3}/12π( d^∗+1) m. Then we have ( D(q_0j,δ)\{q_0j}) ∩ E_q=∅,j∈ M, and it is clear that: For each j∈ M_0, c_0j is divided into three arcs: c_0j=c_0j,δ^1+c_0j,δ^2+c_0j,δ^3=c_0j,δ ^1( q_0j,q_0j,δ^1) +c_0j,δ^2( q_0j,δ^1,q_0j,δ^2) +c_0j,δ^3( q_0j,δ^2,q_0,j+1) with c_0j,δ^2=c_0j\ D({q_0j,q_0,j+1},δ), and each α_0j corresponding to c_0j is divided into three arcs α_0j=α_0j,δ^1+α_0j,δ^2+α_0j,δ^3=α_0j,δ^1( a_0j,a_0j,δ^1) +α_0j,δ^2( a_0j,δ^1,a_0j,δ^2) +α_0j,δ^3( a_0j,δ^2,a_0,j+1) , with c_0j,δ^i=( f_0,α_0j,δ^i) for i=1,2,3, say, f_0(a_0j)=q_0j,f_0(a_0j,δ^1)=q_0j,δ^1,f_0 (a_0j,δ^2)=q_0j,δ^2,f_0(a_0,j+1)=q_0,j+1; and thus, for sufficiently large n, c_nj=c_nj( q_nj ,q_n,j+1) is divided into three arcs by ∂ D({q_0j ,q_0,j+1},δ): c_nj=c_nj,δ^1+c_nj,δ^2+c_nj,δ^3=c_nj,δ ^1( q_nj,q_nj,δ^1) +c_nj,δ^2( q_nj,δ^1,q_nj,δ^2) +c_nj,δ^3( q_nj,δ^2,q_n,j+1) , with d(q_nj,δ^1,q_0j)=δ, d(q_nj,δ^2,q_0,j+1 )=δ and c_nj,δ^2=c_nj,δ^2( q_nj,δ^1,q_nj,δ ^2) =c_nj\ D({ q_0j,q_0,j+1} ,δ), and α_nj=α_nj( a_nj,a_nj) , corresponding to c_nj, is divided into three arcs α_nj=α_nj,δ^1+α_nj,δ^2+α_nj,δ^3=α_nj,δ^1( a_nj,a_nj,δ^1) +α_nj,δ^2( a_nj,δ^1,a_nj,δ^2) +α_nj,δ^3( a_nj,δ^2,a_n,j+1) , such that f_nrestricted to α_nj,δ^iis a homeomorphism onto c_nj,δ^i, say, c_nj,δ^i=( f_n,α_nj,δ^i) ,i=1,2,3, and f_n(a_nj)=q_nj,f_n(a_nj,δ^1)=q_nj,δ^1,f_n (a_nj,δ^2)=q_nj,δ^2,f_n(a_nj+1)=q_nj+1; and therefore, we have c_nj,δ^i→ c_0j,δ^i,α_nj,δ ^i→α_0j,δ^i as n→∞, for i=1,2,3. Thus Σ_n and Γ_0 satisfies (A)–(D) in Theorem <ref>. Then we have For two sequences I_nj, of arcs on ∂Δ such that I_nj→ I_0j as n→∞ for j=1,2, we have lim_n→∞inf d_f_n( I_n1,I_n2) =lim_n→∞inf d_f_n( I_01,I_02) . By definition, α_0j,δ^2,j∈ M_0, are disjoint compact arcs in ∂Δ, α_nj→α_0j, α_nj,δ^2→α_0j,δ^2 and α_0j,δ^2 ⊂α_0j^∘. Then by Theorem <ref> and Claim <ref> we may assume by taking subsequence, that δ_4=min{min_{ i,j}⊂ M_0 i≠ jinf_n∈ℕ{ d_f_n( α_0i,δ^2,α_0j,δ^2) } ,min_j∈ M_0inf _n∈ℕd_f_n( ( ∂Δ) \α_nj,α_nj,δ^2) } >0. Then we have by the relation α_ni,δ^2⊂( ∂Δ) \α_nj for { i,j}⊂ M_0,i≠ j, we have min_{ i,j}⊂ M_0,i≠ jd_f_n( α_ni,δ^2,α_nj,δ^2) ≥δ_4>0. It is clear that we may assume that δ is small enough at first such that L(c_0j,δ^1)=L(c_0j,δ^3)<2πδ for all j∈ M_0. By, Conclusions <ref> and <ref>, Claim <ref>, and (<ref>), we have for each j∈ M_0, d_f_n( a_0j,a_nj,δ^1) ≤ d_f_n( a_0j,a_nj) +d_f_n( a_nj,a_nj,δ^1) ≤ d_f_n( a_0j,a_nj) +L(f_n,α_nj,δ ^1)→0+L(c_0j,δ^1)<2πδ, as n→∞, and for the same reason d_f_n( a_0,j+1,a_nj,δ^2) → L(c_0j,δ^3)<2πδ, as n→∞, for each j∈ M_0. Thus we have for sufficiently large n, max_j∈ M_0max{ d_f_n( a_0j,a_nj,δ ^1) ,d_f_n( a_n,j+1,δ^2,a_0,j+1) } <2πδ. For each j∈ M_0, we let C_0j be the circle determined by c_0jand C_0j=∂ D( p_j,r) . Then r_j≤π/2 and p_j is on the left hand side of C_0j. 
For any positive number ε≪δ with ε<min{δ,δ_1,δ_2,δ_3,δ_4 }/12π( d^∗+1) m, let C_0j,±ε=∂ D(p_j,r_j±ε)and write 𝒜_0j,±ε=D(C_0j,ε), which is the ε neighborhood of C_0j on S: 𝒜_0j,±ε:r_j-ε<d(w,p_j)<r_j +ε, and let R_j,δ,ε=D(c_0j,ε)\ D({q_0j,q_0,j+1},δ), which is compact and is the component of 𝒜_0j,±ε\ D({q_0j,q_0,j+1},δ) containing c_0j\ D({q_0j,q_0,j+1},δ). Then we have ∂𝒜_0j,±ε=C_0j,-ε∪ C_0j,+ε. Recall that D(X,δ)=∪_x∈ XD(x,δ). By (<ref>), (<ref>) and (<ref>), 𝒞_δ={∂ D(q_0j,δ):j∈ M} is consisted of disjoint circles. Then we may assume that the positive numbers δ and ε≪δ are small enough such that the following holds. For each circle J_1 of 𝒞_δ and each circle J_2of the 3m_0 circles { C_0j,±ε:j∈ M_0}∪{ C_0j:j∈ M_0} , either J_1∩ J_2=∅ or J_1 and J_2 intersect almost perpendicularly; the closed domain R_j,δ,ε,j∈ M_0, defined by (<ref>), is a quadrilateral enclosed by four circular arcs contained in C_0j,-ε, ∂ D(q_0j,δ), C_0j,+ε and ∂ D(q_0,j+1,δ): ∂ R_j,δ,ε =-c_0j,δ,-ε^2( q_0j,δ,-ε^1,q_0j,δ,-ε^2) +τ_j,δ,ε^1( q_0j,δ,-ε ^1,q_0j,δ,ε^1) +c_j,δ,ε^2( q_0j,δ,ε ^1,q_0j,δ,ε^2) +τ_j,δ,ε ^2( q_0j,δ,ε^2,q_0j,δ,-ε ^2) ; and min_1≤ j≤ m_0L(c_0j,δ,-ε^2)≥4/5 min_1≤ j≤ m_0L(c_0j)>5δ. For each j∈ M_0, it is clear that for sufficiently large n, q_nj,δ^1 and q_nj,δ^2 are contained in the interior of τ_j,δ,ε^1and τ_j,δ,ε^2 respectively. Step 2 Two lifting results: Theorems <ref> and <ref> Let t_0=2π( d^∗+1) sinδ, which is ( d^∗+1) times of the length of any circle on S with radius δ, and for j∈ M let ζ_nj,δ^t_0=ζ_nj,δ^t_0(t),t∈[ 0,t_0] , be the locally simple path which describes -∂ D(q_0j,δ) by d^∗+1 times, parametrized by length, oriented clockwise, and from ζ_nj,δ^t_0(0)=q_n,j-1,δ^2 to itself, say, ζ_nj,δ^t_0(t_0)=q_n,j-1,δ^2. For each j∈ M, there exists two numbers 𝔦_1=φ _1( j) and 𝔦_2=φ_2( j) in M_0, which are uniquely determined by j, such that 𝔦 _1+1≤ j≤𝔦_2, and that each arc α_0i with i∈ M and 𝔦_1+1≤ i≤𝔦_2-1 is a point, when 𝔦_2>𝔦_1+1. In other words, α_0𝔦 _1 and α_0𝔦_2 are terms in (<ref>) having positive length and joined at a_0j, and α_0𝔦_1 is before α_0𝔦_2 in the direction of ∂Δ. For example, when j∈ M_0, we have φ_2( j) =j and φ_1( j+1) =j, and it is clear that φ_1(M)=φ_1(M_0)=φ_2(M)=φ_2(M_0)=M_0. In the remain part of this section, the discussion for the case M_0 ⫋ M makes the argument more complicated, without more deeper meaning. The reader may understand the arguments only in the special case M_0=M, though we discuss in general for completeness. When M_0=M, one has 𝔦_1=φ_1(j)=j-1 and 𝔦_2=φ _2(j)=jfor all j∈ M. We first prove the following theorem. Let j∈ M. Then for sufficiently large n, there exists a number t_nj∈(0,t_0), such that the for 𝔦 _1=φ_1( j) and 𝔦_2=φ_2( j) the following hold. (i) ζ_nj,δ^t_nj=ζ_nj,δ^t_0|_[ 0,t_nj] has an f_n-lift η_nj,δ^t_nj =η_nj,δ^t_nj(t),t∈0,t_nj], from a_n𝔦 _1,δ^2to a_n𝔦_2,δ^1, say, η _nj,δ^t_nj(0) is the terminal point a_n𝔦_1,δ^2 of α_n𝔦_1,δ^2, η_nj,δ^t_nj (t_nj) is the initial pointa_n𝔦_2,δ^1 of α_n𝔦_2,δ^2, and f_n( η_nj,δ^t_nj(t)) =ζ_nj,δ^t_nj (t),t∈0,t_nj]. (ii) The interior η_nj,δ^t_nj∘ of η_nj,δ^t_njis contained in Δ, say, η_nj,δ^t_nj∘(t)∈Δ for all t∈(0,t_nj). (iii) η_nj,δ^t_nj⊂ D_f_n(a_0j,t_nj+2πδ)⊂ D_f_n(a_0j,δ_1/3), and for each pair {i,j}⊂ M with a_0i≠ a_0j η_nj,δ^t_nj∩η_nj,δ^t_nj=∅. (iv) η_nj,δ^t_nj is a simple arc in Δ and η_nj,δ^t_nj∘∩ f_n^-1(E_q)=∅, and thus f_n has no branch point on η_nj,δ^t_nj =η_nj,δ^t_nj( a_n𝔦_1,δ^2 ,a_n𝔦_2,δ^1)(by convention, η_nj,δ^t_nj( a_n𝔦_1,δ^2,a_n𝔦_2 ,δ^1) indicates η_nj,δ^t_nj is a path from a_n𝔦_1,δ^2to a_n𝔦_2,δ^1). 
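Before giving the proof, we record two elementary estimates that will be used repeatedly below; they follow only from the choice of δ in (<ref>) and the fact that S is the unit sphere (which the formula t_0=2π( d^∗+1) sinδ presupposes). First, a circle on S of spherical radius δ has Euclidean radius sinδ and hence length 2πsinδ≤2πδ, so that t_0=2π( d^∗+1) sinδ≤2π( d^∗+1) δ. Second, since δ<δ_1/12π( d^∗+1) m, we have 4π( d^∗+1) δ<δ_1/3m≤δ_1/3, and therefore, using d^∗≥0, t+2πδ≤ t_0+2πδ≤2π( d^∗+2) δ≤4π( d^∗+1) δ<δ_1/3 for every t∈[0,t_0].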
By (<ref>) and Definition <ref> (c) (iv) for ℱ( L,m) ⊃ℱ_r( L,m) we have ζ_nj,δ^t_0 never passes any point of E_q and for sufficiently large n, f_n is homeomorphic in neighborhoods of a_n𝔦_1,δ^2 and a_n𝔦_2,δ^1 in Δ. Thus ζ_nj,δ^t_0 never passes through any branch point of f_n. From this condition we have: there exists a maximal number t_nj∈ (0,t_0]satisfying the following condition. (A) The part ζ_nj,δ^t_nj of ζ_nj,δ^t_0 has an f_n-lift η_nj,δ^t_nj(t) starting from a_n𝔦_1,δ^2∈α_n𝔦_1,δ^2(recall that a_n𝔦_1,δ^2 is the terminal point of α_n𝔦_1,δ^2). (B) ζ_nj,δ^t_nj∘⊂Δ. By Condition <ref>, the lift η_nj,δ^t_nj is uniquely determined by t_nj and the initial point η_nj,δ (0)=a_n𝔦_1,δ^2and satisfies (<ref>). Since _maxf_n≤ d^∗and t_nj is maximal, we have by Lemma <ref> that t_nj∈(0,t_0) and η_nj,δ^t_nj(t_nj)∈∂Δ, We have proved that η_nj,δ^t_nj satisfies (ii) and (<ref>). Now, we show that η_nj,δ^t_nj satisfies (iii). Since η_nj,δ^t_nj(0)=a_n𝔦_1,δ^2 and η_nj,δ^t_nj is parametrized by d_f_n-length, by (<ref>) we have for sufficiently large n, η_nj,δ^t_nj ⊂D_f_n(a_n𝔦 _1,δ^2,t_nj)⊂D_f_n(a_0j,t_nj+d_f_n (a_n𝔦_1,δ^2,a_0j)) ⊂ D_f_n(a_0j,t_nj+2πδ). On the other hand, by (<ref>), we have t_nj+2πδ<2πδ( d^∗+1) +2πδ<4πδ( d^∗+1) <δ_1/3. Therefore (<ref>) holds, which with (<ref>) and Lemma <ref>, implies that for each pair {i,j} in M with a_0i≠ a_0j d_f_n(η_ni,δ^t_ni,η_nj,δ^t_nj) ≥ d_f_n( D_f_n(a_0i,δ_1/3),D_f_n(a_0j ,δ_1/3)) ≥ d_f_n(a_0i,a_0j)-2δ_1/3>δ_1/3>0. That is to say (<ref>) holds, and (iii) is proved. By Condition <ref>, the f_n-lift η_nj,δ^t_nj is simple, and thus (iv) is true. It remains to prove (i). For sufficiently large n, it is clear that for each i∈ M_0,α_ni⊂ D_f_n(α_0i,δ), and thus by (<ref>) and Lemma <ref> we have, for i∈ M_0 with i≠𝔦_1,𝔦_2, that d_f_n( η_nj,δ^t_nj,α_ni) ≥ d_f_n( D_f_n(a_0j,δ_1/3),D_f_n(α _0i,δ)) ≥ d_f_n( a_0j,α_0i) -δ_1/3-δ, and then by (<ref>) d_f_n( η_nj,δ,α _ni) >δ_1/2. Then for sufficiently large n, η _nj,δ^t_nj( t_nj) ∩α_ni≠∅ holds only for i=𝔦_1or 𝔦_2. Thus, when t tends to t_nj in [0,t_nj], η_nj,δ(t) tend to α_n𝔦_1or α_n𝔦_2, and it is clear that we only have η_nj,δ(t_nj)=a_n𝔦_2^1∈α_n𝔦_2. (i) has been proved and Theorem <ref> is proved completely. By taking subsequence, we may assume that for each j∈ M there exists v_j∈ℕ^0, independent of n, such that d^∗+1≥t_nj/2πsinδ>v_j≥t_nj/2πsinδ-1. For each j∈ M_0, we write τ_nj,δ,ε^1,L=τ_j,δ,ε^1( q_0j,δ,-ε^1,q_nj,δ^1) ,τ_nj,δ ,ε^2,L=τ_j,δ,ε^2( q_nj,δ ^2,q_0j,δ,-ε^2) , say, τ_nj,δ,ε^1,L is the arc of τ_j,δ ,ε^1 from q_0j,δ,-ε^1 to q_nj,δ ^1 and τ_nj,δ,ε^2,L is the arc of τ _j,δ,ε^2 from q_nj,δ^2to q_0j,δ ,-ε^2. In other words, τ_nj,δ,ε^1,L and τ_nj,δ,ε^2,L are the parts of τ_nj,δ ,ε^1 and τ_nj,δ,ε^2 on the left hand side of c_nj, respectively. Then by Theorem <ref> and properties of path lifts, we have the following. For each j∈ M,η_nj,δ^t_nj=η_nj,δ ^t_nj(t),t∈0,t_nj], is a simple path in Δ from a_nφ_1( j) ,δ^2 to a_nφ _2(j),δ^1, η_nj,δ^t_nj∘=η_nj,δ^t_nj|_(0,t_nj) ⊂Δ, and η_nj,δ^t_nj is the f_n-lift of the path ζ_nj,δ^t_nj=ζ_nj,δ^t_0|_[0,t_nj] =τ_nφ_1(j),δ,ε^2,L+ζ_j,δ,ε+τ_nφ_2( j) ,δ,ε^1,L, where τ_nφ_1(j),δ,ε^2,L and τ _nφ_2( j) ,δ,ε^1,L are defined by (<ref>), and ζ_j,δ,ε can be expressed as ζ_j,δ,ε=v_j copies of -∂ D(q_0j,δ) -∂ D(q_0j,δ )-…-∂ D(q_0j,δ)-κ, in which each -∂ D(q_0j,δ) is regarded as a closed simple path from q_0φ_1(j),δ,-ε^2 to itself, and -κ is the simple arc of -∂ D(q_0j,δ) from q_0φ_1 (j),δ,-ε^2 to q_0φ_2(j),δ,-ε ^1, and ζ_j,δ,-ε=-κ when v_j=0. 
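For clarity we note that the displayed inequalities determine v_j uniquely: v_j is the largest integer with 2π v_jsinδ<t_nj, that is, v_j=⌈ t_nj/2πsinδ⌉ -1. This is the v_j appearing in the description of ζ_j,δ,ε above, which consists of v_j full circuits of -∂ D(q_0j,δ) followed by the arc -κ.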
Since {v_j}_j=1^M is independent of n, and ζ_j,δ ,ε is starting from the point q_0φ_1(j),δ ,-ε^2∈ C_0φ_1(j),-ε∩∂ D( q_0j,δ) to the point q_0φ_2(j),δ,-ε^1∈ C_0φ_2(j),-ε∩∂ D( q_0j ,δ) , which are also independent of n, we have: For each j∈ M,ζ_j,δ,ε is a subarc of ζ_nj,δ^t_nj|_[0,t_nj]=ζ_nj,δ^t_0 |_[0,t_nj] which is independent of n, and we can write η_nj,δ,ε^t_nj=𝔱_nφ_1 (j),δ,ε^2+η_nj,δ,ε+𝔱 _nφ_2(j),δ,ε^1, with ( f_n,𝔱_nφ_1(j),δ,ε^2) =τ_nφ_1(j),δ,ε^2,L, ( f_n ,𝔱_nφ_2(j),δ,ε^1) =τ _nφ_2(j),δ,ε^1,L and ( f_n,η_nj,δ,ε) =ζ_j,δ ,ε=ζ_j,δ,ε( q_0φ_1( j)) ,δ,-ε^2,q_0φ_2( j)) ,δ,-ε^1) . It is clear that for each j∈ M_0, c]ll max{ L(τ_nj,δ,ε^1,L),L(τ_nj,δ ,ε^2,L)} =max{ L(f_n,𝔱_nj,δ,-ε^1 ),L(f_n,𝔱_nj,δ,-ε^2)} =ε+o(ε) as ε→0. We write η_nj,δ,ε=η_nj,δ,ε( a_0φ_1( j)) ,δ,-ε^2,a_0φ _2( j)) ,δ,-ε^1) Now we prove the following Theorem. For sufficiently large n, and each j∈ M_0, the following hold: (i) c_0j,δ,-ε^2 has an f_n-lift α _nj,δ,-ε^2 such that the initial point of α _nj,δ,-ε^2 equals the terminal point a_0φ _2( j) ,δ,-ε^1=a_0j,δ,-ε ^1 of η_nj,δ,ε, and the terminal point a_0j,δ ,-ε^2 of α_nj,δ,-ε^2 equals the initial point a_0φ_1(j+1),δ,-ε^2=a_0j,δ ,-ε^2 of η_n,j+1,δ,ε; and moreover, α_nj,δ,-ε^2⊂ D_f_n(α_0j,δ ^2,2πε)⊂ D_f_n(∂Δ,δ_1). (ii) For any other i∈ M_0 with i≠ j, α_ni,δ ,ε^2∩α_nj,δ,ε^2=∅. We first show that (i) implies (ii). By (<ref>) we have for large enough n and each pair of distinct i and j in M_0 that d_f_n( α_ni,δ,-ε^2,α_nj,δ ,-ε^2) ≥ d_f_n( D_f_n(α_0i,δ^2,2πε),D_f_n(α_0j,δ^2,2πε )) , and then, by Lemma <ref>, (<ref>) and (<ref>), we have d_f_n( α_ni,δ,-ε^2,α_nj,δ ,-ε^2) ≥ d_f_n( α_0i,δ^2 ,α_0j,δ^2,) -4πε≥δ_4-4πε>δ_4/2>0. This implies (ii). Consider the closed quadrilateral R_j,δ,ε defined by (<ref>) for each j∈ M_0. By Conclusion <ref>, it is clear that for sufficiently large n, the interior c_nj,δ^2∘ of c_nj,δ^2=c_nj∩ R_j,δ,ε is contained in R_j,δ,ε^∘ with ∂ c_nj,δ^2 ⊂∂ R_j,δ,ε, and thus c_nj,δ^2 divides R_j,δ,ε into two closed quadrilateral which are closed Jordan domainsand the one on the left hand side of c_nj,δ^2 is denoted by R_nj,δ,ε^L. Since c_nj,δ^2 converges to c_0j,δ^2 as n→∞, c_nj,δ^2∩∂ R_j,δ,ε=c_nj ∩∂ R_j,δ,εis consisted of the two endpoints q_nj,δ^1 and q_nj,δ^2 of c_nj,δ^2, which are terminal and initial points of τ_nj,δ,ε^1,L and τ_nj,δ,ε^2,L respectively. By (<ref>) and (<ref>)–(<ref>), we may assume (by taking subsequence) that ∂ R_nj,δ,ε^L has the partition ∂ R_nj,δ,ε^L=-c_0j,δ,-ε^2 +τ_nj,δ,ε^1,L+c_nj,δ^2+τ_nj,δ ,ε^2,L, with c_nj,δ^2∩ c_0j,δ,-ε^2=∅, corresponding to (<ref>). We will show the following: f_n^-1 has a univalent branch g̃ _nj,δ,ε defined on closed quadrilateral R_nj,δ ,ε^L such that g̃_nj,δ,ε(c_nj,δ^2)=g̃_nj,δ ,ε(c_nj∩ R_j,δ,ε)=g̃_nj,δ ,ε(c_nj∩ R_j,δ,ε^L)=α_nj,δ ^2, and g̃_nj,δ,ε(R_nj,δ,ε^L\ c_nj,δ^2)⊂Δ\ f_n^-1(E_q). It is clear that there exists a family {τ_q,q∈ c_nj,δ^2} of simple circular arcs which is a continuous fibration of R_nj,δ ,ε^L. Precisely speaking, R_nj,δ,ε^L=∪_q∈ c_nj,δ^2τ_q, τ_q_nj,δ^1=-τ_nj,δ,ε^1,L=-τ _j,δ,ε^1( q_0j,δ,-ε^1 ,q_nj,δ^1) , τ_q_nj,δ^2=τ_nj,δ,ε^2,L=τ _j,δ,ε^2( q_nj,δ^2,q_0j,δ ,-ε^2) , τ_q∩τ_q^'=∅, for { q,q^'}⊂ c_nj,δ^2 with q≠ q^', each τ_q is a simple circular path in R_nj,δ,ε^L from q∈ c_nj,δ^2 to a point ψ( q) ∈ c_j,δ,-ε^2 with L(τ_q)≤max{L(τ_nj,δ,ε^1,L),L(τ _nj,δ,ε^2,L)}<πε for q∈ c_nj,δ^2. Moreover, τ_q\{q}⊂ T_nj, where T_nj is the (open) disk T_nj enclosed by the circle C_nj determined by c_nj. 
It is clear that f_n restricted a neighborhood of α_nj,δ^2 in Δ is a homeomorphism onto a neighborhood of c_nj,δ^2 in the closed disk T_nj. Thus, for each q∈ c_nj,δ^2, it is clear that τ_q=τ_q( q,ψ( q) ) contains a MAXIMAL arc τ_q^'=τ_q( q,q^') such that τ_q^' has an f_n-lift β_n,a=β_n,a( a,a^') with τ_q^'=( f_n,β_n,a) and β_n,a ^∘⊂Δ. We show that a^'∈Δ say β _a\{a}⊂Δ. Since L(τ_q^')≤ L(τ_q)<πε we have d_f_n( a,a^') ≤ L(τ_q^')<πε<δ_4/4, and then, by a∈α_nj,δ^2, we have d_f_n( ( ∂Δ) \α _nj,a^') ≥ d_f_n( ( ∂Δ) \α_nj,α_nj,δ^2) -d_f_n( a,a^') >δ_4-πε>0. This implies a^'∉( ∂Δ) \α_nj. On the other hand, q^'∈ T_nj=T_nj\ c_nj and thus a^'∉α_nj. We have proved that a^'∈Δ. By Lemma <ref>, β_n,p is the f_n-lift of the whole arc τ_q and f_n(a^')=ψ( q) , the endpoint of τ_q. Now that β_n,a is the f_n-lift of τ_q with β _n,a\{a}⊂Δ we have that for each x∈β _n,a\{a}, d_f_n( x,∂Δ) ≤ L(τ_q)<επ<πδ<δ_1. Thus we have β _n,a\{a}∩ f_n^-1( E_q) =∅, and thus ( R_nj,δ,ε^L\ c_nj,δ^2) ∩ E_q=∅. and f_n has no branch value on R_nj,δ,ε ^L\ c_nj,δ^2. By Lemma <ref>, Claim <ref> is proved. We are still considering the fixed j∈ M_0. Then we have φ _2( j) =j and φ_1( j+1) =j+1. Then the edge τ_q_nj,δ^1=τ_q_nj,δ,ε^1^1,L of ∂ R_nj,δ,ε^Lis the arc τ_nφ _2( j) ,δ,ε^1,L=τ_nj,δ,ε^1,L of ζ_nj,δ|_[0,t_nj] in (<ref>) and the edge τ_q_nj,δ^2=τ_q_nj,δ,ε^2^2,L of ∂ R_nj,δ,ε^Lis the arc τ_nφ _1( j+1) ,δ,ε^1,L=τ_n,j+1,δ ,ε^1,L of ζ_n,j+1,δ|_[0,t_n,j+1] in (<ref>) for j+1. Thus the arc 𝔱_nφ_2 (j),δ,ε^1=𝔱_nj,δ,ε^1 in (<ref>) equals β_n,a_nj,δ^1=g̃_nj,δ ,ε( τ_q_nj,δ^1^1,L) and 𝔱_nφ_1(j+1),δ,ε^2=𝔱 _n,j+1,δ,ε^2 in (<ref>) equals β_n,a_nj,δ^2=g̃_nj,δ,ε( τ_q_nj,δ^2 ^1,L) ,since 𝔱_nj,δ,ε^1 \{ a_nj,δ^1} and 𝔱 _n,j+1,δ,ε^2\{ a_nj,δ^2} contains no branch point of f_nand f_n is homeomorphic in a neighborhood of α_nj,δ^2. Let α_nj,δ,-ε^2=g̃_nj,δ,ε( c_j,δ,-ε ^2) . Then α_nj,δ,-ε^2 satisfies (ii), except (<ref>). By (<ref>) we have R_nj,δ,ε^L⊂ D( c_nj,δ^2,επ) , which implies α_nj,δ,-ε^2⊂g̃_nj,δ.ε( R_nj,δ,ε) ⊂ D_f_n(α _nj,δ^2,πε). Since α_nj,δ^2 converges α_0j^2⊂α _0j^∘ as n→∞, we have by Conclusion <ref>, D_f_n(α_nj,δ^2,πε)⊂ D_f_n (α_0j,δ^2,2πε) for sufficiently large n. Then (<ref>) holds and Theorem <ref> is proved. By the way, by (<ref>) we have ( g̃_nj,δ,ε(R_nj,δ,ε ^L)\α_nj,δ^2) ∩ f_n^-1(E_q )=∅. It is clear that the following hold. For each j∈ M_0, the definition of R_nj,δ ,ε^Lis valid for n=0, say, R_0j,δ,ε^L is well defined and we have ∂ R_0j,δ,ε^L=-c_0j,δ,-ε^2 +τ_0j,δ,ε^1,L+c_0j,δ^2+τ_0j,δ ,ε^2,L, where τ_0j,δ,ε^1,Land τ_0j,δ,ε^2,Lare the parts of τ_j,δ,ε^1 and τ_j,δ,ε^2on the left hand side of c_0j. Since for any neighborhood V of c_0j,δ^2 on S the closed domains R_nj,δ,ε^Land R_0j,δ,ε^L coincide outside V when n is large enough, (<ref>) implies ( R_0j,δ,ε^L\ c_0j,δ^2) ∩ E_q=∅. Step 3 The construct of the Jordan curve γ_n,δ ,ε in Δ Let γ_n,δ,ε=∑_j∈ M_0( α_n,φ _1( j) ,δ,-ε^2+η_nj,δ,ε) , and Γ̃_δ,ε=∑_j∈ M_0( c_φ _1( j) ,δ,-ε^2+ζ_j,δ,ε) , then by Theorems <ref> and <ref>, Calim <ref> and (<ref>), we see that γ_n,δ,ε is a closed curve in Δ, Γ̃_δ,ε is a closed curve on S, and, Γ̃_δ,ε=( f_n,γ_n,δ ,ε) . We assume that Γ̃_δ,ε and γ _n,δ,ε are parametrized by length. We will show the claim: For sufficiently large n,γ_n,δ,ε is a simple curve in Δ which depends on n,δ and ε, while Γ̃_δ,ε=( f_n,γ_n,δ ,ε) is a closed curve on S which is independent of n. For each j∈ M_0, by Claim <ref> and the fact that c_j-1,δ ,-ε^2 is independent of n, Γ̃_δ ,ε is also independent of n, and so, to prove the claim, it suffices to prove that γ_n,δ,ε is simple. 
For each j∈ M_0, by Theorem <ref> (iv), η_nj,δ^t_nj is simple, and then by Claim <ref>, η_nj,δ,εas a subarc of η_nj,δ^t_njis also simple and has distinct endpoints; and by Theorem <ref> (i) α_nj,δ,-ε^2 is simple with distinct endpoints. On the other hand, it it is clear, by Theorems <ref> and <ref>, that c_nj,δ,-ε^2 ∩ζ_nj,δ,ε={q_j,δ,-ε^1} and c_nφ_1( j) ,δ,-ε^2∩ζ _nj,δ,ε={q_φ_1( j) ,δ ,-ε^2}. Thus α_nφ_1(j),δ,-ε ^2+η_nj,δ,ε and η_nj,δ,ε +α_nj,δ,-ε^2 are simple arcs. Therefore by Theorem <ref> (iii) and Theorem <ref> (ii) we have: For each j∈ M_0, the arcs α_nφ_1 (j),δ,-ε^2+η_nj,δ,ε+α_nφ _2(j),δ,ε^2 and η_nφ_1(j),δ ,-ε+α_nφ_1(j),δ,-ε^2+η _nj,δ,εare simple for sufficiently large n. Let j∈ M. by (<ref>) and (<ref>), we have η_nj,δ ,ε⊂η_nj,δ⊂ D_f_n(a_0j,δ_1 /3)andα_nφ_l(j),δ,-ε^2⊂ D_f_n(α_0φ_l(j),δ^2,2πε) with l=1,2, for large enough n. Then, for every pair of i and j with i≠φ_1(j),φ_2(j)and i∈ M_0 we have by Lemma <ref> and (<ref>) d_f_n(α_ni,δ,-ε^2,η_nj,δ,ε) ≥ d_f_n( D_f_n(α_0i,δ^2,2πε ),D_f_n(a_0j,δ_1/3)) ≥ d_f_n(α_0i,δ^2,a_0j)-2πε-δ _1/3 ≥ d_f_n(α_0i,a_0j)-2πε-δ_1/3 ≥δ_1-2πε-δ_1/3>0, for sufficiently large n. Then for sufficiently large n, we have for i≠φ_1(j),φ_2(j), α_ni,δ,-ε^2∩η_nj,δ,ε=∅. For sufficiently large n, by (<ref>), (<ref>), Conclusion <ref> and Theorem <ref> (ii) we conclude that γ_n,δ ,ε is a simple curve in Δ, and Claim <ref> is proved completely. Step 4 Construction of the sequence Σ_n,δ,ε^∗ with the same Ahlfors error terms For sufficiently large n, let Δ_n,δ,ε be the closed Jordan domain in Δ enclosed by γ_n,δ ,ε and let 𝒜_n,δ,ε=Δ\Δ_n,δ,ε^∘. It is clear that γ_n,δ,ε,{𝔱_nj,δ,ε ^1}_j∈ M_0,{𝔱_nj,δ,ε^2}_j∈ M_0 divide Δ into 2m_0+1 Jordan domains Δ_n,δ,ε,Δ_nj,δ, and Δ_nj,δ,ε^',j∈ M_0, where Δ_nj,δ is the part of Δ on the right hand side of η_nj,δ and Δ_nj,δ,ε^'=g̃_nj,δ,ε(R_nj,δ,ε^L), with Δ_nj,δ ,ε^'∩Δ_nj,δ=𝔱 _nj,δ,ε^1,and Δ_n,j,δ,ε^'∩Δ_n,j+1,δ =𝔱_nj,δ,ε^2. Then, for each j∈ M_0, we have ∂Δ_nj,δ=-η_nj,δ+I_nj,δ=-𝔱 _nj,δ,ε^1-η_nj,δ,ε-𝔱 _nφ_1(j),δ,ε^2+I_nj,δ, where I_nj,δ is the arc of ∂Δ from a_nφ _1(j),δ^2 to a_nj,δ^1. In fact, I_nj,δ=α_nφ_1(j),δ^3+α_nφ_1 (j)+1+…+α_n,j-1+α_nj,δ^1, and as a limit of I_nj,δ we have I_0j,δ =α_0φ_1(j),δ^3( a_0φ _1( j) ,δ^2,a_0φ_1( j) +1,δ) +α_nφ_1(j)+1( a_0φ_1( j) +1,δ,a_0φ_1( j) +2,δ) +…+α_n,j-1( a_0,j-1,δ,a_0j,δ) +α_0j,δ^1( a_0j,δ,a_0j,δ^1) =α_0φ_1(j),δ^3( a_0φ_1( j) ,δ^2,a_0φ_1( j) +1,δ) +α_0j,δ^1( a_0j,δ,a_0j,δ^1) , where α_nφ_1(j)+1,α_0φ_1( j) +2,δ,…,α_n,j-1( a_0,j-1,δ,a_0j,δ) are all equal to the point-arc a_0j,δ. Now we can prove For sufficiently large n, there exists a surface B_n=( F_n,δ,ε,𝒜_n,δ,ε) such that (i) F_n,δ,ε|_∂Δ=f_0, F_n,δ ,ε=f_n in a neighborhood of γ_n,δ,ε in 𝒜_n,δ,ε, and 𝒜_n,δ,ε is contained in D_F_n(∂Δ,2πδ). (ii) F_n,δ,ε^-1(E_q)∩[ 𝒜 _n,δ,ε\∂Δ] =∅. (iii) For each j∈ M_0, the restriction F_nj,δ,ε^'=F_n,δ,ε|_Δ_nj,δ,ε^' is a homeomorphism from Δ_nj,δ,ε^' onto R_0j,δ,ε such that ( F_nj,δ,ε^',α_nj,δ,-ε ^2) =( f_n,α_nj,δ,-ε^2) =c_0j,δ,-ε^2, ( F_nj,δ,ε^',α_nj,δ^2) =( f_0,α_0j,δ^2) =c_0j,δ^2, ( F_nj,δ,ε^',𝔱_nj,δ ,ε^1) =τ_0j,δ,ε^1,L , ( F_nj,δ,ε^',𝔱 _nj,δ,ε^2) =τ_0j,δ,ε^2,L. 
(iv) For each j∈ M_0, the restriction ( F_nj,δ,ε,Δ_nj,δ) =( F_n,δ,ε|_Δ _nj,δ,Δ_nj,δ) is a surface contained in D(q_0j,δ) such that, corresponding to (<ref>) and (<ref>), ( F_nj,δ,ε,𝔱_nφ_1( j) ,δ,ε^2) =τ_0nφ_1( j) ,δ,ε^2,L, ( F_nj,δ,ε,𝔱_nj,δ,ε^1) =τ_0j,δ ,ε^1,L, ( F_nj,δ,ε,η_nj,δ,ε) =( f_n,η_nj,δ,ε) =ζ_j,δ ,ε, ( F_nj,δ,ε,I_nj,δ) =c_0,j-1,δ^3+c_0j,δ^1, F_nj,δ,ε^-1(q_0j) ={a_0j}, a_0j is the only possible branch point of F_nj,δ,ε and v_F_nj,δ,ε(a_0j)=v_j+1 (see (<ref>) for v_j). In fact there exists a Jordan domain U with U⊂Δ, which contains Δ_n,δ,ε, such that for each j∈ M_0,U∩Δ_nj,δ and U∩Δ_nj,δ,ε^' are closed Jordan domains, that the restriction f_n|_U∩Δ_nj,δ,ε^' can be extended to a homeomorphism F_nj,δ,ε^' from Δ_nj,δ,ε^' onto R_0j,δ,ε^L satisfying (iii), that the restriction ( f_n|_U∩Δ_nj,δ ,U∩Δ_nj,δ) can be extended to a surface ( F_nj,δ,ε,Δ_nj,δ ) contained in D(q_0j,δ) satisfying (iv), that F_nj,δ,ε and F_nj,δ,ε^' agree on 𝔱_nj,δ,ε^1 (note that we assumed j∈ M_0), that F_n,j+1,δ,ε and F_nj,δ,ε ^' agree on 𝔱_nj,δ,ε^2. Then (i) holds trivially, and, by (<ref>) and (<ref>), (ii) also holds. Then it is clear that these 2m_0 mappings agree on the intersection boundary, and so compose the desired global mapping F_n,δ,ε defined on 𝒜_n,δ,ε. The claim is proved. Let f_n,δ,ε^∗(z)={[ f_n(z),z∈Δ\𝒜_n,δ,ε,; F_n,δ,ε,z∈𝒜_n,δ,ε. ]. Then we obtain a sequence of surfaces Σ_n,δ,ε^∗=( f_n,δ,ε^∗,Δ) ∈ℱ( L,m) , with ∂Σ_n,δ,ε^∗=( f_n,δ,ε^∗,∂Δ) =( f_0,∂Σ) =Γ_0. It is possible that Σ_n,δ,ε^∗∉ℱ_r( L,m) , which happens only if, for some a_0j, a_0j∉ f_n,δ,ε^∗-1( E_q) and the integer v_j≥1. But we can show at last that this can not happen. By Claim <ref> (ii), we have ( f_n,δ,ε^∗-1(E_q)∩𝒜_n,δ ,ε) \∂Δ=∅, which implies n( Σ_n,δ,ε^∗) =#f_n,δ,ε^∗-1(E_q)∩Δ_n,δ,ε=#f_n^-1(E_q)∩Δ_n,δ,ε≤n (Σ_n)≤ qd^∗, and then, taking subsequence if necessary, we have n( Σ_n,δ,ε^∗) is a constant for all n=1,2,... For each j≤ M_0, by definition of F_n,δ,ε and by (<ref>) we have A(F_n,δ,ε,Δ_nj,δ)≤( v_j+1) A(D(q_0j,δ))=2π( v_j+1) ( 1-cosδ) ≤π( d^∗+1) δ^2, and A(F_n,δ,ε,Δ_nj,δ,ε^')=A(R_0j,δ,ε^L)<2πε L(c_0j,δ ^2)<L(c_0j,δ^2)δ<Lδ. Thus, by (<ref>) and (<ref>), we have δ<1 and 0<R(f_n,δ,ε^∗,𝒜_n,δ,ε)=R(F_n,δ,ε,𝒜_n,δ,ε)=( q-2) A(F_n,δ,ε,𝒜_n,δ,ε)<Cδ, where C=[ π( d^∗+1) +L] M_0, which is independent of δ,ε and n. Then, by Conclusion <ref>, we may choose δ small enough such that A(Σ_n,δ,ε^∗)=A(f_n,Δ_n,δ,ε)+A(F_n,δ,ε,𝒜_n,δ,ε)≤4π d^∗+1. Hence by Corollary <ref> and (<ref>), we may assume that Σ_n,δ,ε^∗ have the same area and by Claim <ref>, we have the following. A(Σ_n,δ,ε^∗), n (Σ_n,δ,ε^∗) and R(Σ_n,δ,ε^∗) are constants for all n=1,2,…, respectively. By (<ref>) and Claim <ref> implies H(Σ_n,δ,ε^∗)=R(Σ_n,δ ,ε^∗)/L(∂Σ_n,δ,ε^∗) is a constant H for all n=1,2,… Step 5 Complete the Proof of Theorem <ref> We will show H=H(Σ_n,δ,ε^∗)=lim_n→∞ H(Σ_n)=H_L,m, for n=1,2,… Since Σ_n is an extremal sequence in ℱ( L,m)and L(∂Σ_n)→ L(∂Σ_n,δ ,ε^∗)=L(f_0,∂Δ), by (<ref>), we have H=H(Σ_n,δ,ε^∗)≤lim_n→∞ H(Σ_n). So, by (<ref>), to prove (<ref>), it suffices to prove that for any number μ>0 R(Σ_n)<R(Σ_n,δ,ε^∗)+μ. Since Σ_n and Σ_n,δ,ε^∗ coincide on Δ_n,δ,εand by (<ref>) R(f_n,δ ,ε^∗,𝒜_n,δ,ε)>0, to prove (<ref>) it suffices to prove R( f_n,𝒜_n,δ,ε) <μ , for large enough n. We will prove the following Claim, which implies (<ref>) by taking δ small enough. There exists a constant C_1 independent of n, δ, and ε, such that R(f_n,𝒜_n,δ,ε)≤ C_1δ , for large enough n. 
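For the reader's convenience we note the two elementary facts behind the area estimates (<ref>) and (<ref>) above. On the unit sphere the cap D(q_0j,δ) has area 2π( 1-cosδ) ≤πδ^2, so 2π( v_j+1) ( 1-cosδ) ≤π( d^∗+1) δ^2 once v_j+1≤ d^∗+1, as in (<ref>). And by the choice of ε in (<ref>) we have 2πε<δ/6<δ, which is what turns A(R_0j,δ,ε^L)<2πε L(c_0j,δ^2) into A(R_0j,δ,ε^L)<L(c_0j,δ^2)δ<Lδ.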
We first show that for each j∈ M_0 and sufficiently large n R(f_n,Δ_nj,δ)<C_1^'δ, R(f_n,Δ_nj,δ,ε^')<C_2^'δ, for some C_1^' and C_2^' independent of n, δ and ε. By the assumption that η_nj,δ is parameterized by length and t_nj<2π( d^∗+1) δ, we have L(f_n,η_nj,δ)<2π( d^∗+1) δ. By (<ref>) and (<ref>) we have L(f_n,I_nj,δ)→ L(c_0,j-1,δ^3)+0+L(c_0j,δ^1)<4πδ. Thus we have L(f_n,∂Δ_nj,δ)=L(f_n,I_nj,δ-η_nj,δ)<4πδ+2π( d^∗+1) δ=C_1^''δ, where C_1^''=( 4π+2π( d^∗+1) ). By (<ref>) and (<ref>) we have C_1^''δ<δ_E_q, which with (<ref>), Theorem <ref> and Lemma <ref>, implies R(f_n,Δ_nj,δ)=H(f_n,Δ_nj,δ)L(f_n,∂Δ_nj,δ)≤( q-2) L(f_n,∂Δ_nj,δ)≤( q-2) C_1^''δ, and then, putting C_1^'=( q-2) C_1^'', we have (<ref>). By (<ref>) and the fact that R_nj,δ,ε→ R_0j,δ,ε we have R(f_n,Δ_nj,δ,ε^')=( q-2) A(R_nj,δ,ε)≤( q-2) π L(c_nj,δ^2)ε<( q-2) ( π L) δ when n is large enough by (<ref>). This implies (<ref>). (<ref>) and (<ref>) imply Claim <ref>, for 𝒜_n,δ,ε=∪_j∈ M_0[ Δ_nj,δ,ε^'∪Δ_nj,δ] and 𝒜_n,δ,ε\∂Δ contains no point of f_n,δ,ε^∗-1(E_q). Now, (<ref>) is proved completely, and we have reached the point where we can complete the proof of Theorem <ref>. Let f_L_1 be any one of the sequence f_n,δ,ε^∗, with n large enough, δ small enough, and ε (satisfying (<ref>)) small enough, and let Σ_L_1=( f_L_1,Δ). Then (<ref>) implies H(Σ_L_1)=H_L,m, and then by (<ref>), Σ_L_1 is an extremal surface of ℱ(L,m). Since Σ_n is a precise extremal sequence, L_1=lim_n→∞inf L(∂Σ_n)=L(Γ_0) and L(∂Σ_L_1)=L(Γ_0), so Σ_L_1 is a precise extremal surface in ℱ( L,m). If f_L_1 has no branch point outside f^-1(E_q), then Σ_L_1∈ℱ_r(L,m) and thus Σ_L_1 is a precise extremal surface in ℱ_r(L,m). As was pointed out, f_L_1 may not be in ℱ_r( L,m). But when this happens, by Theorem <ref> there exists a surface Σ^'=( f,Δ) such that H(Σ^')≥ H(Σ_L_1), L(∂Σ^')≤ L(∂Σ_L_1) and Σ^'∈ℱ_r(L,m). Then Σ^' is again an extremal surface of ℱ_r(L,m) and by the definition of L_1 we have L(∂Σ^')≥ L(∂Σ_L_1), and thus L(∂Σ^')=L(∂Σ_L_1) and Σ^' is also a precise extremal surface of ℱ_r(L,m). This completes the proof of Theorem <ref>. By Lemma <ref>, Theorem <ref> and (<ref>), we have the following: Let L∈ℒ be a positive number and m be a sufficiently large positive integer. Then there exists a precise extremal surface Σ_L_1 of ℱ_r^'(L,m), ℱ_r(L,m) and ℱ(L,m), such that L(∂Σ_L_1)=L_1≤ L. For any precise extremal surface Σ of ℱ(L,m), L(∂Σ)≥2δ_E_q if L≥2δ_E_q. § RELATION OF PRECISE EXTREMAL SURFACES OF ℱ(L,M) AND ℱ(L,M-1) The goal of this long section is to prove the following theorem, followed by some applications. Let L∈ℒ be a positive number with L≥2δ_E_q. Then for sufficiently large integer m and any precise extremal surface Σ_L_1=( f,Δ) of ℱ_r(L,m) with L(∂Σ_L_1)=L_1≤ L, Σ_L_1 is a precise extremal surface of ℱ_r( L,m-1). To prove this theorem we fix a number L of ℒ with L≥2δ_E_q, and let Σ_L_1=( f,Δ) be a precise extremal surface of ℱ_r(L,m) with L(∂Σ_L_1)=L_1≤ L, and assume m is large enough, m>10L/δ_E_q, and ∂Σ_L_1 is parametrized by length. Then ∂Σ_L_1 has an ℱ(L,m)-partition ∂Σ_L_1=c_1( q_1,q_2) +c_2( q_2,q_3) +⋯+c_m( q_m,q_1), which in fact means that ∂Δ has an ℱ(L,m)-partition for ∂Σ_L_1: ∂Δ=α_1( a_1,a_2) +α_2( a_2,a_3) +⋯+α_m( a_m,a_1), such that f restricted to a neighborhood of α_j^∘ in Δ is a homeomorphism onto a left hand side neighborhood of c_j^∘ and c_j=( f,α_j) is a convex circular arc for j=1,2,…,m, and by Remark <ref> we may assume a_1=1.
As in Remark <ref>, we assume a_j=e^√(-1)ψ_j, 0=ψ_1<ψ_2<…<ψ_m<2π and introduce the continuous subscript x in a_x∈∂Δ and q_x∈∂Σ_L_1 with q_x=f(a_x), but when the letters i,j,k appear as subscripts, they are always integers. Under the above assumptions, we will first prove Lemmas <ref>, <ref>, <ref>, <ref>, <ref>, <ref>, and <ref>. Then we will prove Theorem <ref> easily from Lemmas <ref>, <ref> and <ref>. ∂Σ_L_1 cannot folded at any a_i∈{a_j}_j=1^m\ f^-1(E_q), and thus, for each i with a_i∈{a_j}_j=1^m\ f^-1(E_q), c_i-1and c_i intersect only at q_i in a neighborhood of q_ion S. Assume a_i∈{a_j}_j=1^m\ f^-1(E_q) and ∂Σ_L_1 is folded at a_i. Then c_i-1+c_i contains an arc of the form c_i^'+c_i+1^'=c_i^' -c_i^' such that c_i^'∩ E_q=∅. Therefore we can sew Σ_L_1 along c_i^' to obtain a new surface Σ^'∈ℱ(L,m) so that R(Σ^')=R(Σ_L_1) and L(∂Σ^')<L(∂Σ_L_1 ), say H(Σ^')>H(Σ_L_1), which contradicts the maximality of Σ_L_1. (i) The precise extremal surface Σ_L_1 of ℱ_r( L,m) is also a precise extremal surface of ℱ( L,m) and L_1=L( ∂Σ_L_1 ) ≥2δ_E_q. (ii) f is locally homeomorphic in [ Δ\ f^-1 (E_q)] ∪[ ( ∂Δ) \[ f^-1(E_q)∩{a_j}_j=1^m] ] . (i) follows from Lemma <ref>. By definition of ℱ_r(L,m), each x∈( ∂Δ) \{a_j}_j=1^m, is a simple point of f (see Definition <ref>), say f is a homeomorphism in a neighborhood of x in Δ. Assume for some j_0≤ m, f(a_j_0)∉ E_q. Then the interior angle θ_j_0 of Σ_L_1 at a_j_0is strictly less than or equal to 2π. If θ_j_0 <2π, then a_j_0 is a simple point of f (Definition <ref> (a)). If θ_j_0=2π, then a_j_0 is a simple point of f if ∂Σ_L_1 is simple in a neighborhood of a_j_0 in ∂Δ. Thus a_j_0 is not a simple point of f iff ∂Σ_L_1 is folded at a_j_0, contradicting Lemma <ref>. Thus every point a_j∈{a_j}_j=1^moutside f^-1(E_q)is a simple point of f. Since all branch points of f are contained in f^-1(E_q), the conclusion (ii) holds. ℭ^1=ℭ^1( Σ_L_1) is the collection of all subarcs of ∂Σ_L_1 such that for each c=( f,α) ∈ℭ^1, the following (a)–(c) hold: (a) c is an SCC arc and every point of c^∘ is a simple point of Σ_L_1, say, f restricted to a neighborhood of α^∘ is a homeomorphism. (b) c^∘∩ E_q=∅, say, c∩ E_q⊂∂ c (∂ c is the set of endpoints of c). (c) L(c)<π. ℭ^2=ℭ^2( Σ_L_1) is the subset of ℭ^1 such that each c∈ℭ^2 has two distinct endpoints. (i) All arcs in { c_j} _j=1^m∩ℭ^1 have the same curvature. (ii) { c_j} _j=1^m∩ℭ^1 contains at most one major circular arc (a simple circle is regarded as a major circular arc). (iii) { c_j} _j=1^m∩ℭ^1contains at most one closed arc, say, { c_j} _j=1^m∩[ ℭ^1\ℭ^2] is either empty or contains only one element. If #{ c_j} _j=1^m∩ℭ^1≤1, then there is nothing to prove. So we assume #{ c_j} _j=1^m ∩ℭ^1≥2. Assume that (i) or (ii) of the lemma fails. Then there exist distinct arcs c_j_1 and c_j_2 in { c_j} _j=1^m∩ℭ^1 such (a) or (b) in Deformation <ref> holds. Then there exists a new surface Σ^' ∈ℱ( L,m) such that H(Σ^' )>H(Σ_L_1). Thus Σ_L_1 is not extremal in ℱ (L,m), contradicting Lemma <ref> (i), and so (i) and (ii) hold, and (iii) follows from (ii). Assume that for some j≤ m, c_j∈ℭ^2. Then c_j+c_j+1 is circular at q_j+1 if q_j+1∉ E_q, and c_j-1+c_j is circular at q_j if q_j∉ E_q. The term "circular at q_i" means that f restricted to a neighborhood of a_i in α_i-1+α_i is a homeomorphism onto a simple circular arc, for i=j or j+1. By Lemmas <ref> and <ref>, for i=j,j+1, when q_i∉ E_q, we have that f is not only circular at q_i but also homeomorphic in a neighborhood of a_i in Δ. Assume c_j∈ℭ^2 and q_j+1∉ E_q. 
Then c_j is not closed and by Lemma <ref> f is homeomorphic in a neighborhood of α_j\{a_j}=α_j^∘∪{ a_j+1} in Δ. Then by Lemma <ref> we have the following claim. For sufficiently small ε_0>0, and for the arc C_j=c_j( q_j,q_j+1) +c_j+1( q_j+1 ,q_j+1+ε_0) , there exist a number θ∈(0,π/2) and a closed simple Jordan domain ( T_j,ε_0,θ,C_j) =( f,D_j,ε_0,θ) of Σ_L_1 with the old boundary C_j, such that the following hold. (1) D_j,ε_0,θis a Jordan domain in Δ with ∂ D_j,ε_0,θ=A_j+β_j, in which A_j=α_j( a_j,a_j+1) +α_j+1( a_j+1,a_j+1+ε_0) , β_j is a simple arcs in Δ with β_j^∘⊂Δ. (2) T_j,ε_0,θ is a closed Jordan domain in some open hemisphere S_1 on S with ( T_j,ε_0,θ\{q_j}) ∩ E_q=∅, ∂ T_j,ε_0,θ=C_j+τ_j, in which τ_j is a polygonal path from q_j+1+ε_0 to q_j. (3) The interior angle of T_j,ε_0,θ at q_j and q_j+1 are both θ. To prove the lemma, we will deduce a contradiction under the assumption: c_j+c_j+1 is not circular at q_j+1. Let ε≪ε_0be a positive number which is so small that for the arc γ_j,ε=γ_j,ε( q_j,q_j+1+ε) =c_j( q_j,q_j+1) +c_j+1( q_j+1 ,q_j+1+ε) , L(γ_j,ε)<π. We first show the following. For sufficiently small ε<ε_0, there exists a surface F_ε=( f_ε,D_j,ε _0,θ) in S_1 such that f_ε agree with f in a neighborhood of (β_j( a_j+1+ε_0,a_j) )\{a_j}in D_j,ε_0,θ, ∂ F_ε=γ_j,ε^'+c_j+1( q_j+1+ε,q_j+1+ε_0) +τ_j, where γ_j,ε^'=γ_j,ε^'( q_j,q_j+1+ε) is the SCC arc from q_j to q_j+1+ε with L( γ_j,ε^') =L(γ_j,ε), and moreover, A(F_ε)>A(T_j,ε_0,θ), F_ε\{q_j}⊂ S_1\ E_q, and q_j+1+ε is the only possible branch value of f_ε. In fact, F_ε is a deformation of T_j,ε _0,θ=( f,D_j,ε_0,θ) so that the new boundary τ_j^∘ and the part c_j+1( q_j+1+ε,q_j+1+ε_0) of the old boundary of T_j,ε_0,θ remain unchanged, while the part γ _j,ε of the old boundary is changed into γ_j,ε^'. It is clear that as ε→0, γ_j,ε ^' converges to c_j and thus the left hand side angle of τ _j+γ_j,ε^' at q_j tends to θ (this may fail when c_j∈ℭ^1\ℭ^2, the assumption c_j∈ℭ^2 is used here). Thus, when ε is small enough, the circular arc γ_j,ε^' does not intersects τ_j\{q_j} as sets in S_1, and it is clear that for sufficiently small ε>0, γ_j,ε^'+c_j+1( q_j+1+ε,q_j+1+ε_0) +τ_j encloses a surface F_ε=( f_ε,D_j,ε_0,θ) in S_1, which is just a simple closed domain in S_1 when γ_j,ε^'+c_j+1( q_j+1+ε,q_j+1+ε_0) is simple. It is clear that by Lemma <ref> and Condition <ref> we have (<ref>). When ε→0, as sets in S_1,F_ε converges to T_j,ε_0,θ and so (<ref>) holds. When γ_j,ε^'+c_j+1( q_j+1+ε,q_j+1+ε_0) is not simple, γ_j,ε^'+c_j+1( q_j+1+ε,q_j+1+ε_0) contains a small closed arc, which is consisted of two short subarcs of γ_j,ε^' and c_j+1( q_j+1+ε,q_j+1+ε_0) near q_1+j+ε, and which tends to q_j+1 as ε→0, and we may make q_j+1+ε to be the only branch value of F_ε. It is clear that ( f,β_j) is equivalent to ( f_ε,-β_j) and since the interior angle of F_ε=( f_ε,D_j,ε_0 ,θ)at a_j tends to that of ( f,D_j,ε_0,θ) =T_j,ε_0,θ, we can make F_ε and f agree in a neighborhood of β\{a_j} in D_j,ε_0,θ. The assertion is proved. Now we can deform Σ_L_1 by replacing the part T_j,ε _0,θ of Σ_L_1 with F_ε, that is, we cut ( T_j,ε_0,θ,C_j) from Σ_L_1 along the new boundary τ_j^∘ to obtain a surface Σ_1, and then we sew F_ε and Σ_1 also along the new boundary τ_j^∘. Then by (<ref>) and (<ref>) we see that Σ_L_1 becomes a new surface Σ^'=( f^',Δ) such that ∂Σ^'=c_1+⋯+c_j-1+γ_j^' +c_j+1(q_j+1+ε,q_j+2)+c_j+2+⋯+c_m. We have to show Σ^'∈ℱ( L,m) . 
Since f_ε agree with f in a neighborhood of β_j \{a_j} in D_j,ε_0,θand the partition (<ref>) is an ℱ( L,3) partition of ∂ F_ε, f^'can be defined by f on Δ\D_j,ε_0,θ and f_ε on D_j,ε_0,θ, and so (<ref>) is an ℱ( L,m)-partition of ∂Σ^', by (<ref>) and (<ref>), and so Σ ^'∈ℱ( L,m) . By Assertion <ref> we also have L(∂Σ_L_1)=L(∂Σ^'),n̅( Σ_L_1) =n̅( Σ^') ,A(Σ ^')>A(Σ_L_1), which implies H(Σ^')>H(Σ_L_1). Then Σ_L_1 is not an extremal surface of ℱ( L,m) , which contradicts Lemma <ref> (i). Assume that L(c_1)<δ_E_q/2. Then c_1is not closed, say, q_1≠ q_2. Though the proof is complicated, the idea is quite simpler, which is implied in the proof of Case 1 and the discussion for other cases are essentially the same, with a little difference. To prove the result, we assume that the opposite holds, say, c_1is a whole circle. Then (<ref>) implies the following. c_1 is a strictly convex circle from q_1 to q_2=q_1, the length and diameter of c_1 are both less than δ_E_q/2, and c_1∩ E_q is either empty or a singleton. We denote by T_0 the domain enclosed by c_1. First of all we have by Lemma <ref> the following. Σ_L_1 contains no closed simple domain of the form ( T_0,c_1) =( T_0,α _1) (as in Remark <ref> (iii), c_1 should be understood as ( f,α_1)). This in fact means that there is no subdomain D of Δ such that α_1⊂∂ D and f is a homeomorphism from D\{a_1,a_2} onto T_0\{q_1}. We let h_θ,q_1( w) be the rotation h_θ,q_1( w) =φ_q_1^-1∘φ_θ∘φ_q_1(w),w∈ S, of S, where φ_q_1 is a rotation of S putting q_1 into 0, and φ_θ is the rotation w↦ e^-iθw of S,θ∈0,π]. Recall that, by Remark <ref>, q_1+1/2 is the middle point of c_1. Write c_1,θ=h_θ,q_1( c_1) , let q_2,θ^'∈ S be the intersection of c_1,θ^∘=c_1,θ\{q_1} and c_1^∘=c_1\{q_1} with q_2,0^'=q_1+1/2, let c_1 =c_11,θ+c_12,θ, c_1,θ =c_11,θ^'+c_12,θ^' be partitions of c_1 and c_1,θ with c_11,θ=c_1( q_1,q_2,θ^') ,c_12,θ=c_1( q_2,θ^',q_2) , c_11,θ^'=c_1,θ( q_1,q_2,θ^') ,c_12,θ^'=c_1,θ( q_2,θ^' ,q_2) , let T_θ be the disk enclosed by c_1,θ, T_θ^'=T_0\T_θ and let T_θ^''=T_θ\T_0. Then it is clear that c_12,θ^' divides T_0 into two Jordan domains T_θ^' and T_θ^''', and c_11,θ divides T_θ into two Jordan domains T_θ^''' and T_θ^''. By Lemma <ref> we have For sufficiently small θ>0, ( T_θ^',c_12,θ) is a simple closed Jordan domain of Σ_L_1 (see Remark <ref> (ii) and (iii)) with the new boundary c_12,θ^'∘and old boundary c_12,θ. That is to say, there exist two simple paths α_12,θ and α_12,θ^' in Δ, both are from a point a_2,θ^'∈α_1^∘ to a_2, such that α_12,θ=α_1( a_2,θ^',a_2) ⊂α_1,α_12,θ^'∘=α_12,θ ^'∘( a_2,θ^',a_2) ⊂Δ, α_12,θ-α_12,θ^' encloses a Jordan domain D_θ^' in Δ, c_12,θ=( f,α_12,θ) ,c_12,θ^'=( f,α_12,θ^') , and f restricted to D_θ^' is a homeomorphism onto T_θ^'. We let α_11,θ=α_1( a_1,a_2,θ^') . Then we can cut T_θ^'\ c_12,θ^' from Σ_L_1 along c_12,θ^', and sew T_θ^'' to Σ_L_1 \ T_θ^' along c_11,θ, to obtain a surface Σ_θ=( f_θ,Δ) . This Σ_θ can be obtained in another way as follows. By Lemma <ref>, there exists a closed path c_1^' in T_0 from q_1 to q_1, oriented anticlockwise, such that c_1^'∘⊂ T_0, c_1^' is a polygonal path and for the domain T_0^' enclosed by c_1-c_1^', the two interior angles of T_0^' at q_1 are both positive, and that ( T_0^',c_1) is a simple closed domain of Σ_L_1, say, there exists a Jordan domain D_0^' in Δ such that ∂ D_0^'∩∂Δ=α_1 and f:D_0^'\{a_1,a_2}→T_0^'\{q_1} is a homeomorphism (see Remark <ref> (ii)). We let α_1^'=( ∂ D_0) \α_1^∘, oriented from a_1 to a_2. 
Then -c_1^'∘=( f,-α_1^'∘) is the new boundary of the domain ( T_0^' ,c_1) of Σ_L_1. For a small enough number θ>0, c_1,θ-c_1^' encloses a domain T_0,θ^' and ( T_0,θ^',c_1,θ) can be regarded as a deformation of ( T_0^',c_1) , sharing the same new boundary -c_1^'∘. That is to say, when we rotate c_1 by h_θ,q_1to the position c_1,θ, c_1,θ-c_1^' still enclosed a simply connected domain T_0,θ^' which shares the new boundary -c_1^'∘ of the domain ( T_0^',c_1)of Σ_L_1 and when we replace ( T_0^' ,c_1) with ( T_0,θ^',c_1,θ) , we obtain the new surface Σ_θ=( f_θ,Δ) . It is clear that ( T_0,θ^',c_1,θ) is a simple closed domain of the new surface Σ_θ. On the other hand, we do not change Σ_L_1\( T_0^',c_1) in the deformation. Therefore Σ_θ∈ℱ( L,m) and the partition (<ref>) becomes the following ℱ( L,m)-partition of ∂Σ_θ: ∂Σ_θ=c_1,θ+c_2+⋯+c_m. It is clear that A(Σ_θ)=A(Σ_L_1),L(∂Σ_θ)=L(∂Σ_L_1). By Condition <ref>, the disk D(q_1,δ_0) with δ_0=d( q_1,q__1+1/2)is contained in D(q_1,δ_E_q/2)⊂ S and contains at most one point of E_q. Then there are only three possibilities: Case 1. D(q_1,δ_0)∩ E_qis either empty or is a singleton {q^∗} contained in c_1, and thus T_0∩ E_q=∅. Case 2. ( D(q_1,δ_0) \T_0) ∩ E_q={𝔞} with 𝔞∈ c_1,θ_1∩ c_1,-θ_2, where θ _1>0,θ_2>0 and π<θ_1+θ_2<2π. That is to say, θ_1 is the first number in (0,2π) so that 𝔞∈ c_1,θ_1 and θ_2 is the first number in (0,2π) so that 𝔞∈ c_1,-θ_2, and more over, T_0∩ E_q=∅. Case 3. ( D(q_1,δ_0)\ T_0) ∩ E_q=∅, and T_0 contains at most one point of E_q. Assume Case 1 occurs. Recall that c_12,0=c_12,0^'=c_12( q_1,q_1+1/2) . Then we may assume D(q_1,δ_0)∩ E_q={q^∗}∈ c_12,0. When it is empty, the proof is the same, and when {q^∗}∈ c_11,0, the proof can be proceeded based on consider the rotation φ _-θ,q_1 in a symmetrical way. By assumption of Case 1, we have n( Σ_θ) =n( Σ_L_1) , and then by (<ref>) we have H(Σ_L_1)=H(Σ_θ). Let θ_0 be the maximal positive number in (0,π] so that Σ_θ∈ℱ( L,m) is well defined for all θ∈(0,θ_0). This is equivalent to that θ_0 is the maximal number such that ( T_θ^',c_12,θ) is a closed simple Jordan domain of Σ_L_1 with the new boundary -c_12,θ^'∘ and old boundary c_12,θ for all θ∈(0,θ_0) (see Remark <ref> (ii)), in other words, for all θ∈(0,θ_0), c_12,θ^'∘ is in the interior of Σ_L_1^∘, which just means α_12,θ^'∘⊂Δ, and f has no branch point in α _12,θ^'∘. Then by Calim <ref> we have θ_0<π, otherwise, (T_π,c_12,π)=( T_0 ,c_1) is a closed simple domain of Σ_L_1, by Lemma <ref>. Moreover, we can show the following. ( T_θ_0,c_12,θ_0) is still a simple closed Jordan domain of Σ_L_1 so that the new boundary is contained in c_12,θ_0^'∘ and the old boundary contains c_12,θ_0. That is to say, there exist two simple paths α_12,θ_0 and α_12,θ_0^' in Δ, both are from the point a_2,θ_0^' ∈α_1^∘ to a_2, such that α_12,θ_0=α_1( a_2,θ_0^' ,a_2) ⊂α_1,α_12,θ_0^'∘ =α_12,θ_0^'∘( a_2,θ_0^' ,a_2) ⊂Δ, α_12,θ_0-α_12,θ_0^' encloses a Jordan domain D_θ_0^' in Δ, c_12,θ_0=( f,α_12,θ_0) ,c_12,θ_0 ^'=( f,α_12,θ_0^') , and f restricted to D_θ_0^' is a homeomorphism onto T_θ_0^'. The difference between Claims <ref> and <ref> is just that α_12,θ_0^'∘ may not be the new boundary, but contains the new boundary, say, Δ in (<ref>) should be replaced by Δ when θ=θ_0, as in (<ref>). In fact by definition of θ_0, considering that T_θ_0^' =∪_θ∈0,θ_0)T_θ^' and that T_θ^' is an increasing family as θ increases, f^-1 has a univalent branch g_θ_0 defined on T_θ_0^'. 
Then by Lemma <ref> g_θ_0 can be extended to a univalent branch of f^-1 defined on T_θ_0^'such that g_θ_0( c_12,θ_0) =α_12,θ_0. Thus ( T_θ_0^',c_12,θ_0) is a closed simple Jordan domain of Σ_L_1. Then α _12,θ_0^'=g_θ_0( c_12,θ_0^') is a well defined arc in Δ from a_2,θ _0^' to a_2, and D_θ_0^' =g_θ_0( T_θ_0^') is a closed Jordan domain in Δ. Therefore Claim <ref> hold. We let Δ_θ_0=Δ\D_θ_0^' . By condition of Case 1, c_12,θ_0^'∘∩ E_q=∅, and so we have f has no branch value on c_12,θ_0^'∘and thus each component of α_12,θ_0^'\∂Δ has a neighborhood in Δ on which f is a homeomorphism. If α_12,θ_0^'∘∩∂Δ=∅, then ( f,Δ_θ_0) is a surface in ℱ( L^',m+1) ,with L^'=L-L(c_12,θ_0)+L(c_12,θ_0^'), and thus for small enough ρ>0 and the arc 𝔠_ρ=c_1( q_2,θ _0+ρ^',q_2,θ_0^') +c_12,θ_0 ^', by Lemma <ref> ( f,Δ_θ_0 ) contains the simple and closed Jordan domain ( K_ρ,𝔠_ρ) with K_ρ=T_θ _0+ρ^'\T_θ_0^' such that 𝔠_ρ is the old boundary, an arc of ( f,∂Δ_θ_0) , and c_12,θ_0+ρ^'∘ is the new boundary. Then we have ∂ K_ρ=𝔠_ρ-c_12,θ_0+ρ^' and ( T_θ_0^',c_12,θ_0) can be extended to the larger simple closed Jordan domain ( T_θ_0+ρ^',c_12,θ_0+ρ) =( T_θ_0^',c_12,θ_0) ∪( K_ρ,𝔠_ρ) of Σ_L_1, and so Σ_θ is well defined for all θ∈0,θ_0+ρ), contradicting the maximal property of θ_0. Thus we have c_12,θ_0^'∘=( f,α_12,θ_0 ^') has to intersect ∂Σ_L_1, say, α_12,θ_0^'∘∩∂Δ≠∅. By Condition <ref>, c_12,θ_0^' is strictly convex, and it is clear that α_12,θ_0^'∩α_12,θ_0 ={a_2,θ_0^',a_2}. On the other hand, by Claim <ref>, regarding ( ∂Δ) \α_12,θ_0 ^∘ and -α_12,θ_0^' as α and β in Lemma <ref>, we conclude that α_12,θ_0^'∘∩( ∂Δ) \α_12,θ_0^∘⊂{ a_j} _j=1^m is a finite set {a_i_1 ,…,a_i_k}. Thus α_12,θ_0^'∩∂Δ={a_2,a_i_1,…,a_i_k,a_2,θ_0^'}, arranged anticlockwise on ∂Δ. Therefore, by Claim <ref>, we have α_12,θ_0^'∘∩( ∂Δ) \α_12,θ_0^∘={a_i_1 ,…,a_i_k} divides α_12,θ_0^'∘ into k+1 open arcs, each of which has a neighborhood in Δ on which f is a homeomorphism. Then Σ_θ_0 is no longer a surface, but is consisted of k+1 surfaces. We will show that these surfaces are all contained in ℱ ( L,m-1) ⊂ℱ( L,m) . We only prove this in the case that the finite set α_12,θ_0^'∘ ∩∂Δis a singleton {a_i_1}, say, k=1. When k>1, the discussion is similar and more simpler. It is clear that we can define the partition c_1,θ_0=𝔠_1,θ_0^'+𝔠 _1,θ_0^''=c_1,θ_0( q_1,q_i_1) +c_1,θ_0( q_i_1,q_1) . Then Σ_θ_0 is consisted of two surfaces Σ_θ_0 ^1 and Σ_θ_0^2 linked at the point ( f,a_i_1 ) , so that ∂Σ_θ_0^1=𝔠_1,θ_0^'+c_i_1 +⋯+c_m, ∂Σ_θ_0^2=𝔠_1,θ_0^''+c_2+⋯+c_i_1-1. It is clear that the partition (<ref>) contains at least 2 terms, and so does (<ref>). Thus by Claim <ref> we have Σ_θ_0^j∈ℱ( L,m-1) ,j=1,2, which implies {Σ_θ,^1,Σ_θ_0^2}⊂ℱ( L,m) . We still assume k=2. Then we see that (<ref>) still holds in the following form ∑_j=1^2A(Σ_θ_0^j)=A(Σ_L_1),∑_j=1 ^2L(∂Σ_θ_0^j)=L( ∂Σ_L_1) , and by (<ref>) we have ∑_j=1^2n(Σ_θ_0 ^j)=n(Σ_L_1), and then ∑_j=1^2R(Σ _θ_0^j)=R(Σ_L_1). This contradicts Lemma <ref>, and thus Case can not occur. Assume Case 2 occurs. If θ_1≥π, then we can obtain a contradiction as in Case 1. Assume θ_1<π. Then we may find the maximum θ_0^' in (0,θ_1] so that ( T_θ_0^'^' ,c_12,θ_0^') is a simple and closed Jordan domain of Σ_L_1, by repeating the argument for Claim (<ref>). If θ_0^'<θ_1, or if θ_0^'=θ_1 and α_12,θ_0^'^'∘∩∂Δ≠∅, then we can find a contradiction again using the same method of Case 1. Assume θ_0^'=θ_1and we cannot obtain a contradiction as in Case 1. 
Then we have ( T_θ_1^',c_12,θ_1) =( T_θ_1^',c_12,θ_0^') is a simple closed Jordan domain of Σ_L_1 such that c_12,θ_1^'∘ is the new boundary, say, α _12,θ_1^'∘⊂Δ. Then we apply the above argument to obtain Σ_-θ for θ>0, by rotating T_0 into T_-θ in the other direction in a symmetrical way. That is, we can construct surfaces Σ_-θ,θ>0, such that the partition (<ref>) works, in which c_1,θ becomes the rotation c_1,-θ of c_1. Then we can either obtain a contradiction as in the Case 1, or we must have that ( T_-θ_2 ^',c_11,-θ_2) =( T_0\ T_-θ_2,c_11,-θ_2) is a simple closed Jordan domain of Σ_L_1, as Claim <ref>. Then we can see that ( T_-θ_2^'∪T_θ_1^' ,c_1) =( T_0,c_1) is a simple Jordan domain of Σ_L_1, contradicting Claim <ref>. Thus in Case 2, we also have a contradiction. Note that θ_1+θ_2≥π and thus T_-θ_2^'∪T_θ_1^' =T_0. In fact f_n^-1 has branches defined on T_-θ_2^'and T_θ_1^' which are agree on T_-θ_2^'∩T_θ_1^' which contains an arc of c_1, and thus the two branches are agree on T_-θ_2^'∩T_θ_1^', and so these two branches defines a global branch on T_0, contradicting Claim <ref>. Assume Case 3 occurs. Then we may assume D( q_1,δ_0) ∩ E_q=T_0∩ E_q={q^∗}, otherwise we have D( q_1,δ_0) ∩ E_q=T_0∩ E_q=∅, and we can obtain a contradiction based on the discussion entirely the same as in Case 1 in this later case. Then we may assume q^∗∈ c_12,θ ^∗^'∘for some θ^∗∈(0,π). If θ^∗≥π, we can obtain a contradiction as in Case 1, by Claim <ref>. Let θ_0 be the maximum in (0,θ^∗] such that Σ_θ is well defined, say, α_12,θ^'∘⊂Δ, for every θ∈(0,θ_0). If θ_0 <θ^∗, then we can obtain a contradiction as in Case 1. So we may assume that θ_0=θ^∗. The argument in Case 1 for Σ_θ with θ∈(0,θ_0) applies, and (<ref>) and (<ref>) both hold for θ∈ (0,θ_0), and by (<ref>) we have n( Σ _θ) =n( Σ_L_1) for all θ∈0,θ_0)=[0,θ^∗). Therefore (<ref>) still holds and thus Σ_θ is a precise extremal surface of ℱ ( L,m) for every θ∈θ,θ_0 )=[θ,θ^∗). As in Case 1, we can show that α_12,θ_0^'∘ ∩∂Δ is a finite set {a_i_1,a_i_2,…,a_i_k} in {a_j}_j=1^m, and then Σ_θ_0 is consisted of k+1 surfaces {Σ_θ_0^j} _j=1^k+1 of ℱ( L) linked at q_i_1,…,q_i_k, with ∑_j=1^k+1A(Σ_θ_0^j)=A(Σ_L_1),∑_j=1 ^k+1L(∂Σ_θ_0^j)=L(∂Σ_L_1). Here k=0 and Σ_θ_0^j=Σ_θ_0 when α_12,θ_0^'∘∩∂Δ=∅. By the assumption (<ref>), we have q^∗∈ c_12,θ_0^'∘=c_12,θ^∗^'∘. We show that k≠0 and q^∗∈{q_i_1,q_i_2,…,q_i_k}. If k=0, then by (<ref>) q^∗ is in the new boundary c_12,θ^∗^'∘ of ( T_θ_0 ^',c_12,θ_0) and d_f_θ( f_θ^-1(E_q) ,∂Δ)≤ d( q^∗,c_12,θ^') →0 as θ→θ_0. But (<ref>) can't hold by Claim <ref> and Lemma <ref>. Thus k≥1. Assume (<ref>) fails. Then q^∗ is again in the new boundary c_12,θ^∗^'∘ of ( T_θ_0 ^',c_12,θ_0) and (<ref>) again holds as θ→θ_0, which again contradicts Claim <ref> or Lemma <ref>. Thus we have (<ref>). Then ( α _12,θ_0^'∘\{ a_i_j} _j=1 ^k) ∩ f^-1(E_q)=∅ and each component of α_12,θ_0^'∘\∂Δ=α _12,θ_0^'∘\{ a_i_j} _j=1 ^k has a neighborhood in Δ on which f is homeomorphic (by Lemma <ref> (ii)). Then we can show that each Σ_θ_0 ^i is contained in ℱ( L,m) for i=1,2,…,k+1 (this is easy to see when k=1 as in Case 1, and when k>1, the proof is similar). On the other hand, (<ref>) and (<ref>) implies that ∑_j=1^k+1n( Σ_θ_0^j) =n( Σ_L_1) , which with (<ref>) implies that ∑_j=1^k+1R(Σ_θ_0 ^j)=R(Σ_L_1)and thus by (<ref>) Σ_L_1 is decomposable in ℱ( L,m) , contradicting Lemma <ref>. We have proved that Cases 1–3 can't occur. Thus Condition <ref> can't be satisfied by Σ_L_1 and the lemma is proved completely. 
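The decompositions produced in Cases 1 and 3 above (and again in several later arguments) are ruled out by the same elementary bookkeeping, which we record here only as a sketch; we assume, consistently with the identity R=H·L used at the beginning of this section, that H(Σ)=R(Σ)/L(∂Σ), and the reading of Lemma <ref> given at the end is merely our interpretation of how it is applied. Suppose Σ_L_1 splits into surfaces Σ^1,…,Σ^k+1 of ℱ(L,m), k≥1, with
\[
\sum_{j}A(\Sigma^{j})=A(\Sigma_{L_1}),\qquad
\sum_{j}L(\partial\Sigma^{j})=L(\partial\Sigma_{L_1}),\qquad
\sum_{j}n(\Sigma^{j})=n(\Sigma_{L_1}).
\]
Then, as in the cases above, \(\sum_{j}R(\Sigma^{j})=R(\Sigma_{L_1})\), and since every \(L(\partial\Sigma^{j})>0\),
\[
\max_{j}\frac{R(\Sigma^{j})}{L(\partial\Sigma^{j})}
\;\ge\;
\frac{\sum_{j}R(\Sigma^{j})}{\sum_{j}L(\partial\Sigma^{j})}
\;=\;
\frac{R(\Sigma_{L_1})}{L(\partial\Sigma_{L_1})}
\;=\;H(\Sigma_{L_1}).
\]
Since there are at least two pieces, each with boundary of positive length, some piece Σ^j_0 satisfies H(Σ^j_0)≥ H(Σ_L_1) while L(∂Σ^j_0)<L(∂Σ_L_1)=L_1, so Σ^j_0 would be an extremal surface of ℱ(L,m) with boundary strictly shorter than L_1, which is incompatible with Σ_L_1 being precise extremal; this is how we read the application of Lemma <ref> in these cases.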
The proof of Lemma <ref> consists in deducing contradictions when c_1 is a circle with L(c_1)<δ_E_q/2. If the condition that Σ_L_1 is precise extremal in ℱ_r(L,m) is strengthened to the condition that Σ_L_1 is precise extremal in ℱ_r(L)=∪_m=1^∞ℱ_r(L,m), the discussion can be greatly simplified, since we do not need to keep track of the number of edges. For example, we can easily prove the following: Assume that Σ_L_1∈ℱ_r(L,m) is a precise extremal surface of ℱ_r(L), that (<ref>) is an ℱ(L,m)-partition such that {q_j}_j=1^m⊂ E_q, and that c_1 is a circle with L(c_1)<2π and c_1∩ E_q={q_1}. Then there exists a precise extremal surface Σ_L_1^' of ℱ_r(L) such that ∂Σ_L_1^' has the following ℱ(L,m)-partition ∂Σ_L_1^'=c_1,θ_0+c_2+⋯+c_m, where c_1,θ_0 is the rotation h_θ_0,q_1(c_1) of c_1 with angle θ_0∈(0,π) and c_1,θ_0^∘∩ E_q≠∅; that is, c_1,θ_0∩ E_q contains not only q_1 but also a point other than q_1. In Lemma <ref> we assumed L(c_1)<δ_E_q/2 and permitted c_1^∘ to contain a point of E_q. In the Corollary, by contrast, we assume c_1^∘∩ E_q=∅ but only L(c_1)<2π, and we require that the closed arc c_1,θ_0 of the new surface boundary contain not only the point q_1 of E_q but also another point of E_q. Since L(c_1)<2π, c_1 is strictly convex. We still use the notation of the above proof. For sufficiently small θ>0, by (<ref>) we have the following conclusions (1)–(3): (1) T_θ^'∩ E_q={q_1}. (2) T_θ^''∩ E_q={q_1}. (3) (T_θ^',c_12,θ) is a simple closed Jordan domain of Σ_L_1. Then Σ_θ is a well-defined surface in ℱ(L,m). Let θ_1,θ_2,θ_3 be the suprema of θ in (0,π) such that (1), (2), (3) hold on [0,θ], respectively. We first consider the case θ_1≤θ_2. This means that the moving circle c_θ first meets T_0∩ E_q before it first meets (S\T_0)∩ E_q. If θ_3≤θ_1, then we can show that Claims <ref> and <ref> hold for c_12,θ_3^'∘=(f,α_12,θ_3^') and α_12,θ_3^'∘, and thus Σ_θ_3 consists of a finite number k+1 (k≥1) of surfaces Σ_θ_3^j in ℱ(L) with ∑_j=1^k+1A(Σ_θ_3^j)=A(Σ_L_1), ∑_j=1^k+1L(∂Σ_θ_3^j)=L(∂Σ_L_1), and ∑_j=1^k+1n(Σ_θ_3^j)≤n(Σ_L_1), with the inequality strict if and only if θ_3=θ_1 and f^-1(E_q)∩α_12,θ_3^'∘\{a_i_1,a_i_2,…,a_i_k}≠∅; this implies ∑_j=1^k+1R(Σ_θ_3^j)≥ R(Σ_L_1), ∑_j=1^k+1L(∂Σ_θ_3^j)=L(∂Σ_L_1), contradicting Lemma <ref> for ℱ(L). If θ_3>θ_1, then Σ_θ_1 is a surface of ℱ(L) with (<ref>) and (<ref>) holding for θ=θ_1. But n(Σ_L_1)>n(Σ_θ_1). Thus H(Σ_θ_1)>H(Σ_L_1) and Σ_L_1 is not extremal in ℱ(L), contradicting the hypothesis. We have proved the result for the case θ_1≤θ_2. Assume θ_1>θ_2. This means that the moving circle c_θ first meets E_q from outside of c_θ before it first meets E_q from inside of c_θ. If θ_3>θ_2, then Σ_θ_2∈ℱ(L), (<ref>) holds for θ_2 and n(Σ_L_1)=n(Σ_θ_2), and thus R(Σ_L_1)=R(Σ_θ_2). It is clear that c_1,θ_2 contains more than one point of E_q and q_1∈ c_1,θ_2. Therefore Σ_θ_2 satisfies the corollary. Assume θ_3≤θ_2. Then we can obtain a contradiction as in the discussion for the case θ_1≤θ_2 and θ_3≤θ_1. Recall that our goal in this section is to prove Theorem <ref>. We will in fact deduce a contradiction from the opposite of the conclusion, that is, from the condition that Σ_L_1∉ℱ_r(L,m-1). Thus we assume the following condition before proving Theorem <ref>. ∂Σ_L_1 has no ℱ(L,m-1)-partition, say, ∂Σ_L_1∈ℱ(L,m)\ℱ(L,m-1). We first show the following. Under Condition <ref>, every c_j of (<ref>) which is contained in ℭ^2 satisfies c_j∩ E_q=∂ c_j and L(c_j)≥δ_E_q. It is trivial that (<ref>) implies (<ref>). Assume that (<ref>) fails for some c_j_0 contained in ℭ^2.
Then, by definition of ℭ^2, c_j_0 is not closed and one endpoint of c_j_0 is not in E_q, and we may assume q_j_0 is not in E_q. Thus by Lemma <ref> c_j_0-1+c_j_0 is circular at q_j_0 and f restricted to a neighborhood of a_j_0 in Δ is homeomorphic, and so ( f,α_j_0-1+α_j_0) is a locally simple circular arc, and c_j_0-1 and c_j_0 are both in the same circle. If c_j_0-1+c_j_0 is simple, then c_j_0-1+c_j_0 in (<ref>) can be merged into one edge so that (<ref>) becomes an ℱ( L,m-1)-partition of ∂Σ_L_1, contradicting Condition <ref>. Then c_j_0-1+c_j_0 is locally simple but not globally simple. Thus, considering that both c_j_0-1 and c_j_0 are simple and c_j_0∈ℭ^2, we have q_j_0-1∈ c_j_0\{ q_j_0+1}and q_j_0+1∈ c_j_0-1^∘, and thus we can write c_j_0-1+c_j_0=C_j_0-1^'+C_j_0, in which C_j_0-1^'=C_j_0-1^'( q_j_0 -1,q_j_0+1) =c_j_0-1( q_j_0-1,q_j_0+1) and C_j_0=C_j_0( q_j_0+1,q_j_0+1) is the circle c_j_0-1( q_j_0+1,q_j_0) +c_j_0( q_j_0,q_j_0+1) . Therefore we have the ℱ( L,m)-partition ∂Σ_L_1=c_1+⋯+c_j_0-2+C_j_0-1^'+C_j_0 +c_j_0+1+⋯+c_m, and as a set on S, C_j_0-1^'⊂ c_j_0 and the initial point q_j_0-1 of C_j_0-1^' is not in E_q (we have assumed q_j_0∉ E_q and thus q_j_0-1∈ c_j_0 \{ q_j_0+1}⊂ S\ E_q). We conclude this by C_j_0-1^'∈ℭ^2and the initial point q_j_0-1 of C_j_0-1^'is not in E_q. By Claim <ref>, the above discussion about c_j_0-1+c_j_0 applies to c_j_0-2+C_j_0-1^'. Then we can repeat the same argument m-1 times to obtain an ℱ( L,m)-partition ∂Σ_L_1=C_1( q_j_0+1,q_j_0+1) +C_2( q_j_0+1,q_j_0+1) +⋯+C_m( q_j_0 +1,q_j_0+1) such that all C_1,…,C_m are simple circles contained in the same circle C_j=C, and by Lemma <ref> (ii) each point p∈ C_j\ E_q is a simple point of Σ_L_1for j=1,…,m (see Definition <ref> (a) and (b)). Then (<ref>) and (<ref>) imply L(C_j)=L(C)=L(∂Σ_L_1)/m=L_1/m≤L/m≤δ_E_q/10, which implies that C contains at most one point of E_q. Then we may assume C_j=C_j( q_j_0+1^',q_j_0+1^') such that C_j^∘=C_j\{q_j_0+1^'}⊂ S\ E_q, say, C_j∈ℭ^1\ℭ^2. Then by Lemma <ref>, we have m=1. This contradicts (<ref>) and Condition <ref>. Assume that Condition <ref> holds and L(c_1+c_2)<δ_E_q/2. Then the following hold. (i) q_1≠ q_3. (ii) q_1∉ c_2^∘ and q_3∉ c_1^∘. (iii) c_1+c_2 cannot contain a closed subarc c_1^' +c_2^'=c_1( q_1^',q_2) +c_2( q_2,q_1^') such that q_1^'∈ E_q and q_1^'∉{q_1,q_2 ,q_3}. The proof of (i) and (ii) is relatively easy, while the proof of (iii) is quite complicated, but it is very similar to the proof of Case 1 in Lemma <ref>. By Lemma <ref> and (<ref>) we have Neither c_1 nor c_2 is a closed arc, and ( c_1+c_2) ∩ E_q contains at most one point. To prove (i) we assume q_1=q_3. If q_1or q_2is contained in E_q, then by Claim <ref>, we have c_1^∘∩ E_q =∅, say, c_1∈𝔠^2, and then by Lemma <ref> L(c_1)≥δ_E_q, contradicting (<ref>). So we may assume that neither q_1 nor q_2 is contained in E_q. If c_1^∘∩ c_2^∘≠∅, then c_1 and c_2 contains three common points, which implies c_1=-c_2 and c_1+c_2 is folded at q_1∉ E_q, contradicting Lemma <ref>. Thus by Claim <ref>, either c_1 or c_2 is contained in ℭ ^2, but this, together with Lemma <ref>, implies L(c_1+c_2 )≥δ_E_q, contradicting (<ref>) once more. (i) is proved. To prove (ii) assume that it fails. Then we may assume q_3∈ c_1^∘. We first show that Either c_1^∘∩ E_q or c_2^∘∩ E_q is empty. Assume neither c_1^∘∩ E_q nor c_2^∘∩ E_q is empty. Then by Claim <ref> c_1^∘∩ c_2^∘∩ E_q is a singleton {q^∗} in E_q, q_2∉ E_q, and thus c_1 and c_2 contain three common points, but c_1+c_2 is not folded at q_2 by Lemma <ref>. 
Then by Claim <ref>, neither c_1nor c_2 is closed but c_1+c_2 is contained in the same circle and c_1+c_2 is more than that circle since q_3∈ c_1^∘, and by Lemma <ref>, every point of ( α_1+α_2) ^∘is a simple point of f. Then we can write c_1+c_2=C_1^'+C_2, where C_1^'=c_1( q_1,q_3)is the subarc of c_1 and C_2=c_1( q_3,q_2) +c_2( q_2 ,q_3)is a circle, and moreover, C_1^'∘ and C_2^∘ have neighborhoods in Σ_L_1 which are simple domains of Σ_L_1 (see Remark <ref> (ii)). Hence the new partition ∂Σ_L_1=C_1^'+C_2+c_3+c_4+⋯+c_m is still an ℱ( L,m)-partition of ∂Σ_L_1. But this contradicts Lemma <ref>, since L(C_2 )<L(c_1+c_2)<δ_E_q/2. Thus Claim <ref> holds. By Claim <ref> we may assume c_1^∘∩ E_q=∅. Then by Claim <ref> we have c_1∈ℭ^2 and then by Lemma <ref> c_1 contains two distinct endpoints in E_q. But this contradicts Claim <ref> and (ii) is proved. To prove (iii), assume it fails, say, that c_1^' and c_2 ^' satisfying the condition of (iii) exist. By (i) and (ii), c_1+c_2 cannot be contained in a circle and can not be folded at q_2. Hence, by Lemma <ref>, c_1^'+c_2^' is a simple closed arc. Let α_1^'=α_1^'( a_1^' ,a_2) and α_2^'=α_2^'( a_2,a_1^'') be subarcs of α_1 and α_2 such that c_1^'=( f,α_1^') and c_2^'=( f,α_2^')and let T_0 be the domain on S enclosed by c_1^'+c_2^'. Then by (<ref>) we have D(q_1^',δ_E_q/2)∩ E_q=T_0∩ E_q={ q_1^'} , and that c_1 or c_2 is strictly convex. On the other hand, c_1 and c_2 cannot externally tangent at q_2 since they both contain the two common points q_1^' and q_2. Hence, by definition of ℱ( L,m)-partition in Definition <ref>, we have f is homeomorphic in a neighborhood of ( α_1^'+α_2^') ^∘ in Δ, 0<∠( Σ_L_1,a_2) <2π, and we may assume c_2 is strictly convex. Similar to the discussion of Claim <ref>, we have the following. ( T_0,c_1^'+c_2^') cannot be a simple closed domain of Σ_L_1, say, there is no univalent branch g of f^-1 defined on T_0\{q^'} such that g( c_1^'+c_2^') =α_1^'+α_2^'. Now we have only two cases to discuss. Case A. c_1+c_2 is convex at q_2, say 0<∠( Σ_L_1,a_2) ≤π. Case B. c_1+c_2 is concave at q_2, say π <∠( Σ_L_1,a_2) <2π. The discussion of Case A is essentially a duplication of that for Case 1 (in the proof of Lemma <ref>), with a little difference, but we will write it down for completeness. Assume Case A occurs and let h_θ,q_1^'=φ_q_1 ^'^-1∘φ_θ∘φ_q_1^', where φ_q_1^' is a rotation of S moving q_1^' to 0 and φ_θ is the rotation w↦ e^-iθw of S. Then the rotation h_θ,q_1^'( c_1^'+c_2^') of c_1^'+c_2^' never meets any point of E_q other than q_1^', and then we can obtain a contradiction as in Case 1 (in the proof of Lemma <ref>). But notations here may have different meaning. For example, here T_0 is the domain on S enclosed by c_1^'+c_2^',while in the proof of Lemma <ref>, T_0 is the disk enclosed by the circle c_1. On the other hand one of the key points in Case 1 of Lemma <ref> is (<ref>), which follows from <ref>, but here <ref> may no longer hold for Σ_θ_0^i(the surfaces in (<ref>) and (<ref>)). But we can prove (<ref>) still holds after (<ref>)). The interior angle of T_0 at q_1^' and q_2 are both equal to ∠( Σ_L_1,a_2) . For θ∈ (0,∠( Σ_L_1,a_2) ), we introduce more notations: T_θ=h_θ,q_1^'( T_0) ,T_θ^'=T_0\T_θ,T_θ^''=T_θ\T_0, c_1,θ^'=h_θ,q_1^'( c_1^') ,c_2,θ^'=h_θ,q_1^'( c_2^') , q_2,θ=h_θ,q_1^'( q_2) ,q_2,θ^'=c_2,θ^'∘∩ c_1^'∘. 
Then q_2,θ^' gives the following partitions of c_1^' and c_2,θ^': c_1^'=c_11,θ^'+c_12,θ^'=c_1^'( q_1^',q_2,θ^') +c_1^'( q_2,θ^',q_2) , c_2,θ^'=c_21,θ^'+c_22,θ^'=c_2,θ^'( q_2,θ,q_2,θ^') +c_2,θ^'( q_2,θ^',q_1^') ; α_1^' has a partition α_1^'=α_11,θ^'( a_1^',a_2,θ^') +α_12,θ^'( a_2,θ^',a_2) =α_1^'( a_1^',a_2,θ^') +α_1^'( a_2,θ^',a_2) such that c_11,θ^'( q_1^',q_2,θ^') =( f,α_11,θ^'( a_1^',a_2,θ^') ) , c_12,θ^'( q_2,θ^',q_2) =( f,α_12,θ^'( a_2,θ^',a_2) ) ; and α_2=α_2(a_2,a_3) has a partition α_2=α_2^'( a_2,a_1^'') +α_2^''( a_1^'',a_3) =α_2( a_2,a_1^'') +α_2( a_1^'',a_3) such that c_2^'=c_2^'( q_2,q_1^') =( f,α_2^'( a_2,a_1^'') ) . The reader should be aware of that c_1j,θ^' are subarcs of c_1^' (not c_1,θ^'), but c_2j,θ^' are subarcs of c_2,θ^' (not c_2^'). It is clear that by Lemma <ref> (ii) and Claim <ref> when θ>0 and θ is small enough, f restricted to a neighborhood of α_12,θ^'+α_2^' in Δ is a homeomorphism, since α_12,θ^'+α_2^' is a subarc in α_1^'+α_2^' which tends to α_2^' as θ→0. Thus for small enough θ>0 we have the following claim similar to Claim <ref>: ( T_θ^',c_12,θ^'+c_2^') is a simple closed Jordan domain of Σ_L_1 such that -c_22,θ^'∘=-c_2,θ^'∘( q_2,θ^',q_1^')is the new boundary and c_12,θ^'+c_2^' is the old boundary. That is to say, there exist a Jordan domain D_θ^'⊂Δand an arc α_22,θ^'=α_22,θ^'( a_2,θ^',a_1^'') in Δ with α_22,θ^'∘⊂Δ, such that ∂ D_θ^'=α_12,θ^'( a_2,θ^',a_2) +α_2^'( a_2,a_1^'') -α_22,θ^'( a_2,θ^',a_1^'') , c_22,θ^'( q_2,θ^',q_1^') =( f,α_22,θ^'( a_2,θ^' ,a_1^'') ) , and that f restricted to D_θ^' is a homeomorphism onto T_θ^'. Let θ_0 be the maximal number in (0,∠( Σ_L_1 ,a_2) ] such that all θ∈( 0,θ_0) satisfy Claim <ref>. Then by Claim <ref>, θ_0<∠( Σ_L_1,a_2) . Repeating the the argument for Claims <ref>–<ref>, with a little difference, we will show the following Claims <ref>–<ref>: Except for that α_22,θ_0^'∘⊂Δ may fail, all other conclusions in Claim <ref> hold for θ_0: ( T_θ_0^',c_12,θ_0 ^'+c_2^') is still a simple closed Jordan domain of Σ_L_1 such that -c_22,θ_0^'∘ contains the the new boundary and c_12,θ_0^'+c_2^' is contained in the old boundary. That is to say, there exist a Jordan domain D_θ_0 ^'⊂Δand an arc α_22,θ_0^' =α_22,θ_0^'( a_2,θ_0^' ,a_1^'') in Δ, such that ∂ D_θ_0^'=α_12,θ_0^'+α_2^'-α_22,θ_0^', c_22,θ_0^'=( f,α_22,θ_0^') , and f restricted to D_θ_0^' is a homeomorphism onto T_θ_0^'. f has no branch value on c_22,θ_0^'∘and thus each component of α_22,θ_0^'\∂Δ has a neighborhood in Δ on which f is a homeomorphism. c_22,θ_0^'∘=( f,α_22,θ_0 ^') has to intersect ∂Σ_L_1, say, α_22,θ_0^'∘∩∂Δ≠∅. α_22,θ_0^'∘∩∂Δis a nonempty finite set { a_i_1,…,a_i_k} in {a_j}_j=1^m, a_1^'',a_i_1,…,a_i_k ,a_2,θ_0^'are arranged on ∂Δ anticlockwise and divide α_22,θ_0^'∘ into k+1 open arcs, each of which has a neighborhood in Δ on which f is a homeomorphism. We repeat the argument for completeness. It is obvious by Claim <ref> that f^-1 has a univalent branch g defined on T_θ_0 ^'\ c_22,θ_0^'∘=∪_θ∈ (0,θ_0)T_θ^' with α_12,θ_0 ^'( a_2,θ_0^',a_2) =g( c_12,θ_0^') ⊂α_1and α_2 ^'=g( c_2^') ⊂α_2. By Lemma <ref>, g can be extended to be a univalent branch of f^-1 defined on T_θ_0^', and thus Claim <ref> holds for D_θ_0^'=g( T_θ _0^') =g( T_θ_0^'). By (<ref>), c_22,θ_0^'∘∩ E_q =∅, which together with Lemma <ref> implies Claim <ref>. Let Δ_θ_0=Δ\D_θ_0^'. 
If α_22,θ_0^'∘∩∂Δ=∅, then ( f,Δ_θ_0) is a surface in ℱ( L^',m+2) ,with L^'=L-L(c_12,θ_0^'+c_2^')+L(c_22,θ_0 ^'), and thus by Claim <ref> and Lemma <ref>, for every small enough ρ>0, ( f,Δ_θ_0) contains the simple and closed Jordan domain ( T_θ_0+ρ ^'\ T_θ_0^',( c_12,θ_0+ρ^'\ c_12,θ_0^') +c_22,θ_0 ^') such that ( c_12,θ_0+ρ^'\ c_12,θ_0^') +c_22,θ_0^' is the old boundary, say, an arc of ( f,∂Δ_θ_0 ) , and c_22,θ_0+ρ^'∘ is the new boundary. Then we have that ( T_θ_0^',c_12,θ _0^'+c_2^') can be extended to a larger simple closed Jordan domain ( T_θ_0+ρ^' ,c_12,θ_0+ρ^'+c_2^') of Σ_L_1 , and that Claim <ref> holds for all θ∈0,θ_0 +ρ), contradicting the maximal property of θ_0. Thus Claim <ref> holds. It is clear that α_22,θ_0^'∩( α _12,θ_0^'+α_2^') ={a_2,θ_0 ^',a_1^''}, which together with that c_22,θ _0^' is strictly convex and Lemma <ref>, implies that α_22,θ_0^'∘∩( ( ∂Δ) \( α_12,θ_0^'+α _2^') ) is a subset of {a_j}_j=1^m, and so is α_22,θ_0^'∘∩∂Δ. This, together with Claims <ref> and <ref>, implies Claim <ref>. For simplicity, we assume that α_22,θ_0^'∘ ∩∂Δ={a_i_1} is a singleton. Then q_i_1 gives partitions c_22,θ_0^'=c_221,θ_0^'( q_2,θ_0 ^',q_i_1) +c_222,θ_0^'( q_i_1 ,q_1^') , c_2,θ_0^'=𝔠_21,θ_0( q_2,θ_0 ,q_i_1) +c_222,θ_0^'( q_i_1 ,q_1^') , where 𝔠_21,θ_0=c_21,θ_0^'( q_2,θ _0,q_2,θ_0^') +c_221,θ_0^'( q_2,θ_0^',q_i_1) . We can cut ( T_θ_0^'\ c_22,θ_0^',c_2,θ_0^') , the simple closed Jordan domain of Σ_L_1 with new boundary c_2,θ_0 ^'∘, from Σ_L_1 and sew ( T_θ_0^'',c_11,θ_0^') , to Σ_L_1\( T_θ_0^' \ c_22,θ_0^',c_2,θ_0^') along c_11,θ_0^'=c_1∩T_θ_0, to obtain two surfaces Σ_θ_0^1 and Σ_θ_0^2, linked at q_i_1, such that ∂Σ_θ_0^1=( c_1\ c_1^') +c_1,θ_0^'+𝔠_21,θ_0+c_i_1+⋯ +c_m, ∂Σ_θ_0^2=c_222,θ_0^'+( c_2\ c_2^') +c_3+⋯+c_i_1-1. It is clear that the total number of terms in the above two partitions is m+3, q_i_1 is contained in T_0, and q_1,q_2,θ_0,q_3 are outside T_0 since T_0 is convex. Therefore i_1≠1,2,3,and i_1≥4, say, the first partition contains at least four terms and the second partition contains at least three terms, which implies that each of the partitions has at most m terms. Hence the above two partitions are both ℱ( L,m) partitions, by Claim <ref>. It is clear that here (<ref>) still holds and, by (<ref>), we have n( Σ_L_1) =∑_j=1^2n( Σ_θ_0^j) . Then Σ_L_1 is decomposable in ℱ( L,m) , contradicting Lemma <ref>, and (iii) is proved in Case A. Now we assume Case B occurs. By (<ref>), Claim <ref> and the assumption of Case B, both c_1^' and c_2^' are strictly convex. Then we may further assume L(c_1^')≥ L(c_2^'). Let l=L(c_1^')+L(c_2^'), and let T_0 be still the domain enclosed by c_1^'+c_2^'. Let I=q_1^'q_2 and let C be the strictly convex circle passing through q_1^' and q_2 whose length is land whose arc c_x_0 from q_1^' to q_2 is longer than its complementary, with L(c_x_0)=x_0, and let c_l-x_0^'=C\ c_x_0^∘. Then c_x_0 is on the right hand side of the great circle determined by I=q_1^'q_2. Recall Definition <ref> of lens and let 𝔇_x=𝔇( I,x,l-x) =𝔇( I,c_x,c_l-x^') be the lens with ∂𝔇_x=c_x-c_l-x^', where c_x=c_x( q_1^',q_2) and c_l-x^' =c_x^'( q_2,q_1^') are convex circular arcs with L(c_x)=x and L(c_l-x^')=l-x. By Corollary <ref>, we have the following Claim <ref> and <ref>. The area A( 𝔇( I,x,l-x) ) strictly increases for x∈ l/2,x_0]. The lune 𝔇_x^'=𝔇^' (I,c_x)=𝔇^'(I,x) strictly increases, and the lune 𝔇_x^''=𝔇^'(-I,c_l-x^')=𝔇^'(-I,l-x) strictly decreases, for all x∈ l/2,x_0] (see Definition <ref> for the notation 𝔇 ^'(I,·)). 
That is to say, 𝔇_x^' \ I⊂𝔇_x^'^'and 𝔇_x^'^''\ I⊂𝔇 _x^'' when l/2≤ x<x^'≤ x_0. Since c_1^' and c_2^' are the circular arcs with the same endpoints, we have For any circular arc γ contained in T_0 from q_1^' to q_2, q_1∈γ if and only if γ is contained in the circle determined by c_1 (three points determine a unique circle on S). Assume L(c_1^')=x_0^'. Since we assumed L(c_1^')≥ L(c_2^'), we have x_0^'∈ l/2,x_0]. For x∈(x_0^',x_0] let T_x^'=𝔇_x_0^'^''\𝔇_x^'' and T_x^''=𝔇_x^'\𝔇_x_0^' ^'. Then by Lemma <ref> we have the following result similar to Claims <ref> and <ref>: For every x∈(x_0^',x_0] so that x-x_0^' is small enough, there exist a simple arc α_l-x^' =α_l-x^'( a_2,a_1^'') in Δ, with α_l-x^'∘⊂Δ and c_l-x^'=( f,α_l-x^') , and a Jordan domain D_x^' in Δ with ∂ D_x^'=α _2^'-α_l-x^', such that f restricted to D_x^' is a homeomorphism onto T_x^'(α_2^' is defined just before (<ref>)). In other words, for each x∈(x_0^',x_0] so that x-x_0^' is small enough, ( T_x^',c_2^') is a simple closed Jordan domain of Σ_L_1 with new boundary c_l-x^'∘ and old boundary c_2^'=c_l-x_0^' ^'. Then for every x satisfying Claim <ref>, we can cut ( T_x^',c_l-x_0^'^') from Σ_L_1 and sew ( T_x^'' ,c_x) to Σ_L_1\( T_x^',c_l-x_0^'^') along c_x_0^' to obtain a surface Σ_x=( f_x,Δ) in ℱ_r( L,m+2) . It is clear that there exists a maximum x^∗∈(x_0^',x_0] such that for every x∈ x_0^',x^∗), Σ_x is a well defined surface in ℱ_r( L,m+2). Then either x^∗=x_0 or x^∗<x_0. As the argument for Claims <ref> and <ref>, with a little difference, we can show the following Claims <ref> and <ref>: Except for that α_l-x^∗^'∘⊂Δ may fail, all other conclusions of Claim <ref> hold for x^∗: There exist a simple arc α_l-x^∗^'=α_l-x^∗ ^'( a_2,a_1^'') in Δ, with c_l-x^∗^'=( f,α_l-x^∗^') , and a Jordan domain D_x^∗^' in Δ with ∂ D_x^∗^'=α_2^'-α_l-x^∗^', such that f restricted to D_x^∗^' is a homeomorphism onto T_x^∗^'. In other words, ( T_x^∗^',c_2^') is still a simple closed Jordan domain of Σ_L_1 with the new boundary contained in c_1-x^∗^'∘ and the old boundary containing c_l-x_0^'^'. f has no branch value on c_l-x^∗^'∘and thus α_l-x^∗^'∘\∂Δ has a neighborhood in Δ on which f is a homeomorphism. By Claim <ref> f^-1 has a univalent branch g defined on T_x^∗^'\α_l-x^∗^'∘=∪_x∈(x_0^',x^∗)T_x^' with g( c_2^') =g( c_l-x_0^'^') =α_2^'. By Lemma <ref>, g can be extended to T_x^∗^', and then Claim <ref> follows. Claim <ref> is obvious, since c_l-x^∗^'∘∩ E_q=∅, both α_l-x^∗^' and c_l-x^∗ ^' are simple arcs and c_l-x^∗ is circular. Now, there are only three possibilities: Case BA. x^∗<x_0. Case BB. x^∗=x_0and α_l-x^∗ ^'∘∩∂Δ≠∅. Case BC. x^∗=x_0and α_l-x^∗ ^'∘∩∂Δ=∅. Assume Case BA occurs. Then as the discussion for Claims <ref> and <ref>, we can show c_l-x^∗^'∘=( f,α_l-x^∗ ^'∘) has to intersect ∂Σ_L_1, say, α_l-x^∗^'∘∩∂Δ≠∅. α_l-x^∗^'∘∩∂Δis a nonempty finite set { a_i_1,…,a_i_k} in {a_j}_j=1^m, a_1^'',a_i_1,…,a_i_k ,a_2are arranged on ∂Δ anticlockwise and divide α_l-x^∗^'∘ into k+1 open arcs, each of which has a neighborhood in Δ on which f is a homeomorphism. But the proof of Claim <ref> is simpler: Let Δ_x^∗ =Δ\D_x^∗^'. If α_l-x^∗ ^'∘∩∂Δ=∅, then by Claim <ref>, ( f,Δ_x^∗) is a surface in ℱ( L^',m+1)with L^'=L_1 -L(c_2^')+L(c_l-x^∗^'), and thus by Lemma <ref>, f restricted to a neighborhood of α_l-x^∗ ^' in Δ_x^∗ is a homeomorphism, in other words, ( f,Δ_x^∗) contains a simple closed Jordan domain ( K_ε,c_l-x^∗ ^') with old boundary c_l-x^∗^', say, c_l-x^∗^' is an arc of ( f,∂Δ_x^∗ ) , where K_ε=𝔇_x^∗^''\𝔇_x^∗+ε^'' for every small enough ε>0. 
Then ( T_x^∗+ε^',c_2^') =( T_x^∗^'∪K_ε,c_2^') is a closed and simple Jordan domain of Σ_L_1, contradicts the maximality of x^∗. Thus Claim <ref> holds. It is clear that α_l-x^∗^'∩α_1^' ={a_2,a_1^''} and, on the other hand, c_l-x^∗ ^' is strictly convex, since x_0^'<x^∗<x_0. Therefore, by Lemma <ref>, we have α_l-x^∗^'∘∩∂Δ⊂{a_j}_j=1^m. Thus α_l-x^∗ ^'∘∩∂Δ is consisted of some points of {a_j}_j=1^m, and then Claim (<ref> holds. When α_2,θ_0^'∘∩∂Δ={a_i_1 }∈{a_j}_j=1^mis a singleton, we can obtain a contradiction as in Case A, and the same argument applies to Case BB. When α_2,θ_0 ^'∘∩∂Δ contains more than one point, the argument is similar. We have obtained a contradiction in Cases BA and Case BB. Assume Case BC occurs. Then c_x^∗+c_l-x^∗^'=c_x_0 +c_2^' is the circle C and it is clear that Σ_x_0 =Σ_x^∗ is a well defined surface with L(∂Σ_x_0)=L(∂Σ_L_1). For all x∈0,θ_0], since L(c_x+c_l-x)=l<δ_E_q/2, (<ref>) implies that ( c_x +c_l-x) \{q_1^'} never meets E_q, and then we have n( Σ_x_0) =n( Σ_L_1) . On the other hand, by Claim <ref>, we have A(Σ_x_0)>A(Σ_L_1). Therefore R(Σ_x_0)>R(Σ_L_1), and moreover ∂Σ_x_0 has the partition ∂Σ_x_0=C_1+C_2+C_3+c_3+⋯+c_m, where C_1=c_1\ c_1^',C_2=c_x_0^'+c_l-x_0^',C_3=c_2\ c_2^'. It is clear that C_2 is a simple circle such that C_2^∘ is the old boundary of a simple domain of Σ_x_0.Thus (<ref>) is an ℱ( L,m+1) partition of ∂Σ_x_0. It is clear that Σ_x_0 has no branch value outside E_q, thus Σ_x_0∈ℱ_r( L,m+1) . It is clear that L(C_1)+L(C_2) =L(c_1\ c_1^')+l=L(c_1\ c_1^')+L(c_1^')+L(c_2^') <L(c_1)+L(c_2)<δ_E_q/2. By (<ref>), (<ref>) and (<ref>), we can repeat the argument for Case 1 to show that Σ_x_0, as a surface in ℱ_r( L,m+1) is decomposable in ℱ( L,m) (in Case 1 we in fact proved Σ_L_1∈ℱ ( L,m) is decomposable in ℱ( L,m-1), by (<ref>)). This implies that Σ_L_1 is also decomposable in ℱ( L,m) by (<ref>) and (<ref>). But this contradicts Lemma <ref>, and thus Case BC can't occur. We have proved (iii) in any case and the lemma has been proved completely. Now we can easily prove Theorem <ref>. It is clear that there are at most [ L/( δ_E_q/4) ] +1=[ 4L/δ _E_q] +1 terms in (<ref>) which have length ≥δ _E_q/4. Thus, we may assume, after a permutation of the subscripts like ( 1,2,…,m) ↦( j_0,j_0+1,…,m,1,2,… ,j_0-1) , that L(c_j)<δ_E_q/4,j=1,2. Then, by Lemma <ref>, we have the following (A) Neither c_1 nor c_2 is closed. Assume Σ_L_1∉ℱ_r( L,m-1) . For j=1 or 2, if c_j is contained in ℭ^2, then by Lemma <ref> we have ∂ c_j=E_qand thus L(c_1)≥δ_E_q, which contradicts (<ref>). Thus neither c_1 nor c_2 is contained in ℭ^2, which, together with (A), implies c_j^∘∩ E_q≠∅ for j=1 and 2. Then by (<ref>) we have c_1∩ E_q=c_2∩ E_q=c_1^∘∩ E_q=c_2^∘∩ E_q={q_1^'} for some q_1^'∈ E_q. This contradicts Lemma <ref> (iii). This contradiction comes from Condition <ref> which assumes Σ_L_1∉ℱ_r( L,m-1), and so Condition <ref> can't be satisfied. Thus we have Σ_L_1∈ℱ _r( L,m-1), and Theorem <ref> is proved. § PROOF OF THEOREM <REF> We first prove the following result. Let Σ be a surface of ℱ( L) and assume that ∂Σ has a partition ∂Σ=Γ+C such that C=C( p_1,p_2) is a simple circular arc with C∩ E_q⊂{p_1,p_2} (it is permitted that p_1=p_2). If C( p_1,p_2) cannot be contained in any open hemisphere on S, then there exists a surface Σ^' in ℱ( L) such that ∂Σ^' has the partition ∂Σ^'=Γ+γ, where γ is a simple polygonal path γ=p_1 𝔞_1𝔞_2…𝔞_sp_2 with γ^∘∩ E_q={𝔞_1,𝔞_2,… ,𝔞_s}, L(γ)≤ L(C), H(Σ^')>H(Σ), R(Σ^')+4π/L(∂Σ^')>R(Σ)+4π/L(∂Σ), and γ^∘∩ E_q≠∅ if d( p_1 ,p_2) =π. 
We first show the following claim. There exists a closed polygon T on S with ∂ T=-C+γ such that γ satisfies (<ref>), (<ref>) and (<ref>), and moreover, ( T\γ) ∩ E_q=( T^∘∪ C^∘) ∩ E_q=∅. First assume C=C( p_1,p_2) is not contained in any open hemisphere on S. Then C( p_1,p_2) is half, or a major arc, of a great circle c on S, oriented by C. Assume C( p_1,p_2) is half of a great circle on S. Then p_1 and p_2 are antipodal. Let T be the largest closed biangular domain on S so that, ∂ T=-C+γ, -C is one of the two edges of T, say, T is on the right hand side of C, γ is the other edge of T from p_1 to p_2, and (<ref>) holds. Since #E_q=q≥3, we have γ∩ C={p_1,p_2} and γ^∘∩ E_q ≠∅. Then T and γ satisfies Claim <ref> with L(γ)=L(C). Assume that C( p_1,p_2) is a major arc of the great circle c and let S^' be the closed hemisphere enclosed by -c. Then d( p_1,p_2) <π. Since C∩ E_q⊂{p_1 ,p_2}, the convex hull K of ( S^'∩ E_q) ∪{p_1,p_2}is a polygon in S^' with p_1p_2=( ∂ K) ∩ c⊂ K∩ S^'=K⊂ S^'∘∪p_1p_2 and the vertices of K are all in E_q, except the two points p_1 and p_2. But K is just the line segment p_1p_2 on S when S^'∘∩ E_q=∅. Then we have two possibilities to discuss. First consider the case S^'∘∩ E_q=∅ and let T=S^'. Then K=p_1p_2, T and γ=p_1p_2=p_1𝔞_1…𝔞_sp_2 with {𝔞_1,…,𝔞_s}=γ^∘∩ E_q satisfies Claim <ref>. It is possible that γ^∘∩ E_q=∅. Second consider the case S^'∘∩ E_q≠∅. In this case K^∘ is a Jordan domain with ∂ K=-γ+p_1p_2, γ=p_1𝔞_1…𝔞 _sp_2 and {𝔞_1,…,𝔞_s}=γ^∘∩ E_q=[ ( ∂ K) \p_1p_2] ∩ E_q≠∅. On the other hand, γ is a concave polygonal path in S^' whose two endpoints are on ∂ S^'=-c, which implies L(γ)<L(C). Then T=( S^'\ K) ∪γ satisfies Claim <ref>. By Claim <ref>, we can sew Σ and T along C to obtain a surface Σ^' satisfying (<ref>)–(<ref>) and (<ref>). On the other hand, (<ref>) implies n( Σ^') =n( Σ) +n(T^∘)+#C^∘∩ E_q=n( Σ) and we have A(Σ^')=A(Σ)+A(T)>A(Σ). Therefore we have (<ref>) and (<ref>). It is clear that Σ^'∈ℱ( L). We have proved the lemma completely. Lemma <ref> has a direct corollary: Let Σ be an extremal surface of ℱ( L) and assume that ∂Σ contains an arc C=C( p_1,p_2) such that C is an SCC arc with C∩ E_q ⊂{p_1,p_2} (it is permitted that p_1=p_2), then C is contained in some open hemisphere on S. Instead of proving Theorem <ref>, we prove the following theorem which implies Theorem <ref> directly. Let L∈ℒ be given. Then the following conclusions (A)–(C) hold. (A) There exists a precise extremal surface of ℱ_r( L) , and there exists a positive integer m_0=m_0( L,q) , depending only on L and q, such that every precise extremal surface of ℱ_r( L) is precise extremal in ℱ( L) ,ℱ_r( L,m) , and ℱ( L,m) , respectively, for every integer m≥ m_0. (B) For any precise extremal surface Σ_0=( f_0,Δ) of ℱ_r( L) , there exists a positive integer n_0 such that ∂Σ_0 has an ℱ ( L,n_0)-partition ∂Σ_0=C_1( q_1,q_2) +C_2( q_2 ,q_3) +⋯+C_n_0( q_n_0,q_1) satisfying the following (B1)–(B4): (B1) If n_0>1, then, for j=1,2,…,n_0, C_j^∘∩ E_q=∅and ∂ C_j={q_j,q_j+1}⊂ E_q. If n_0=1, then either C_1∩ E_q=∅ or C_1∩ E_q is the singleton {q_1}. (B2) Each C_j is contained in an open hemisphere S_j on S, j=1,2,…,n_0. (B3) At most one of C_j,j=1,…,n_0, is a major circular arc (a closed circular arc is regarded major). (B4) All C_j,j=1,…,n_0, have the same curvature. 
(C) There exists an integer d^∗=d_L,q depending only on L and qand there exists a precise extremal surface Σ^∗ of ℱ( L) such that _maxΣ^∗≤ d^∗ (see (<ref>) for _max), and either Σ^∗ is a simple closed disk in S\ E_q, or ∂Σ^∗ has a partition ∂Σ^∗=C_1^'(q_1^',q_2^' )+C_2^'( q_2^',q_3^') +… +C_n_0^'^'( q_n_0^'^',q_1^') , with n_0^'>1, such that ∂ C_j^'={q_j^',q_j+1^'}⊂ E_q, q_j^'≠ q_j+1^', C_j^∘∩ E_q=∅, for all j=1,2,…,n_0^'. Let L∈ℒ. For sufficiently large m_0 and each m≥ m_0, by Theorem <ref> there exists a precise extremal surface Σ_m of ℱ_r( L,m) . Assume L(∂Σ_m)=L_m. Then by Theorem <ref>, we have {Σ_m}_m=1^∞ ⊂ℱ_r( L,m_0) , andfor every m≥ m_0, since ℱ_r( L,m_0) ⊂ℱ _r( L,m) , Σ_m is an extremal surface of ℱ_r( L,m_0) . Therefore we have, for everym≥ m_0, L_m≥ L_m_0, H(Σ_m)=H(Σ_m_0), which with the relation Σ_m_0∈ℱ_r( L,m)implies that Σ_m_0 is an extremal surface of ℱ _r( L,m) as well, and thus L_m_0≥ L_m. Therefore we have L_m=L_m_0 and H( Σ_m) =H( Σ_m_0) , m=m_0,m_0+1,… For each Σ∈ℱ_r(L), there exists an integer m>m_0 such that Σ∈ℱ_r(L,m). Then H(Σ)≤ H(Σ _m)=H(Σ_m_0), and in consequence Σ_m_0 is an extremal surface of ℱ_r(L). Assume that Σ^' is any other extremal surface of ℱ_r(L). Then for some positive integer m^'>m_0, Σ^' is an extremal surface of ℱ _r(L,m^') and thus we have L(∂Σ^')≥ L_m=L_m_0, and therefore Σ_m_0 is precise extremal in ℱ_r( L) . We in fact proved that Σ_m is precise extremal in ℱ _r( L) for every m≥ m_0. By Corollary <ref>, each Σ_m is precise extremal in ℱ( L,m) as well for each m≥ m_0. On the other hand we have ℱ( L) =∪_m=1^∞ℱ( L,m), ℱ _r( L) =∪_m=1^∞ℱ_r( L,m), ℱ_r( L,m) increases as m increases, and so does ℱ( L,m) . Thus every precise extremal surface of ℱ_r( L) is precise extremal in ℱ( L), ℱ_r( L,m) and ℱ( L,m) , for each m≥ m_0; and (A) is proved. Let Σ_0=( f_0,Δ) be any precise extremal surface of ℱ_r( L) with L(∂Σ_0)=L_0. Then (A) implies: Σ_0=( f_0,Δ) is a precise extremal surface of every ℱ_r( L,m) and every ℱ( L,m) for m≥ m_0 and L_0=L( ∂Σ_0) =L_m_0. Then ∂Δ and ∂Σ_0 have corresponding ℱ( L,m_0)-partitions ∂Δ=α_1( a_1,a_2) +α_2( a_2,a_3) +⋯+α_m_0( a_m_0,a_1) , ∂Σ_0=c_1( q_1,q_2) +c_2( q_2 ,q_3) +⋯+c_m_0( q_m_0,q_1) , with c_j=( f,α_j) ,j=1,…,m_0. We will show that ∂Σ_0 is circular at each q_j∈{q_j}_j=1^m_0 \ E_q, say, c_j-1+c_j is circular at q_j if q_j∉ E_q. For sufficiently small ε>0, q_j-ε is a point in c_j-1^∘, which tends to q_j as ε→0. Then for sufficiently small ε>0, we have an ℱ(L,m_0 +1)-partition ∂Σ_0=c_1+⋯+c_j-2+c_j-1^'+c_j-1^''+c_j+⋯+c_m_0, where c_j-1^'=c_j-1(q_j-1,q_j-ε), c_j-1 ^''=c_j-1(q_j-ε,q_j). Then c_j-1^'+c_j-1^''=c_j-1 and for sufficiently small ε>0, c_j-1^''∈ℭ^2=ℭ^2( Σ _0)(see Definition <ref> for the notation ℭ^j). Since, by Claim <ref>, Σ_0 is also a precise extremal surface of ℱ_r(L,m_0+1) and (<ref>) is an ℱ( L,m_0+1) partition of ∂Σ_0, Lemma <ref> implies that c_j-1^''+c_j is circular at q_j if q_j∉ E_q and thus c_j-1+c_j is circular at q_jif q_j∉ E_q. Therefore, we conclude that ∂Σ_0 is circular everywhere outside E_q. By Lemma <ref>, f_0 is locally homeomorphic in [ Δ\ f_0^-1(E_q)] ∪[ ( ∂Δ) \[ f_0^-1 (E_q)∩{a_j}_j=1^m_0] ] . Thus we have: For each a∈∂Δ, if a∉ f_0^-1(E_q), then a has a neighborhood α_a in ∂Δ such that ( f_0,α_a) is an SCC arc and f_0 restricted to a neighborhood of α_a in Δ is a homeomorphism. Then for sufficiently large m, ∂Σ_0 has an ℱ (L,m)-partition ∂Σ_0=∑_j=1^m𝔠_j such that 𝔠_j∈ℭ^1( Σ_0)(see Definition <ref>). 
Since Σ_0 is also precise extremal in ℱ_r(L,m), by Lemma <ref> all terms 𝔠_j have the same curvature. Thus we have The curvature of ∂Σ_0=( f_0,∂Δ) is a constant function of z∈( ∂Δ) \ f_0^-1(E_q) Let n_1=#( ∂Δ) ∩ f_0^-1(E_q). Then there are three possibility. Case 1. n_1=#( ∂Δ) ∩ f_0^-1(E_q)=∅. Case 2. n_1=#( ∂Δ) ∩ f_0^-1(E_q)=1. Case 3. n_1=#( ∂Δ) ∩ f_0^-1(E_q)≥2. Assume Case 1 occurs. Then C=f(∂Δ) is a circle with C∩ E_q=∅ and f_0:∂Δ→ C is a CCM of degree kfor some integer k≥1. In this case, ∂Σ_0 has a partition ∂Σ_0=C_1+C_2+…+C_k, such that every C_k is a simple closed path and C_k=C. Then by Corollary <ref> C is contained in an open hemisphere on S. Thus C contains a major circular arc with length <π. We will prove k=1. We cannot use Lemma <ref> directly, since it is for terms of ℱ( L,m)-partitions in ℭ^1, and every arc of ℭ^1 has length <π. But it applies in this way: If k>1, then ∂Σ_0 has an ℱ_r( L,k+2) partition ∂Σ_0=C_1^'+c_1^'+C_2^'+c_2^'+C_3+…+C_k so that C_1^' and C_2^' are major circular arcs, each of which has length <π, thus C_1^' and C_2^' are both in ℭ^1( Σ_0) , contradicting Lemma <ref>. Thus (B) is proved in Case 1. Assume Case 2 occurs and let ( ∂Δ) ∩ f_0 ^-1(E_q)={a_1}. Then by Claim <ref>, ∂Σ _0=(f,∂Δ) is a simple circle C_1=C_1( p_1 ,p_1) with q_1=f(a_1), and by Corollary <ref> C_1 is contained in an open hemisphere on S. Thus (B) also holds in Case 2. Assume that Case 1 or 2 occurs. Then we have proved that ∂Σ_0 is a simple circle C_1 with #C_1∩ E_q≤1. In this situation, we in fact can show that (C) holds. Let T be the closed disk enclosed by C_1. Then we can sew Σ_0 and S\ T along C_1 to obtain a closed surface F=( f,S) . Then we have L(∂ T)=L(Σ_0), A(Σ_0)=A(F)-A(S\ T)=A(F)+A(T)-4π and n( Σ_0) =n( F) -n( S\ T^∘) =n( F) -n( S) +n( T) =n( F) -q+n( T) . Note that n( T) =#T^∘∩ E_q, not #T∩ E_q. Therefore we have R(Σ_0)=R(F)+R(T)+8π. By Lemma <ref> we have R(F)≤-8π, and thus H( Σ _0) ≤ H(T). Then H( Σ_0) =H(T), say, both T and Σ_0 are precise extremal surface of ℱ( L) . If the diameter of T is equal to, or less than δ_E_q, then there is another disk Σ^∗ in S\ E_q congruent to T, with H(T)≤ H(Σ^∗). But T is extremal in ℱ( L) . We have H(T)=H(Σ^∗), and thus both Σ^∗ and T are precise extremal in ℱ( L) . If the diameter of T is larger than δ_E_q, then by moving T continuously we can show that there is also another disk Σ^∗ on S congruent to T such that n( Σ^∗) ≤n( T) but ∂Σ^∗ contains at least two points of E_q, and then ∂Σ^∗ has a partition (<ref>) satisfying (<ref>) and H(T)≤ H(Σ^∗), which implies that Σ^∗ and T are both precise extremal in ℱ( L). Therefore (C) holds. We have proved (B) and (C) in Cases 1 and 2. Assume Case 3 occurs. Then f_0^-1(E_q) divides ∂Δ into n_1 arcs and thus ∂Σ_L_0 has an ℱ( L,n_1)-partition ∂Σ_0=C_1+C_2+⋯+C_n_1 which satisfies (B1) and (B4) for n_0=n_1. By Corollary <ref>, (B2) holds true. Assume that (<ref>) does satisfies (B3). Then by (B2), as the argument for (<ref>), the partition (<ref>) has a refined ℱ ( L,n_1+2)-partitionwhich contains two terms of length <π which are major arcs and of class ℭ^1. But this contradicts Lemma <ref> again. Thus (B3) holds with the partition (<ref>), and (B) is proved completely. Now we begin to prove (C) for Case 3. By (B3) we may assume C_2,C_3,…,C_n1 all satisfy (<ref>), C_1 may or may not satisfy (<ref>). We first show the following. In Case 3, Σ_0 can be deformed to be a surface F_1 ∈ℱ( L,n_1+k) such that F_1 has a partition of the form (<ref>) satisfying (<ref>), with n_0^'=n_1+k for some integer k≥0. 
If C_1 also satisfies (<ref>), then there is nothing to prove. So we assume C_1 is closed. Then by (B2) L(C_1)<2π, and by (B1) and Corollary <ref> we can find a precise extremal surface F_1 in ℱ_r( L) , so that ∂ F_1 has an ℱ( L,n_1) partition ∂ F_1=C_1^'+C_2+⋯+C_n_1 which is the same as (<ref>), except that C_1 is replaced by a rotation C_1^' of C_1with C_1^'∩ E_q ={q_1,q_2^',q_3^',…,q_k^'} containing k points arranged on C_1^' anticlockwise, for some k≥2. Then C_1^' can be divided into k arcs by { q_j^'} _j=1^k with q_1^'=q_1, and thus ∂ F_1 has an ℱ( L,n_0^')-partition ∂Σ=∂ F_2=C_1^'( q_1,q_2^') +C_1^'( q_2^',q_3^') +⋯+C_1^'( q_k^',q_1) +C_2( q_1,q_2) +⋯+C_n_1( q_n_1,q_1) satisfying (<ref>) with n_0^'=n_1+k. By Theorem <ref>, there exists a positive integer d^∗ depending only on n_0^'=n_1+k and q, which in fact depends only on L and q since n_0^'<L_0/δ_E_q, and there exists a surface Σ^∗ in ℱ_r( L,n_0^') such that H(Σ^∗)≥ H(F_1),L(∂Σ^∗)≤ L(∂ F_1), the second equality holding only if ∂Σ^∗=∂ F_1. Then Σ^∗ is a precise extremal surface of ℱ_r( L,n_0^') and thus L(∂Σ^∗)≥ L(∂ F_1), which implies L(∂Σ^∗)=L(∂ F_1) and thus ∂Σ^∗=∂ F_1, which has the ℱ( L,n_0^')-partition satisfying (<ref>), and (C) is proved in Case 3. § PROOF OF THEOREMS <REF>, <REF> AND <REF> In this section we complete the proof of our second and third main theorems. Let 𝒮_1 be the space that satisfies Definition <ref> (1)–(4). We denote by 𝒮_1^( 5) ,𝒮_1^(6),𝒮_1^(7),𝒮_1^(8) the subspaces of 𝒮_1 satisfying (5), (6), (7) and (8) of Definition <ref> respectively. Then we have 𝒮_0=𝒮_1^( 5) ∩𝒮_1^(6)∩𝒮_1^(7)∩𝒮_1^(8). We first introduce a procedure to construct a surface for given boundary. (Standard solution of surfaces with known boundary) Let Γ be a closed curve on S with the partition Γ=( f,∂Δ) =C_1( p_1,p_2) +⋯+C_q^'( p_q^',p_1) ,q^'≤ q. satisfying Definition <ref> (1)–(3). Then d( p_j ,p_j+1) <π and p_jp_j+1 is well defined. We can construct a surface Σ_Γ satisfying the following condition. Σ_Γ is contained in 𝒮_1∩ℱ_r and that ∂Σ_Γ=Γ. The surface Σ_Γ can be obtained as the output when we input Γ and execute the following procedure. For each j=1,2,…,q^', let K_j=𝔇^'( p_jp_j+1,C_j) (see Definition <ref>), which is the closed convex lune enclosed by the arc C_j=C_j( p_j,p_j+1) and its chord p_j+1p_j, say ∂ K_j=C_j-p_jp_j+1. If C_j is the line segment p_jp_j+1 for some j, then K_j^∘=∅ and we just set K_j=p_jp_j+1. If C_j is a major arc of a great circle on S for some j, then by definition K_j is the closed hemisphere enclosed by the great circle C_j-p_jp_j+1=C_j+p_j+1p_j. Note that K_q^'=𝔇^'( p_q^'p_1,C_q) , say p_q^'+1=p_1. Let l_2=p_1p_2 and l_q^'=p_1 p_q^', which are well defined since d( p_j,p_j+1) <π for j=1,…,q^',p_q^'+1=p_1. For each j=3,…,q^'-1, let l_j=p_1p_j if p_1p_j is well defined, say, p_j is not the antipodal point p_1^∗ of p_1, or let l_jbe the straight line segment p_1p_j-1p_j if p_j=p_1^∗, which is half of the great circle from p_1 to p_1^∗ passing through p_j-1. Since p_1,…,p_q^' are distinct each other, if p_j=p_1^∗ for some jof 3,…,q^'-1, we have that l_i=p_1p_i is well defined for each i≠ j with 3≤ i≤ q^'-1 and p_j-1p_j is also well defined. When q^'=2, let T_2 be the surface whose interior is the simple domain S\ l_2 and boundary is l_2-l_2. When q^'≥3, for each j=2,…,q^'-1, we define T_j to be the closed triangular domain enclosed by l_j+p_jp_j+1-l_j+1,j=2,…,q^'-1. Each T_j,j=2,…,q^'-1( q^'≥3) , is a simple surface in the sense that its interior is a simple nonempty domain on S. 
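As a concrete illustration (this is merely the special case q^'=3 of the procedure, anticipating the sewing steps described below; the union notation is informal shorthand for those sewings): for q^'=3 the only triangle is T_2, enclosed by l_2+p_2p_3-l_3 with l_2=p_1p_2 and l_3=p_1p_3, so that P(p_1,p_2,p_3)=T_2, and Σ_Γ is obtained by sewing the three lunes to P along its three sides,
\[
\partial T_2=\overline{p_1p_2}+\overline{p_2p_3}-\overline{p_1p_3},
\qquad
\Sigma_\Gamma \;=\; T_2\;\cup_{\overline{p_1p_2}}K_1\;\cup_{\overline{p_2p_3}}K_2\;\cup_{\overline{p_3p_1}}K_3,
\]
with K_j=𝔇^'(p_jp_j+1,C_j) as above; the resulting boundary is ∂Σ_Γ=C_1+C_2+C_3=Γ.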
But it is possible that for some j, l_j+p_jp_j+1 -l_j+1 may be contained in a line segment, and in this case the interior of T_j is S\ l_j when p_j+1∈ l_j^∘, or S\ l_j+1 when p_j∈ l_j+1^∘. Recall Remark <ref>, when we regard T_j as a surface, ∂ T_j is a simple closed curve, and thus l_j and l_j+1, regarded as arcs in the surface, only intersect at p_1 in the surface T_j, even if ∂ T_j is just a line segment as a set on S. We sew T_2,…,T_q^'-1 along l_3,… ,l_q^'-2 to obtain a surface P with boundary p_1 p_2… p_q^'p_1. Then we can sew P and K_j along p_jp_j+1 for each j=1,2,…,q^' to obtain a surface Σ_Γ with ∂Σ_Γ=Γ and _maxΣ_Γ≤ q^'-2+q^'≤2q^'-2. Thus we have Σ_Γ∈𝒮_1. It is clear that all branch values of the surface P are contained in {p_1,…,p_q^'}and thus when we patch all lunes K_j to P along p_jp_j+1, no other branch values appeared. Thus all branch values of Σ_Γ are contained in {p_1,… ,p_q^'}⊂ E_q, and so Σ_Γ∈ℱ_r ∩𝒮_1. We will write P=P(p_1,p_2,…,p_q^'). Then it is clear that P(p_1,p_2, …, p_q^') is determined by the ordered points p_1,p_2,…,p_q^' uniquely, and thus Σ_Γ is determined by Γ uniquely, when we execute the above procedure. Uniqueness here is in the sense described in Remark <ref>. If q^'≥4, we can regard l_j,j=3,…,q^'-1, as simple arcs in the surface P with l_j^∘⊂ P^∘and ∂ l_j⊂∂ P so that every pair l_i and l_j with 3≤ i<j≤ q^'-1 only intersect at p_1in P (see Remark <ref> for the convention). Then it is clear that l_3,…,l_q^'-1 divide P into the surfaces T_2,…,T_q^'-1. Then it is clear that when q^'≥4 R( P) =∑_j=2^q^'-1R(T_j)-4π∑_j=2 ^q^'-2#( l_j+1^∘∩ E_q) . The polygon p_1p_2… p_q^'p_1 divides the surface Σ_Γ into the surface P( p_1,p_2 ,…,p_q^') and the q^' lunes K_j ,j=1,…,q^'. Write J=J(Γ)={j:1≤ j≤ q^'and K_j^∘≠∅}. Note that the condition K_j^∘≠∅ means that ∂ K_j=C_j( p_j,p_j+1) -p_jp_j+1 is a Jordan curve on S and p_jp_j+1^∘ (regarded as in the surface Σ_Γ) is in the interior of Σ_Γ. Then we have R(Σ_Γ)=R(P)+∑_j∈ J[ R(K_j)-4π#( p_jp_j+1^∘∩ E_q) ] . Therefore, by (<ref>) we have for q^'≥4 that R(Σ_Γ)=∑_j=2^q^'-1R( T_j) +∑_j∈ J[ R(K_j)-4π#( p_jp_j+1^∘∩ E_q) ] -4π∑_j=2^q^'-2#( l_j+1^∘∩ E_q) . For the cases q^'=2,3, by the definition of T_2 and { K_j} _j=1^q^', we have R(Σ_Γ)=R(T_2)+∑_j∈ J[ R(K_j)-4π#p_jp_j+1^∘∩ E_q] . Let Σ=( f,Δ) be a surface in ℱ such that ∂Σ=Γ_1+Γ_2 and Γ_1 is a closed arc of ∂Σ satisfying Definition <ref> (1)–(3) with the corresponding partition Γ_1=( f,∂Δ) =C_1( p_1,p_2) +⋯+C_q^'( p_q^',p_1) . Let Σ_Γ_1∈𝒮_1∩ℱ_r be the surface given by Solution <ref>. Then there exists a surface Σ_2=( f_2,S) without boundary or Σ_2=( f_2,Δ) with boundary Γ_2 such that the following holds. (i) If ∂Σ=Γ_1, say, Γ_2 reduces to the point p_1, then Σ_2=( f_2,S) is a closed surface and R(Σ)=R(Σ_Γ_1)+R(Σ_2)+8π≤ R(Σ_Γ_1 ), with equality holding if and only if CV_f_2⊂ E_q. (ii) If Γ_2 is not a point, then Σ_2=( f_2 ,Δ) ∈ℱ and R(Σ)+4π=[ R( Σ_Γ_1) +4π] +[ R(Σ_2)+4π] . (iii) CV_f_2⊂ E_q iff Σ∈ℱ_r. In the proof the notations K_j,T_j,l_j,J are defined in Solution <ref>. For all j∈ J, we sew the surface Σ and the surface K_j^c=S\ K_j^∘ along C_j to obtain a surface G_1^' such that ∂ G_1^'=p_1p_2p_3⋯ p_q^'p_1 +Γ_2. By Lemma <ref> (regarding K_j, C_j and p_jp_j+1 as T,γ and γ^' in the lemma, and applying the lemma #J times) we have R(Σ)=R(G_1^')+∑_j∈ J[ R(K_j)-4π#p_jp_j+1^∘∩ E_q] , and Σ∈ℱ_r iff G_1^'∈ℱ_r. We first consider the case q^'=2. Then we have by (<ref>) ∂ G_1^'=p_1p_2p_1+Γ_2. Since ∂ T_2=p_1p_2+p_2p_1, we have by Lemma <ref> R(T_2)=4π#p_1p_2^∘∩ E_q. 
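The identity R(T_2)=4π#(p_1p_2^∘∩ E_q) can also be verified directly; the following sketch assumes, consistently with the identities relating R, A and n used in this and the preceding sections, that R(Σ)=(q-2)A(Σ)-4π n(Σ) with n counting the points of E_q covered by the interior of Σ, and uses that p_1 and p_2 are two distinct points of E_q. Since T_2^∘=S\ p_1p_2,
\[
A(T_2)=4\pi,\qquad
n(T_2)=\#\bigl((S\setminus\overline{p_1p_2})\cap E_q\bigr)=q-2-\#\bigl(\overline{p_1p_2}^{\,\circ}\cap E_q\bigr),
\]
and therefore
\[
R(T_2)=(q-2)\cdot4\pi-4\pi\bigl(q-2-\#(\overline{p_1p_2}^{\,\circ}\cap E_q)\bigr)=4\pi\,\#\bigl(\overline{p_1p_2}^{\,\circ}\cap E_q\bigr),
\]
in agreement with the value just obtained from Lemma <ref>.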
If Γ_2 is a point, then we can sew G_1^' along p_1p_2 to obtain a surface Σ_2=( f_2 ,S) which is closed, say, a complete covering of S onto itself. By Lemma <ref> (i) R(G_1^')=R(Σ_2)+4π#[ p_1p_2∩ E_q] =R(Σ_2)+4π#[ p_1p_2^∘∩ E_q] +8π, and G_1^'∈ℱ_r iff CV_f_2⊂ E_q. Then by (<ref>) we have R(G_1^')=R(Σ_2)+R(T_2)+8π, and then by Lemma <ref>, (<ref>) and (<ref>) we have R(Σ) =R(G_1^')+∑_j∈ J[ R(K_j)-4π#p_jp_j+1^∘∩ E_q] =R(T_2)+R(Σ_2)+8π+∑_j∈ J[ R(K_j)-4π#p_jp_j+1^∘∩ E_q] =R(Σ_Γ)+R(Σ_2)+8π≤ R(Σ_Γ), equality holding iff CV_f_2⊂ E_q, and so (i) holds when q^'=2. Note that the three relations CV_f_2⊂ E_q, G_1^'∈ℱ_rand Σ∈ℱ_r are equivalent. If Γ_2 is not a point, then by Lemma <ref> (ii), for the surface Σ_2=( f_2,Δ) obtained from G_1^' by sewing it along p_1p_2, we have R(G_1^')=R(Σ_2)+4π#p_1p_2∩ E_q -4π=R(Σ_2)+4π#p_1p_2^∘∩ E_q+4π, and G_1^'∈ℱ_r iff Σ_2∈ℱ_r. Thus by (<ref>) (<ref>), and (<ref>) we have R(Σ)=R(Σ_Γ)+R(Σ_2)+4π, which implies (<ref>). On the other hand, the three relations Σ _2∈ℱ_r, G_1^'∈ℱ_rand Σ∈ℱ_r are equivalent again. We have prove the lemma when q^'=2. From now on we assume q^'≥3. We will construct a sequence G_1^',G_2^',…,G_q^'-1^' in ℱ such that, ∂ G_j^'=l_j+1+p_j+1p_j+2… p_q^' p_1+Γ_2,j=1,…,q^'-1, and R(G_j^')=R(G_j+1^')+R(T_j+1)-4π#( l_j+2^∘∩ E_q) ,j=1,…,q^'-2. By (<ref>), G_1^' already satisfies (<ref>). Assume G_1^',…,G_k^', satisfying (<ref>) for j=1,…,k, and (<ref>) for j=1,…,k-1, are already obtained for some k with 1≤ k≤ q^'-2. When ∂ T_k+1=l_k+1+p_kp_k+1-l_k+2is a Jordan curve on S, we sew G_k^' and S\ T_k+1^∘ along l_k+1+p_k+1p_k+2 to obtain a surface G_k+1^'so that (<ref>) holds for j=k+1. Then by Lemma <ref> R(G_k^')=R(G_k+1^')+R(T_k+1)-4π#E_q∩ l_k+2^∘, say, (<ref>) holds for j=k, and moreover, G_k^'∈ℱ_r iff G_k+1^'∈ℱ_r. Consider the case that l_k+1=l_k+1∪p_k+1p_k+2, say l_k+2=l_k+1+p_k+1p_k+2. In this case we just put G_k+1^'=G_k^'. Then (<ref>) holds for j=k+1, by the equality l_k+1+p_k+1p_k+2… p_q^'p_1=l_k+2 +p_k+2… p_q^'p_1. (<ref>) implies ∂ T_k+1=l_k+1+p_k+1p_k+2-l_k+2=l_k+2-l_k+2, and thus we have by Lemma <ref> R(T_k+1)=4π#l_k+2^∘∩ E_q, say, (<ref>) holds for j=k as well. Consider the case l_k+1=l_k+2+p_k+2p_k+1. Then ∂ T_k+1=l_k+1+p_k+1p_k+2-l_k+2=l_k+1-l_k+1, and ∂ G_k^' =l_k+1+p_k+1p_k+2p_k+3… p_q^'p_1+Γ_2 =l_k+2+p_k+2p_k+1+p_k+1p_k+2p_k+3… p_q^'p_1+Γ_2 =l_k+2+p_k+2p_k+1+p_k+1p_k+2+p_k+2p_k+3… p_q^'p_1+Γ_2. Thus we can sew G_k^' itself along p_k+2p_k+1 to obtain G_k+1^' satisfying (<ref>) for j=k+1. By Lemma <ref> (ii) R(G_k^')=R(G_k+1^')+4π#p_k+2p_k+1∩ E_q-4π, and moreover, G_k^'∈ℱ_r iff G_k+1^' ∈ℱ_r. On the other hand #p_k+2p_k+1∩ E_q=#l_k+1^∘∩ E_q-#l_k+2^∘∩ E_q+1. Hence we have R(G_k^')=R(G_k+1^')+4π#l_k+1^∘∩ E_q-4π#l_k+2^∘∩ E_q. By (<ref>) and Lemma <ref> R(T_k+1)=4π#l_k+1^∘∩ E_q. Hence by (<ref>) we have (<ref>) for j=k. We have proved the existence of the surfaces G_j^',j=1,…,q^'-1, in any case, and moreover, G_j^'∈ℱ_r iff G_j+1^'∈ℱ_r for j=1,…,q-1. By (<ref>), we have G_q^'-1^'=l_q^'+p_q^'p_1 +Γ_2=l_q^'-l_q^'+Γ_2, and, by (<ref>) and (<ref>), we have c]l R(Σ)=R(G_q^'-1^')+∑_j=2^q^'-1[ R(T_j)-4π#l_j+1^∘] +∑_j∈ J[ R(K_j)-4π#p_jp_j+1^∘∩ E_q] , which, together with (<ref>) for q^'=3, or (<ref>) for q^'>3, implies that R(Σ)=R(G_q^'-1^')+R(Σ_Γ)-4π#l_q^' ^∘∩ E_q. Note that the term 4π#l_q^'^∘∩ E_q=4π#p_1p_q^'^∘∩ E_q in (<ref>) never appeared in (<ref>). Assume Γ_2 is a point. Then by (<ref>) we can sew the surface G_q^'-1^' along l_q^'=p_1p_q^' so that G_q^'-1^' becomes a closed surface Σ_2=( f_2,S) . 
Then by Lemma <ref> (i) we have R( G_q^'-1^') =R(Σ_2)+4π#l_q^' ∩ E_q=R(Σ_2)+4π#l_q^'^∘∩ E_q+8π, and thus by (<ref>) R(Σ)=R(Σ_Γ_1)+R(Σ_2)+8π. By Lemma <ref> we have R(Σ_2)≤-8π, with equality holding if and only if CV_f_2⊂ E_q. On the other hand it is clear that Σ∈ℱ_r⇔ G_1^'∈ℱ _r⇔ G_2^'∈ℱ_r⇔…⇔ G_q^'-1^'∈ℱ_r ⇔ CV_f_2⊂ E_q, (i) is proved completely. Assume that Γ_2is not a point. Then by (<ref>) we can sew G_q^'-1^' itself along l_q^' =p_1p_q^' to obtain a surface Σ_2=( f_2,Δ) ∈ℱ such that ∂Σ _2=Γ_2. By Lemma <ref> (ii) we have R(G_q^'-1^')=R(Σ_2)+4π#l_q^'∩ E_q -4π=R(Σ_2)+4π#l_q^'^∘∩ E_q+4π, which with (<ref>) implies R(Σ)=R(Σ_Γ)+R(Σ_2)+4π. In consequence we have (<ref>). On other hand, (<ref>) also holds and so (ii) is proved completely. For simplicity, we introduce the following condition: A curve Γ=( f,∂Δ) is called satisfying ( p_1,…,p_m)-Condition, if Γ=( f,∂Δ) has a partition ∂Σ=C_1( p_1,p_2) +⋯+C_m( p_m,p_1) and for each j≤ m, C_j is an SCC arc and the endpoints p_j and p_j+1 are distinct and contained in E_q, and moreover d( p_j,p_j+1) <π. If p_1,…,p_m are distinct each other, then Γ satisfies Definition <ref> (1)–(3) with partition (<ref>). In general it is clear that the following holds. Assume Σ∈ℱ such that ∂Σ satisfies ( p_1,…,p_m)-Condition with partition (<ref>). Then ∂Σ has a partition ∂Σ=Γ_1+Γ_2 satisfying Lemma <ref>, say, Γ_1=C_j( p_j,p_j+1) +C_j+1( p_j+1 ,p_j+2) +…+C_j+k( p_j+k,p_j+k+1) is a closed arc of ∂Σ satisfying Definition <ref> (1)–(3) and Γ_2 is either a point when p_1,…,p_m are distinct each other with j=1 and j+k=m, or Γ_2 =C_1( p_1,p_2) +C_2( p_2 ,p_3) +…+C_j-1( p_j-1,p_j) +C_j+k+1( p_j+k+1,p_j+k+2) +…+C_m( p_m,p_1) which satisfies ( p_1,p_2,…,p_j,p_j+k+1,p_j+k+2 ,…,p_m)-Condition. Let Σ be a surface in ℱ such that ∂Σ satisfying ( p_1,…,p_m)-Condition with partition (<ref>). Then there exist surfaces Σ_j ∈𝒮_1∩ℱ_r with partitions ∂Σ_j=C_j1( p_j1,p_j2) +⋯+C_jq_j ^'( p_jq_j^',p_j1) satisfying Definition <ref> (1)–(4) for j=1,…,k, k≥1, such that L(∂Σ)=∑_j=1^kL( ∂Σ_j) , and R(Σ)+4π≤∑_j=1^k( R(Σ_j) +4π), with equality holding if and only if Σ∈ℱ_r. Moreover, each Σ_j is the solution of ∂Σ_j in Solution <ref> with respect to ∂Σ_j, say, Σ_j =Σ_∂Σ_j, q_1^'+q_2^'+…+q_k^'=q^', {C_jl:j=1,…,k,l=1,…,q_j^'}={C_j:j=1,…,q^'}, and max_j=1,…,kR(Σ_j)+4π/L(∂Σ_j)≥R(Σ)+4π/L(∂Σ). All the conclusions can be obtained by repeating the previous lemma and Claim <ref> several times, except (<ref>). (<ref>) follows from (<ref>), (<ref>) and Lemma <ref>. (i) There exists a surface Σ∈ℱ_r which is a 4π-extremal surface 𝒮_1, say, H_𝒮_1=sup_Σ^'∈𝒮_1R(Σ^')+4π/L(∂Σ^')=R(Σ)+4π/L(∂Σ). (ii) For each surface Σ in 𝒮_1 satisfying (<ref>), there exists a surface Σ^'in 𝒮_0∩ℱ_r such that R(Σ^')+4π/L(∂Σ^')≥R(Σ)+4π/L(∂Σ)=H_𝒮_1. (iii) H_𝒮_1=sup_Σ∈𝒮_1R(Σ)+4π/L(∂Σ)=sup_Σ∈𝒮_0∩ℱ_r R(Σ)+4π/L(∂Σ)=sup_Σ∈𝒮_0 R(Σ)+4π/L(∂Σ)=H_𝒮_0. (iv) For each simplest 4π-extremal surface Σ of 𝒮_0, its boundary ∂Σ is consisted of Q(Σ) strictly SCC arcs (see Definitions <ref> and <ref>). (i) Let H_𝒮_1=sup_Σ^'∈𝒮_1R(Σ^')+4π/L(∂Σ^'). Then there exists a sequence Σ_n in 𝒮_1 such that lim_n→∞R(Σ_n)+4π/L(∂Σ_n )=H_𝒮_1. By Definition of 𝒮_1, we may assume Γ_n=∂Σ_n has the partition Γ_n=∂Σ_n=C_n1( p_1,p_2) +C_n2( p_2,p_3) +…+C_nq^'( p_q^',p_1) such that p_1,…,p_q^' are distinct each other, the endpoints set {p_j}_j=1^q^' of all C_nj are independent of nand ∂Σ_n with this partition satisfy Definition <ref> (1)–(4). Thus we may assume C_nj( p_j,p_j+1) converges to an SCC arc C_j( p_j,p_j+1) for each j=1,… ,q^'. 
It is clear that Γ=C_1( p_1,p_2) +…+C_q^'( p_q^',p_1) satisfies Definition <ref> (1)–(3). Then for the surface Σ_Γ∈𝒮_1∩ℱ_r and its partition {K_j}_j=1^q^'∪{T_j}_j=2^q^'-1 given by Solution <ref>, we have (<ref>) for q^'≥4, or (<ref>) for q^'=2,3. For the surface Σ_Γ_n and its partition {K_nj }_j=1^q^'∪{T_j}_j=2^q^'-1 given by Solution <ref>, we may assume that for each j, either K_nj^∘=∅ for all n, or K_nj^∘≠∅ for all n. Then J_n=J( Σ_n) ={j:K_nj^∘≠∅ ,j∈{1,…,q^'}}=J is independent of n, and Lemma <ref>, (<ref>) and (<ref>) imply that[Note that T_j is the same for each Σ_n, since p_1,…,p_q^' are independent of n.] c]lll R(Σ_n) ≤ R(Σ_Γ_n) =∑_j=2 ^max{2,q^'-1}R( T_j) -4πφ( q^') +∑_j∈ J[ R(K_nj)-4π#( p_jp_j+1 ^∘∩ E_q) ] , where φ(2)=φ(3)=0, φ(q^')=∑_j=2^q^'-2#( l_j+1^∘∩ E_q) for all q^'≥4, and For each j∈ J_n, we may assume K_nj converges to the closed lune K_j enclosed by C_j+p_j+1p_j, which as a limit may be just the segment p_jp_j+1 and we have J_n⊃ J=J(Γ)={j:K_j^∘≠∅,j∈{1,… ,q^'}}. Then it is clear that A(K_nj)→ A(K_j) and for each sufficiently large n n(K_nj)≥n(K_j), which implies lim_n→∞R(K_nj)≤ R(K_j). Therefore by (<ref>) and (<ref>), for the solution Σ_Γ ∈𝒮_1∩ℱ_r to Γ, we have lim_n→∞R(Σ_n) ≤ R(Σ_Γ) =∑_j=2^max{2,q^'-1}R( T_j) -4πφ( q^') +∑_j∈ J[ R(K_j )-4π#( p_jp_j+1^∘∩ E_q) ] . On the other hand we have lim_n→∞L(∂Σ _n)→ L(∂Σ_Γ). Thus we have H_𝒮_1=lim_n→∞R(Σ_n)+4π/L(∂Σ_n)≤R(Σ_Γ)+4π/L(∂Σ_Γ). Since Σ_Γ∈𝒮_1, we have H_𝒮_1=R(Σ_Γ)+4π/L(∂Σ_Γ ). (i) is proved. (ii) Let Σ be any surface in 𝒮_1∩ℱ_r satisfying (i). Then ∂Σ has a partition ∂Σ=C_1( p_1,p_2) +…+C_q^'( p_q^',p_1) satisfying Definition <ref> (1)–(4). But (5) may fails. E_q divides each C_j into subarcs each of which contains no point of E_q in its interior. We will show The partition (<ref>) has a refinement ∂Σ=C_1^'( p_1^',p_2^') +⋯+C_q^''^'( p_q^''^',p_1^') , such that for each j≤ q^'', C_j^'∘∩ E_q=∅,∂ C_j^'⊂ E_q and the two endpoints of C_j^' are distinct for each j=1,…,q^'' (note that q^'' may be larger than q, though q^'≤ q). In other words, ∂Σ with partition (<ref>) satisfies (p_1^',p_2^',⋯,p_q^''^' )-Condition[It is clear that d( p_j,p_j+1) <π for all j=1,…,q^'. But this cannot implies d( p_j^',p_j+1^') <π for all j=1,… ,q^''. So Claim <ref> needs a proof.]. We first show that All C_j^',j=1,2,…,q^'', is contained in an open hemisphere on S. Assume that Claim <ref> fails. Then for some j_0∈{1,… ,q^''}, C_j_0^' is not contained in any open hemisphere and we may assume j_0=1. Then by Lemma <ref> there exists a surface Σ^' such that ∂Σ^'=p_1^'𝔞_2^' +𝔞_2^'𝔞_3^'+⋯ +𝔞_s^'p_2^'+C_2^' +…+C_q^''^', L(∂Σ^')<L(∂Σ),R(Σ^')+4π/L(∂Σ^')>R(Σ)+4π/L(∂Σ )=H_𝒮_1. We can repeat this method at most q^''-1 times for every edge C_j^' which is not contained in any open hemisphere on S, to obtain a surface Σ^'' such that ∂Σ ^'' satisfy (𝔞_1^',𝔞_2^',…,𝔞_m^')-Condition and L(∂Σ^'')≤ L(∂Σ^'),R(Σ^'')+4π/L(∂Σ^'')≥R(Σ^')+4π/L(∂Σ^')>R(Σ)+4π/L(∂Σ). By Corollary <ref>, there exists a surface Σ^'''∈𝒮_1∩ℱ_rsatisfying H_𝒮_1≥R(Σ^''')+4π/L(∂Σ^''')≥R(Σ^'')+4π/L(∂Σ^'')>R(Σ)+4π/L(∂Σ)=H_𝒮_1. This contradiction implies Claim <ref>. By Claim <ref>, we have d( p_j^',p_j+1^') <π for all j=1,…,q^''. Thus Claim <ref> holds. By Claim <ref> and Corollary <ref>, there exists a surface Σ_1∈𝒮_1∩ℱ_r such that R(Σ_1)+4π/L(∂Σ_1)≥R(Σ)+4π/L(∂Σ)=H_𝒮_1, and ∂Σ_1 has a partition satisfying Definition <ref> (1)–(4), and each term of this partition is one of the arc in (<ref>), and thus we have Σ_1∈𝒮_1^(5). Since Σ_1 ∈𝒮_1, (<ref>) in fact implies R(Σ_1)+4π/L(∂Σ_1)=H_𝒮_1. 
Let ∂Σ_1=𝒞_1( 𝔭_1,𝔭 _2) +𝒞_2( 𝔭_2,𝔭_3) +⋯+𝒞_𝔮^'( 𝔭_𝔮 ^',𝔭_1) be the partition of ∂Σ_1 such that each term is a term in (<ref>). Then by Claim <ref> Σ_1 with this partition satisfies Definition <ref> (1)–(6), and thus Σ_1 ∈𝒮_1^(6). (†) Assume that Σ_1∉𝒮_1^(7), say, (<ref>) does not satisfies Definition <ref> (7). Then there exists a pair ( i,j)with i≠ j, such that the curvature of 𝒞_i is not the same as that of 𝒞_j. By Corollary <ref> we can change 𝒞_i and 𝒞_j a little so that Σ_1 becomes another surface Σ_1^' in 𝒮_1 such that R(Σ_1^')>R(Σ_1) and L(∂Σ_1^')=L(∂Σ_1), which deduce the contradiction H(Σ_1^')>H_𝒮_1. We have proved that Σ_1∈𝒮_1^(7). Assume Σ_1∉𝒮_1^(8). Then in the partition (<ref>), there is an edge 𝒞_j_0 which is a major circular arc. Then by Lemma <ref>, we may assume that Σ_1 is the surface Σ_∂Σ_1 given by Solution <ref> and is obtained by sewing {𝒦_j} and {𝒯_j}, where 𝒦_j and 𝒯_j are defined by the partition (<ref>) as in Solution <ref> (as K_j and T_j there). Let Σ_1^' and Σ_2^' be two surface in 𝒮_1 both obtained by deform Σ_1^' via pushing 𝒞_j_0 to the right hand side a little, respectively, so that the endpoints of 𝒞_j_0 remain unchanged, n (Σ_1^')=n( Σ_2^') =n( Σ_1) and L(∂Σ_1^')+L(∂Σ_2^')=2L(∂Σ_1). Then we have A(Σ_1^')+A(Σ_2^')>2A( Σ_1)by Corollary <ref> (B). Thus we have R(Σ_1^')+4π+R(Σ_2^')+4π/L(∂Σ_1^')+L( ∂Σ_2^') >2( R(Σ_1)+4π) /2L(∂Σ_1). which, with Lemma <ref> and (<ref>), implies max{R(Σ_1^')+4π/L(∂Σ_1^'),R(Σ_2^')+4π/L( ∂Σ_2^') } >R(Σ_1)+4π/L(∂Σ_1 )=H_𝒮_1, contradicting definition of H_𝒮_1 again. We have proved Σ_1∈𝒮_1^(8). Up to now, we have proved Σ_1 satisfies (1)–(8) of Definition <ref>, say Σ_1∈𝒮_0. Then we have by (<ref>) H_𝒮_1≥ H_𝒮_0≥R(Σ_1)+4π/L(∂Σ_1)=H_𝒮_1. and (ii) is proved, since Σ_1∈ℱ_r. (iii) follows from (ii) directly. To prove (iv), let Σ∈𝒮_0 be a simplest 4π-extremal surface of 𝒮_0 with the corresponding partition (<ref>) satisfying Definition <ref> (1)–(8). If (iv) fails for Σ, then all arcs C_j in the partition (<ref>) are straight, say, C_j=p_jp_j+1,j=1,…,q^'. We may assume Σ is the solution of Solution <ref> from the partition (<ref>). Then Σ is obtained by sewing P=P( p_1,p_2 ,…,p_q^') and {K_j}_j=1^q^' given by Solution <ref>, each K_j is just the line segment p_jp_j+1 and we have, by Definition <ref> (5), p_jp_j+1^∘∩ E_q=C_j^∘∩ E_q=∅. Then by (<ref>) 1/H_𝒮_1=L(∂Σ)/R(P)+4π=2∑_j=1^q^'L(p_jp_j+1)/2R(P)+8π. It is clear that R(P)+4π>0. Let θ_1,θ_2,… ,θ_q^' be sufficiently small positive angles such that the closed lunes K_j^'=𝔇^'(p_jp_j+1,θ_j) (see Definition <ref>) have the same area, say, A(K_j^')=A(K_1^'),j=1,2,…,q^', and let C_j^' be the circular boundary of 𝔇^'( p_jp_j+1,θ_j) from p_j to p_j+1. Then all θ_j are determined by θ_1 and we can sew P and the lunes K_j^' along ∂ P to obtain a surface Σ_θ_1 such that ∂Σ_θ_1=C_1^'( p_1,p_2) +C_2^'( p_2,p_3) +⋯+C_q^'^'( p_q^',p_1) . By (<ref>) when θ_1 is small enough we have that R(K_j^')=( q-2) A(K_j^') and that R(Σ_θ_1)=R(P)+∑_j=1^q^'R(K_j^' )=R(P)+∑_j=1^q^'( q-2) A(K_j^'). 
Then we have by (<ref>) R(Σ_θ_1)+4π =R(P)+4π+q^'( q-2) A(K_1^') =q^'( q-2) [ R(P)+4π/q^'( q-2) +A( K_1^') ] , and then by (<ref>) L( ∂Σ_θ_1) /R(Σ_θ_1 )+4π =∑_j=1^q^'L( C_j^') /q^'( q-2) [ R(P)+4π/q^'( q-2) +A( K_1^') ] =1/q^'( q-2) ∑_j=1^q^' 2L( C_j^') /2R(P)+8π/q^'( q-2) +2A( K_1^') =1/q^'( q-2) ∑_j=1^q^' 2L( C_j^') /2R(P)+8π/q^'( q-2) +2A( K_j^') , which, with 2L( C_j^') =L( ∂𝔇( p_jp_j+1,θ_j,θ_j) ) and 2A( K_j^') =A( 𝔇( p_jp_j+1,θ_j,θ_j) ) , implies L( ∂Σ_θ_1) /R(Σ_θ_1 )+4π=1/q^'( q-2) ∑_j=1^q^' L( ∂𝔇( p_jp_j+1,θ _j,θ_j) ) /2R(P)+8π/( q-2) q^'+A( 𝔇( p_jp_j+1,θ _j,θ_j) ) . Therefore by Lemma <ref> (i) we have for small enough θ_j=θ_j( θ_1) , L( ∂Σ_θ_1) /R(Σ_θ_1 )+4π <1/q^'( q-2) ∑_j=1^q^' L( ∂𝔇( p_jp_j+1 ,0,0) ) /[ 2R(P)+8π/( q-2) q^'+A( 𝔇( p_jp_j+1,0,0) ) ] =1/( q-2) q^'∑_j=1^q^' 2L( p_jp_j+1) /[ 2R(P)+8π/( q-2) q^'+0] =∑_j=1^q^'L( p_jp_j+1) /R(P)+4π=L(∂Σ)/R(Σ)+4π. Then by (<ref>) we have H_𝒮_1=R(Σ)+4π/L(∂Σ)<R(Σ_θ_1)+4π/L( ∂Σ_θ_1) . But this contradicts the definition of H_𝒮_1, since Σ_θ_1∈𝒮_1. Thus (iv) holds. Using the argument of the paragraph (†) in the above proof, we in fact have proved Theorem <ref> (iii): Assume F_1 and F_2 are any two 4π-extremal surfaces in 𝒮_0. Then k( F_1,E_q) =k( F_2,E_q) . Since H_𝒮_0=H_𝒮_1, both F_1 and F_2 satisfy (<ref>) and each ∂ F_j has s partition ∂ F_j=𝒞_j1+𝒞_j2+,…,𝒞_jk_j satisfying (1)–(8) of Definition <ref>. If for some i_1 and i_2, 𝒞_1i_1 and 𝒞_2i_2 have different curvature, then applying the discussion in the paragraph (†) for Σ_1∉𝒮_1^(7), we can construct two surfaces F_1^' and F_2^' contained in 𝒮_1 by deforming 𝒞_1i_1 and 𝒞_2i_2, as we do for 𝒞_i and 𝒞_j there, so that R(F_1^')+R(F_2^')>R(F_1)+R(F_2) and L(∂ F_1^')+R(∂ F_2^')=L(∂ F_1)+R(∂ F_2). Then we can use Lemma <ref> to show that R(F_j^')+4π/L(∂ F_j^')>H_𝒮_1 for j=1 or 2. This is a contradiction. For any closed convex disk T on S, H_𝒮_0=sup_Σ∈𝒮_0R(Σ)+4π/L(∂Σ)>H(T). Since T is convex, it is contained in a hemisphere S_1 on S. It is clear that there exists a disk T_1 contained in S\ E_q whose boundary is the circumcircle of a regular triangle of edge length δ_E_q. Then we may assume that two points p_1 and p_2 of E_q are contained in ∂ T_1 with d( p_1,p_2) =δ_E_q, and then we have T_1∈𝒮_1. Thus by Lemma <ref> (iii) we have H_𝒮_0=H_𝒮_1=sup_Σ∈𝒮_1 R(Σ)+4π/L(∂Σ)≥R(T_1)+4π/L(∂ T_1)=( q-2) A(T_1)+4π/L(∂ T_1). Then in the case L(∂ T)≤ L(∂ T_1), by Lemma <ref> we have H_𝒮_0≥( q-2) A(T_1)+4π/L(∂ T_1)>( q-2) A(T_1)/L(∂ T_1)=H(T_1)≥ H(T). In the case L(∂ T_1)≤ L(∂ T)≤2π, we may rotate S to move T to another congruent disk T^' such that n( T^') ≤n(T) and ∂ T^'contains at least two points of E_q with distance <π, and thus H(T)≤ H(T^')=R(T^')/L(∂ T^') <R(T^')+4π/L(∂ T^')≤ H_𝒮_1 =H_𝒮_0. We are in the position to complete the proof of our second main theorem. The conclusion (iii) is obtained by Corollary <ref> directly. Let Σ be any surface in ℱ. We first show H(Σ)≤ H_𝒮_0. By Theorem <ref> and Lemma <ref>, <ref> holds when L≤2δ_E_q. Thus to prove (<ref>) we may assume L(∂Σ)>2δ_E_q. Let L be any positive number in ℒ with L≥ L(∂Σ). Then by Theorem <ref> (A) there exists a precise extremal surface Σ_L_1 of ℱ( L) such that L(∂Σ_L_1)=L_1. Then we have H(Σ)≤ H(Σ _L_1) and thus by definition of H H(Σ)≤ H(Σ_L_1)=R(Σ_L_1)/L(∂Σ_L_1)<R(Σ_L_1)+4π/L(∂Σ_L_1). 
By Theorem <ref> (B), for some positive integer n_0, ∂Σ_L_1 has an ℱ( L,n_0)-partition ∂Σ_L_1=C_1( p_1,p_2) +C_2( p_2,p_3) +⋯+C_n_0( p_n_0,p_1) satisfying (B1)–(B4) of Theorem <ref>, which implies that Σ_L_1with partition (<ref>) satisfies ( p_1,p_2,⋯,p_n_0)-Condition when n_0>1. Consider the case that ∂Σ_L_1 contains at most one point of E_q. Then C_1 is an SCC circle and n_0=1, and thus for the closed disk T enclosed by C_1 we have by Lemma <ref> that H(Σ_L_1)=R(Σ_L_1)/L(∂Σ_L_1)≤ H(T), and thus by Lemma <ref> and (<ref>) we have H(Σ)≤ H(Σ_L_1)≤ H(T)<H_𝒮_0. Therefore (<ref>) holds when n_0=1. Assume n_0≥2. Then Claim <ref> and Corollary <ref> imply that there exists a surface Σ_1∈𝒮_1 such that R(Σ_L_1)+4π/L(∂Σ_L_1)≤R(Σ _1)+4π/L(∂Σ_1)≤ H_𝒮_1, which, with (<ref>) and Lemma <ref>, implies H( Σ) <R(Σ_L_1)+4π/L(∂Σ _L_1)≤R(Σ_1)+4π/L(∂Σ_1)≤ H_𝒮_1=H_𝒮_0, and (<ref>) follows. By Lemma <ref>, there exists a surface Σ_0∈𝒮_0 such that R(Σ_0)+4π/L(∂Σ_0)=H_𝒮_0. Then ∂Σ_0 has a partition ∂Σ_0=C_1( p_1,p_2) +C_2( p_2 ,p_3) +⋯+C_q^'( p_q^',p_1) satisfying Definition <ref> (1)–(8). Then by Corollary <ref> there exists a sequence Σ_n in ℱ such that lim_n→∞H(Σ_n)=R(Σ_0)+4π/L(∂Σ_0)=H_𝒮_0. Thus by (<ref>) we have proved H_0=sup_Σ∈𝐅 H(Σ)=sup_Σ∈ℱH(Σ)=H_𝒮_0, and Theorem <ref> (i) is proved. Theorem <ref> (ii) follows from Lemma <ref> (iv) directly. Let 𝔖 be the space of all 4π-extremal surfaces of 𝒮_0. By Theorem <ref> (i), 𝔖≠∅ and Q( Σ) ≤ q for all Σ∈𝔖. Then there exists F∈𝔖 such that q^'=Q(F)=min_Σ∈𝔖Q(Σ). Let 𝔖 _q^' be the subspace of 𝔖 such that Q(Σ )=q^' for all Σ∈𝔖_q^'. Then for every surface Σ∈𝔖_q^', q^'δ_E_q≤ L(∂Σ)<2π q^', and then L=inf_Σ∈𝔖_q^'L(∂Σ)≥ q^'δ_E_q≥2δ_E_q. Then there exists a sequence Σ_n in 𝔖_q^' such that L(∂Σ_n)=L, and Σ_n has the partition ∂Σ_n=C_n1( p_1,p_2) +C_n2( p_2,p_3) +⋯+C_nq^'( p_q^' ,p_1) satisfying Definition <ref> (1)–(8) and Theorem <ref> (ii), where p_1,…,p_q^' are independent of nand distinct each other. If Σ_n∉ℱ_r for some n, then by Lemma <ref> (i), for the surface Σ_∂Σ_n∈𝒮_1 given by Solution <ref> from ∂Σ_n and the partition (<ref>), we have R(Σ_n)<R(Σ_∂Σ_n), which implies H_𝒮_0=H(Σ_n)<H(Σ_∂Σ_n)≤ H_𝒮_1. This contradicts H_𝒮_0=H_𝒮_1. Thus Σ _n∈ℱ_r for all n. By Corollary <ref>, k( Σ_n,E_q) =k is a constant for all n=1,2,…, say, all C_nj have the same curvature, and then C_nj=C_n1 for all n and all j≤ q^'. Thus, ∂Σ_n=∂Σ_1 and L(∂Σ_1)=L and so Σ_1 ∈𝔖_q^' satisfies Definition <ref> (1) and (2). Since _maxΣ^'≤2q^'-2 for all Σ^'∈𝔖_q^', there exists Σ∈𝔖_q^'such that _maxΣ≤_maxΣ^' for all Σ^'∈𝔖 _q^'. Therefore 𝒮^∗≠∅. We first prove (i). Let Σ∈𝒮^∗ and assume Q(Σ)=2. Then ∂Σ has a partition ∂Σ=C_1( p_1,p_2) +C_2( p_2,p_1) satisfying Definition <ref> (8), and C_1 and C_2 are both strictly convex, by Theorem <ref> (ii). Then ∂Σ encloses a closed lens Σ^'=𝔇(p_1p_2,θ_0,θ_0) with θ _0∈(0,π/2], and by Corollary <ref> we have R(Σ)≤ R(Σ^') and thus R(Σ)=R(Σ^') and Σ^'∈𝒮 ^∗. Therefore Σand Σ^'both equal the lens 𝔇(p_1p_2,θ_0,θ_0),by Corollary <ref>, and we have H_0=H_𝒮_0=R(Σ)+4π/L(∂Σ) =R(Σ^')+4π/L(∂Σ^')=R( 𝔇(p_1p_2,θ_0,θ_0)) +4π/L( ∂𝔇( p_1p_2,θ_0 ,θ_0) ) . We will show d( p_1,p_2) =δ_E_q. Assume this fails. We will deduce a contradiction. We first show that θ_0<π/2 when d( p_1,p_2) >δ_E_q. Assume θ_0=π/2. Then T_p_1,p_2 =𝔇(p_1p_2,π/2,π/2) is a disk. Let δ_0=L(p_1p_2). 
Then it is clear that for an open neighborhood I_δ_0^∘=( δ_0-ε,δ _0+ε) of δ_0 with δ_0-ε >δ_E_q, there exists a family 𝒯_I_δ_0^∘={𝔇 (p_1p_2,δ,π/2,π/2):δ=d( p_1,p_2,δ) ∈ I_δ_0^∘,} of convex disks on S such that n( T_p_1,p_2,δ ) =n( T_p_1,p_2) is a constant for each δ∈ I_δ_0^∘. In fact, ∂ T_p_1,p_2 contains only two points p_1,p_2 in E_q. Then rotating T_p_1,p_2 a little about p_1 we can obtain the desired family. Then by Lemma <ref> (ii4) there exists δ_1∈ I_δ_0 ^∘ and a disk in the family which can be write as T_p_1 ,p_2,δ_1=𝔇(p_1p_2,δ_1 ,π/2,π/2) with δ_1=d( p_1,p_2,δ_1) ∈ I_δ_0^∘ such that H_𝒮_0=R( T_p_1,p_2) +4π/L( ∂ T_p_1,p_2) <R( T_p_1,p_2,δ_1 ) +4π/L( ∂ T_p_1,p_2,δ_1) . Since δ_1>δ_0-ε>δ_E_q, the diameter of ∂ T_p_1,p_2,δ_1 is larger than δ_E_q, and then we may move the disk T_p_1,p_2,δ_1 congruently to another closed disk T_δ_1 so that its boundary contains at least two points of E_qand n( T_δ_1) ≤n( T_p_1,p_2,δ) . Then we have T_δ_1∈𝒮_1 and R( T_p_1,p_2,δ_1) +4π/L( ∂ T_p_1,p_2,δ_1) ≤R( T_δ_1) +4π/L( ∂ T_δ_1) ≤ H_𝒮_1, which with (<ref>) implies H_𝒮_1>H_𝒮_0. This is a contradiction by Lemma <ref> (iii). We have proved θ_0<π/2when d( p_1,p_2) >δ_E_q. Now that we have proved θ_0<π/2, there exists a point p_2^'∈p_1p_2^∘ near p_2 such that d( p_1,p_2^') >δ_E_q, L(∂𝔇( p_1p_2^',θ _p_2^',θ_p_2^') )=L(𝔇 (p_1p_2,θ_0,θ_0)), n( 𝔇( p_1p_2^' ,θ_p_2^',θ_p_2^') ) =n( 𝔇( p_1p_2,θ _0,θ_0) ) , and θ_0<θ_p_2^'<π/2. Then by Lemma <ref>, putting A_0=4π-4πn( 𝔇(p_1p_2,θ_0,θ_0)) , we have H_𝒮_0=R( 𝔇(p_1p_2 ,θ_0,θ_0)) +4π/L( ∂𝔇( p_1p_2,θ_0,θ_0) ) <R( 𝔇(p_1p_2^',θ_p_2^' ,θ_p_2^')) +4π/L( ∂𝔇( p_1p_2^',θ_p_2^',θ_p_2^' ) ) . Now we let F be the closed domain 𝔇(p_1p_2^',θ_p_2^',θ_p_2^'). Then F has diameter larger than δ_E_qand ∂ F∩ E_q={p_1}, but we can rotate F on S to another domain F^' so that H(F^')≥ H(F) and ∂ F^' contains two points q_1 and q_2 of E_q. Thus ∂ F^' satisfies (2) of Lemma <ref> and thus we have by (<ref>) that H_0≥R(F^')+4π/L(∂ F^')≥R(F)+4π/L(∂ F)=R( 𝔇(p_1p_2^' ,θ_p_2^',θ_p_2^')) +4π/L( ∂𝔇( p_1p_2^',θ _p_2^',θ_p_2^') ) >H_𝒮 _0. This contradicts Theorem <ref> again, and thus we have (<ref>) and Corollary <ref> (i) is proved. Assume q=3 and E_q=E_3 is contained in a great circle on S. We will show q^'=Q(Σ)=2. It suffices to show a contradiction when we assume q^'=3. When q^'=3, ∂Σ has a partition ∂Σ=C_1( p_1,p_2) +C_2( p_2 ,p_3) +C_3( p_3,p_1) , satisfies Definition <ref> (1)–(8) and Definition <ref> (1)–(3), and by Lemma <ref> we have R(Σ)≤ R(Σ_∂Σ),∂Σ=∂Σ_∂Σ where Σ_∂Σ∈𝒮_1 is given by Solution <ref>, say, Σ is the surface obtained by gluing the closed Jordan domain T_2,K_1,K_2,K_3 in Solution <ref>. But we have proved H_𝒮_1=H_𝒮_0=H_0, and then we have R( Σ) =R(Σ_∂Σ). By Definition <ref> (8) and by Theorem <ref> (ii), each C_j is not a major circular arc and is strictly convex. Thus for each K_j we may write K_j=𝔇^'( p_jp_j+1,θ_j,θ_j) with θ_j∈(0,π/2]. If p_1p_2p_3p_1 is a Jordan curve, then n( 𝔇( p_jp_j+1,θ_j,θ _j) ) =0 for j=1,2,3, A(T_2)=2π, since E_q is on a great circle, and by (<ref>), we have R(Σ)+4π =4π+R(Σ_∂Σ)=4π+A(T_2)+∑ _j=1^3[ R(K_j)-4π#p_jp_j+1^∘∩ E_q] =6π+∑_j=1^3R(K_j)=1/2∑_j=1^3[ R( 𝔇( p_jp_j+1,θ_j,θ_j) ) +4π] , and H_𝒮_0=R(Σ)+4π/L(∂Σ)=1/2∑_j=1^3R( 𝔇( p_jp_j+1 ,θ_j,θ_j) ) +4π/1/2∑ L( ∂𝔇( p_jp_j+1,θ_j,θ _j) ) . We may assume R( 𝔇( p_1p_2 ,θ_1,θ_1) ) +4π/L( ∂𝔇( p_1p_2,θ_1,θ_1) ) ≥R( 𝔇( p_jp_j+1 ,θ_j,θ_j) ) +4π/L( ∂𝔇( p_jp_j+1,θ_j,θ_j) ) ,j=2,3. 
It is clear that 𝔇( p_1p_2,θ_1,θ_1) ∈𝒮_0, and then we have H_𝒮_0=R(Σ)+4π/L(∂Σ)≤R( 𝔇( p_1p_2,θ_1,θ_1) ) +4π/L( ∂𝔇( p_1p_2 ,θ_1,θ_1) ) ≤ H_𝒮_0. Then 𝔇( p_1p_2,θ_1 ,θ_1) ∈𝒮^∗ and L( ∂𝔇( p_1p_2,θ_1,θ_1) ) <L( ∂Σ) . This is a contradiction since we assumed q^'=3and Σ∈𝒮^∗. If p_1p_2p_3p_1 is not a Jordan domain, then A(T_2)=4π and we may assume p_2∈p_1p_3^∘. Then we have n( 𝔇( p_jp_j+1 ,θ_j,θ_j) ) =0 for j=1,2, and n( 𝔇( p_3p_1,θ_3,θ _3) ) =1, in other words we have p_jp_j+1 ^∘∩ E_q=∅ for j=1,2 and p_1p_3^∘∩ E_q=1. Therefore we have R( 𝔇( p_jp_j+1,θ_j,θ_j) ) =2R(K_j) for j=1,2 and R( 𝔇( p_3p_1,θ_3,θ _3) ) =2R(K_3)-4π, and so, by (<ref>), we have R(Σ)+4π =4π+R(Σ_∂Σ)=4π+A( T_2) +∑_j=1^3[ R(K_j)-4π#p_jp_j+1^∘∩ E_q] =4π+∑_j=1^3R(K_j)=4π+2π+1/2∑_j=1^3[ R( 𝔇( p_jp_j+1,θ_j,θ _j) ) ] =1/2∑_j=1^3[ R( 𝔇( p_jp_j+1,θ_j,θ_j) ) +4π] and H_𝒮_0=R(Σ)+4π/L(∂Σ)=1/2∑_j=1^3R( 𝔇( p_jp_j+1 ,θ_j,θ_j) ) +4π/1/2∑ L( ∂𝔇( p_jp_j+1,θ_j,θ _j) ) . We may assume R( 𝔇( p_j_0 p_j_0+1,θ_j_0,θ_j_0) ) +4π/L( ∂𝔇( p_j_0p_j_0+1,θ_j_0 ,θ_j_0) ) ≥R( 𝔇( p_jp_j+1,θ_j,θ_j) ) +4π/L( ∂𝔇( p_jp_j+1,θ_j,θ _j) ) ,j=1,2,3. It is clear that T=𝔇 ( p_j_0p_j_0+1,θ_j_0,θ_j_0) ∈𝒮_0, and then by Lemma <ref> we have H_𝒮_0=R(Σ)+4π/L(∂Σ)≤R( 𝔇( p_j_0p_j_0+1,θ_j_0 ,θ_j_0) ) +4π/L( ∂𝔇( p_j_0p_j_0+1,θ_j_0,θ_j_0) ) ≤ H_𝒮_0. Then we have R(T)+4π/L(∂ T)=H_𝒮_0=H_0 and L(∂ T)<L(∂Σ). Thus T∈𝒮_0 is a 4π-extremal surface in 𝒮_0 and so q^'=Q(Σ)≤ Q(T)=2. This contradicts the assumption q^'=Q(E_q)=3. Then we in fact proved q^'=Q(Σ)=2when q=3and E_q lies on a great circle, and so (ii) is proved. Now we assume that q=q^'=Q(Σ)=3 and p_1,p_2,p_3 are not on a great circle. Then the triangle p_1p_2p_3p_1 is in an open hemisphere on S. And it is clear that ∂Σ has a partition ∂Σ=C_1( p_1,p_2) +C_2( p_2,p_3) +C_3( p_3,p_1) satisfying Definition <ref> (1)–(8) and Definition <ref> (1)–(3). By Theorem <ref> (ii) each C_j is strictly convex. By Definition of 𝒮^∗ and Corollary <ref>, there is no other surface Σ^' in 𝒮_0 such that ∂Σ^'=∂Σand _maxΣ^'<_maxΣ. Thus Σ=Σ_∂Σ, which is given in Solution <ref>. Therefore we have proved Corollary <ref> (iii). To prove (iv) of the corollary, we will show n( Σ) =#f^-1(E_3)∩Δ=0, whether q^'=2 or 3. If q^'=2, then, by Corollary <ref> (i), we may assume Σ=𝔇(p_1p_2,θ_0,θ_0) for some θ_0∈(0,π/2], with d( p_1,p_2) =δ_E_3, and then (<ref>) holds and ∂Σ has the partition ∂Σ=C_1( p_1,p_2) +C_2( p_2 ,p_1) =∂𝔇(p_1p_2,θ_0 ,θ_0), such that C_j are symmetric strictly convex circular arcs which are not major. If E_3 is contained in a great circle on S, then q^'=2 by Corollary <ref> (ii), and thus we may again choose Σ=𝔇(p_1p_2,θ_0,θ_0) for some θ_0∈(0,π/2] satisfying (<ref>) and (<ref>). Assume q^'=3=q. Then {p_1,p_2,p_3} are not on a great circle and ∂Σ_0=C_1( p_1,p_2) +C_2( p_2 ,p_3) +C_3( p_3,p_1) , with C_j^∘∩ E_q=∅. Then p_1p_2p_3 p_1 is a triangle which is either strictly convex at all vertices or concave at all vertices. If p_1p_2p_3p_1 is convex, then (<ref>) holds clearly. Assume p_1p_2p_3p_1 is concave and (<ref>) fails. Then n( Σ) ≥1 and for the closed domain T enclosed by p_1p_3p_2p_1=-p_1p_2p_3p_1 , T is contained in some open hemisphere on S containing p_1p_3p_2p_1, we have T∈𝒮_0 and thus we have the the contradiction (note that q-2=1) H_𝒮_0 =H_0=R(Σ)+4π/L(∂Σ) =A(Σ)-4πn( Σ) +4π/L(∂Σ)≤A(Σ)/L(∂Σ) <4π/L(∂ T)≤A(T)+4π/L(∂ T). It is clear that T∈𝒮_0. Since the three edges of T is not strictly convex, by Theorem <ref> (ii) we have A(T)+4π/L(∂ T)<H_𝒮_0. Then by (<ref>) we have the contradiction H_𝒮_0<H_𝒮_0. 
We have proved (<ref>), whether p_1p_2p_3p_1 is convex or concave, and (iv) has been proved. Continuing the above argument, we can prove Theorem <ref>. Assume q=3,E_3={p_1,p_2,p_3} and Σ_0=( f_0,Δ) ∈𝒮^∗( E_3) . Then we may further assume δ_E_3=d( p_1,p_2) ≤ d( p_1,p_3) ≤ d( p_2,p_3) , and then p_1p_3^∘ does not contain p_2. Therefore ∂Σ_0 has the partition (<ref>) (or (<ref>)), (<ref>) holds, and C_j^∘∩ E_q=∅ for each term C_j in the partition. Then by Definition of 𝒮_0 and by Theorem <ref> we have the following: H_0( E_3) =H_𝒮_0( E_3) =R(Σ_0)+4π/L(∂Σ_0)≥ h_0( E_3) . Now, inspired by the method on page 215 in <cit.>, we construct a sequence of surfaces {Σ_n}⊂𝐅^∗ =𝐅^∗(Eq), such that H(Σ_n)→ H_0( E_3) . Let S^' be the surface with interior S\ C_1 and boundary C_1-C_1. Then we can sew Σ_0 and the surface S^' along C_1 to obtain a surface Σ _0^'=( f_0^',Δ) . It is clear that A(Σ_0^')=A(Σ_0)+4π,L(∂Σ_0^')=L(∂Σ_0), and f_0^'-1(E_3)∩Δ contains only one point a with f_0^'( a) =p_3. So we may assume a=0. Then the line segment p_1p_3 has an f_0^'-lift α=α( a_1,0) in Δ such α∩∂Δ={a_1}, f(a_1 )=p_1 and f( 0) =p_3, and we may assume that -α=[0,1]. Let Σ_n=( f_n,Δ^+) with f_n=f_0^'( z^2n) ,z∈Δ^+. Then we have n( Σ_n,E_3) =0, L(∂Σ_n)=nL( ∂Σ_0) +2L(p_1p_3 ), A( Σ_n) =n( A( Σ _0) +4π) , and so Σ_n∈𝐅^∗( E_3) (note that q=3) and we have h_0( E_3) ≥R(Σ_n,E_3)/L(∂Σ _n)=nA( Σ_0) +4π n/nL(∂Σ _0)+2L(p_1p_3)→A( Σ_0) +4π/L(∂Σ_0)=H_0( E_3) and so we have (<ref>)when q=3, since H_0( E_3) ≥ h_0( E_3) . (<ref>) may fail when q≥4, and the following is a counter example for (<ref>). Let q≥4, let E_q be the set {p_1,p_2,…,p_q} with 0=p_1<p_2<…<p_q=1, E_3={p_2,p_3,p_4} and δ=d( p_1,p_2) <1/q^3δ_{p_2 ,⋯,p_q}, and let D be the small disk with diameter p_1p_2. Then D∈𝒮_0(E_q) and thus we have H_0(E_q)=H_𝒮_0(E_q)≥( q-2) A(D)+4π/L(∂ D)≥4π/2πδ=2/δ. It is clear that we have 𝐅^∗( E_q) ⊂𝐅^∗( E_3) . Then for each Σ∈𝐅^∗( E_q) , by the result H_0(E_3 )=h_0( E_3) just proved we have H(Σ)=H(Σ,E_q)=( q-2) A(Σ)/L(∂Σ)≤( q-2) h_0(E_3)=( q-2) H_0 (E_3). Thus we have h_0( E_q) =sup_Σ∈𝐅^∗( E_q) H(Σ,E_q)≤( q-2) H_0(E_3). By Corollary <ref>, there exists a surface Σ_0∈𝒮 ^∗( E_3) such that Σ_0=𝔇 (δ_E_3,θ_0,θ_0) and H_0(E_3)=R(Σ_0,E_3)+4π/L(∂Σ_0). On the other hand we have, by q≥4, that R(Σ_0,E_3)+4π/L(∂Σ_0) ≤(3-2)A(𝔇(δ_E_3,θ_0,θ_0))+4π/2δ_E_3 ≤6π/2δ_E_3≤3π/q^3d( p_1 ,p_2) =3π/q^3δ<1/qδ. Therefore we have H_0(E_3)<1/qδ, and thus h_0(E_q)≤( q-2) H_0(E_3)<q-2/qδ<1/δ<H_0(E_q). § NOTATIONS AND TERMINOLOGY —–a circle C determined by a circular arc c: C is the circle containing c and oriented by c, p. determine —–boundary radius, Remark <ref>, bdr —–CCM: Complete covering mapping, p. CCM —–BCCM: branched complete covering mapping, p. BCCM —–CTTD: closed topological triangular domain, Convention <ref>, p. conv-1 —–closed domain: closure of a domain, AhlforsS —–convex (path, arc, curve): Definition <ref>, p. in —–decomposable sequence, Definition <ref>, p. undec-seqr —–decomposable surface, Lemma <ref>, p. undec —–domain: connected open set, AhlforsS —–extremal surface, precise extremal surfaces, Definition <ref>, p. HL —–interior angle of a surface at a boundary point, Definition <ref>, p. interiorangle —–OPH: orientation preserving homeomorphism, p. OPH —–OPCOFO(M): orientation-preserving, continuous, open and finite-to-one (mapping), p. OPCOFOM —–SCC arc: simple, convex circular arc, Definition <ref>, p. F —–B_f,B_f( A) ,B_f^∗,B_f^∗( A) ,p. BBC —–C_f,C_f( A) ,C_f^∗,C_f^∗( A) ,p. BBC —–CV_f,CV_f( K)p. 
BBC —–𝒞( L,m) ⊃𝒞^∗( L,m) ⊃ℱ(L,m)⊃ℱ_r(L,m): subspaces of ℱ( L) , Definition <ref>, p. circu —–d( ·,·) the spherical distance on the Riemann sphere S, —–d_f( ·,·) the distance defined on the surface ( f,Δ) : Definition <ref>, p. df —–D(p,δ), the disk on S with center p and spherical radius δ: p. cov-1 —–𝔇^'( I,θ) ,𝔇^'( I,k) ,𝔇^'( I,c) :lune, Definition <ref>, p. lune-lens —–𝔇( I,θ_1,θ_2) ,𝔇( I,k_1,k_2) ,𝔇( I,c_1,c_2) : lens Definition <ref>, p. lune-lens —–Δ: the open disk {z:| z| <1,z∈ℂ}, p. Delta —–Δ^±: Δ^±=Δ∩ H^±, p. Del+- —–( ∂Δ) ^±=( ∂Δ) ∩H^±, p. +-arc —–E_q={𝔞_1,𝔞_2,⋯,𝔞_q}: a set of distinct q points on the Riemann sphere S, p. Eq —–𝐅: Definition <ref>, p. family F,D —–𝐅^∗=𝐅^∗(E_q): subspace of 𝐅, p. F* —–ℱ,ℱ( L) : subspace of 𝐅: Definition <ref>, p. F —–ℱ_r,ℱ_r( L) ,ℱ (L,m),ℱ_r( L,m) : subspace of ℱ , Definition <ref> (c) and (d), p. FrFLMr —–ℱ_r^'(L,m): subspace of ℱ_r( L,m) : Definition <ref>, p. FR' —–h_0=h_0(E_q), p. h0 —–H_L=H_L(E_q), p. HL(E) —–H_0=H_0(E_q), p. H_0 —–H( Σ) =H(Σ,E_q): H(Σ)=R(Σ )/L(∂Σ), p. DH —–H^+: the upper half plane Imz>0, p. H+- —–H^-: the lower half plane Imz<0, p. H+- —–H^±: the closure of H^±on S, p. H+- —–ℒ: the continuous point set of the function H_L of L, Definition <ref>, p. L —–n(Σ)=n(Σ,E_q),n (Σ,𝔞_v), (<ref>) in p. a70 —–ℕ: the set of natural numbers {1,2,…,}, p. nature —–ℕ^0:ℕ∪{0}, p. nature —–P: stereographic projection SP —–R(Σ)=R(Σ,E_q): Ahlfors error term, (<ref>) p. Ahero —–S: unit Riemann sphere with area 4π, p. RS —–𝒮_1,𝒮_1^(5),𝒮_1^(6) ,𝒮_1^(7),𝒮_1^(8): subspace of ℱ: Definition <ref>, p. S15678 —–𝒮_0=𝒮_1^(5)∩𝒮_1^(6) ∩𝒮_1^(7)∩𝒮_1^(8): Definition <ref>, p. S-surface —–𝒮^∗: subspace of 𝒮_0: Definition <ref>, p. 4psim —–( ·) ^∘: interior of an arc (path), p. boundary, interior of a surface, Definition <ref>, p. interior —–∂( ·) : in Definition <ref>: set of end points of an arc, p. boundaryarc or boundary of a domain on S or a surface, p. boundary. —–( ·) : closure of a set, p. cl —–#( ·) : the cardinality of a set, p. card —–∼: in Remark <ref>: equivalence of surfaces, or curves, Remark <ref>, p. finite 99 Ah0Ahlfors, L. V., Complex analysis, McGraw-Hill, third edition, 1979. AhAhlfors, L. V., Zur theorie der Üherlagerung-Sflächen, Acta Math., 65 (1935), 157-194. BerBernstein, F., Über die isoperimetrische Eigenschaft des Kreises auf der Kugeloberfläche und in der Ebene, Math. Ann., vol. 60 (1905), pp. 117-136. CLZ2Chen, Y.-L., Lin, T.-R. & Zhang, G.-Y., Movement of branch points in Ahlfors' theory of covering surfaces, preprint, DrDrasin, D., The impact of Lars Ahlfors' work in value-distribution theory, Ann. Acad. Sci. Fenn. Ser. A I Math. 13 (1988), no. 3, 329–353. DuDufresnoy, J., Sur les domaines couverts par les valeurs d'une fonction méromorphe ou algébroïde, Ann. Sci. École. Norm. Sup. 58. (1941), 179-259. EreEremenko, A., Ahlfors' contribution to the theory of meromorphic functions, Lectures in memory of Lars Ahlfors (Haifa, 1996), 41–63, Israel Math. Conf. Proc., 14, Bar-Ilan Univ., Ramat Gan, 2000. HaHayman, W.K., Meromorphic functions, Oxford, 1964. L-S-ZLi, W.; Sun, Z. H.; Zhang, G. Y., Properties of Ahlfors constant in Ahlfors covering surface theory. Front. Math. China 16 (2021), no. 4, 957–977. CLZ1Lin, T.-R., Chen, Y.-L. & Zhang, G.-Y., A finite theorem for Ahlfors' covering surface theory, https://doi.org/10.48550/arXiv.2305.13526. NNevanlinna, R., Zur theorie der meromorphen funktionen. Acta Math. 46, 1-99 (1925) NeNevanlinna, R. 
Analytic Functions, Translated from the second German edition by Phillip Emig, Die Grundlehren der Mathematischen Wissenschaften, Band 162 Springer-Verlag, New York-Berlin 1970. RRado, T.: The isoperimetric inequality on the sphere. Amer. J. Math. 57(4), 765-770 (1935) RiRickman, S., Quasiregular Mappings (Ergebnisse Der Mathematik Und Ihrer Grenzgebiete 3 Folge), Springer, 1993. SStoilow, S., Lecons sur les Principes Topologiques de la Theorie des Fonctions Analytiques. Gauthier-Villars, Paris (1956) S-ZSun, Z. H. & Zhang, G. Y., Branch values in Ahlfors' theory of covering surfaces, Science China Mathematics, Vol. 63 No. 8: 1535-1558. YYang, L., Value Distribution Theory. Springer, Berlin (1993) SZZhang, G. Y. & Sun, Z. H., The Impossibility of the Equality in Ahlfors' Second Fundamental Theorem, Scientia Sinica (Mathematica) (in Chinese) (2019), No. 10, 1445–1462. Z1Zhang G.Y., Curves, Domains and Picard's Theorem. Bull. London. Math. Soc. 34(2),205-211(2002) Zh1Zhang G.Y., The precise bound for the area-length ratio in Ahifors' theory of covering surfaces. Invent math 191:197-253 (2013)
http://arxiv.org/abs/2307.05453v1
20230711173217
Multipliers and equivalence of functions, spaces, and operators
[ "Cristina Camara", "Carlos Carteiro. William T. Ross" ]
math.FA
[ "math.FA", "math.CV", "47A15, 47A65, 30D55, 30J05" ]
http://arxiv.org/abs/2307.04086v1
20230709024221
Age of FGK Dwarfs Observed with LAMOST and GALAH: Considering the Oxygen Enhancement
[ "Tiancheng Sun", "Zhishuai Ge", "Xunzhou Chen", "Shaolan Bi", "Tanda Li", "Xianfei Zhang", "Yaguang Li", "Yaqian Wu", "Sarah A. Bird", "Ferguson J. W.", "Jianzhao Zhou", "Lifei Ye", "Liu Long", "Jinghua Zhang" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.GA" ]
Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China Beijing Planetarium, Beijing Academy of Science and Technology, Beijing, 100044, China [email protected] Research Center for Intelligent Computing Platforms, Zhejiang Laboratory, Hangzhou 311100, China [email protected] Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China [email protected] Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, United Kingdom Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China Sydney Institute for Astronomy (SIfA), School of Physics, University of Sydney, NSW 2006, Australia Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, A20 Datun Rd., Chaoyang District, Beijing 100101, People’s Republic of China Center for Astronomy and Space Sciences, China Three Gorges University, Yichang 443002, People's Republic of China Department of Physics, Wichita State University, Wichita, KS 67260-0032, USA Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, A20 Datun Rd., Chaoyang District, Beijing 100101, People’s Republic of China Varying oxygen abundance could impact the modeling-inferred ages. This work aims to estimate the ages of dwarfs considering observed oxygen abundance. To characterize 67,503 LAMOST and 4,006 GALAH FGK-type dwarf stars, we construct a grid of stellar models which take into account oxygen abundance as an independent model input. Compared with ages determined with commonly-used α-enhanced models, we find a difference of ∼9% on average when the observed oxygen abundance is considered. The age differences between the two types of models are correlated to [Fe/H] and [O/α], and they are relatively significant on stars with [Fe/H] ≲ -0.6 dex. Generally, varying 0.2 dex in [O/α] will alter the age estimates of metal-rich (-0.2 < [Fe/H] < 0.2) stars by ∼10%, and relatively metal-poor (-1 < [Fe/H] < -0.2) stars by ∼15%. Of the low-O stars with [Fe/H] < 0.1 dex and [O/α] ∼ -0.2 dex, many have fractional age differences of ≥ 10%, and even reach up to 27%. The fractional age difference of high-O stars with [O/α] ∼ 0.4 dex reaches up to -33% to -42% at [Fe/H] ≲ -0.6 dex. We also analyze the chemical properties of these stars. 
We find a decreasing trend of [Fe/H] with age from 7.5–9 Gyr to 5–6.5 Gyr for the stars from the LAMOST and GALAH. The [O/Fe] of these stars increases with decreasing age from 7.5–9 Gyr to 3–4 Gyr, indicating that the younger population is more O-rich. § INTRODUCTION Galactic archaeology uses the chemical abundances, kinematics, and derived ages of resolved stellar populations as fossils to investigate the formation and evolution history of the Milky Way <cit.>. However, in comparison to chemical abundance and kinematics estimation, estimating the ages of field stars is a challenging task due to the inherent uncertainties present in both observational data and the stellar models employed for dating stars <cit.>. The chemical composition of a star is a fundamental input parameter in the construction of its theoretical model, which is critical in the determination of its age. Notably, at fixed [Fe/H], the abundance variations of individual elements exert a consequential impact on the overall metallicity Z, which subsequently determines the opacity of the stellar models. This, in turn, influences the efficiency of energy transfer and the thermal structure, thereby altering the evolution tracks on the HR diagram and the main-sequence lifetime <cit.>. Consequently, in the context of stellar modeling, it is essential to consider the proper metal mixture in order to accurately characterize stars and determine their ages. The solar-scaled ([α/Fe] = 0) and α-enhanced mixtures have been commonly used in theoretical model grids like Y2 isochrones <cit.>, Dartmouth Stellar Evolution Database <cit.>, and Padova stellar models <cit.>. These models treated all the α-elements, that are O, Ne, Mg, Si, S, Ca, Ti, by the same factor. Observations from high-resolution spectroscopic data have presented very different O-enhancement values from other α-elements on many stars <cit.>. The observed discrepancies in the abundances of oxygen and other α-elements can be attributed to the diverse origins of these elements. Specifically, O and Mg are believed to be primarily synthesized during the hydrostatic burning phase of massive stars and subsequently ejected during the core-collapse supernovae (CCSNe) <cit.>. Nevertheless, some works have provided evidence that Mg might also be partially released into the interstellar medium by SNe Ia <cit.>, while O appears to be solely enriched by CCSNe <cit.>. The other α-elements, namely Si, Ca, and Ti, primarily originate from the explosive burning of CCSNe and are partially contributed by SNe Ia <cit.>. For instance, 22% of Si and 39% of Ca come from SNe Ia according to the chemical evolution models in <cit.>. Therefore, not all α-elements vary in lockstep, the abundance of oxygen may not necessarily correlate with the abundance of other α-elements. Many works have also discussed the effects of varying individual element abundances on the stellar evolution models <cit.>. Theoretical models showed that the oxygen abundance influences the stellar evolution differently from the other α-elements <cit.>. Furthermore, <cit.> proposed the CO-extreme models, which treat oxygen abundance differently from the other α-elements and add carbon abundance in the stellar evolution models. The models have been employed to determine the ages of thousands of metal-poor halo stars, disk stars, and main sequence turn-off stars <cit.>. These results showed that increasing oxygen abundance leads to smaller age determination for the stars with [Fe/H] < -0.2. 
For the stars with [Fe/H] < -0.2 and [O/α] > 0.2 dex, the age difference would be about 1 Gyr. Due to the limited sample sizes of previous studies (<cit.>, with 70 stars, and <cit.>, with 148 stars) or the restricted range of [Fe/H] values <cit.>, there is a pressing need for a large and self-consistent sample to conduct a quantitative analysis regarding the impact of O-enhancement on age determination. Recently, millions of stars' individual element abundances have been measured by spectroscopic surveys like LAMOST <cit.>, APOGEE <cit.>, and GALAH <cit.>. These large sky surveys provide an excellent opportunity to study the effects of oxygen abundance variations on age determinations across a wide range of stellar parameters. To investigate the systematic effects of O-enhancement on age determination, we study the dwarf stars with available oxygen abundance measurements from LAMOST and GALAH. This paper is organized as follows: Section <ref> mentions the data selection; Section <ref> describes computations of stellar model grids; Section <ref> demonstrates ages differences between the O-enhanced models and α-enhanced models; the resulting age-abundance trends are presented in Section <ref>; and the conclusions of this work are drawn in Section <ref>. § TARGET SELECTION In this work, we make use of spectroscopic data from LAMOST DR5 Value Added Catalogue <cit.> and Third Data Release of GALAH <cit.>, together with astrometric data from Gaia Data Release 3 <cit.>. §.§ Spectroscopic Data LAMOST (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope) DR5 Value Added Catalog <cit.> contains more than 6 million stars with atmosphere parameters (T_ eff, log g, V_mic) and chemical abundances of 16 elements (C, N, O, Na, Mg, Al, Si, Ca, Ti, Cr, Mn, Fe, Co, Ni, Cu, and Ba). Measurements of element abundances are based on the DD–Payne tool <cit.>, which is a data-driven method that incorporates constraints from theoretical spectral models. It is noteworthy that, as discussed by <cit.>, the direct derivation of oxygen abundances from atomic oxygen lines or oxygen-bearing molecular lines in low-resolution (R ∼ 1800) LAMOST spectra is unfeasible. Alternatively, CH and CN molecular lines can be utilized for indirect estimation of oxygen abundances, as their strengths are sensitive to the amount of carbon locked up in CO molecules. As a result, the LAMOST oxygen abundances are only available in the cooler stars (T_ eff ≲ 5700 K), where the CH and CN lines have sufficient strength to allow a reasonably precise (±0.10 dex) estimate of [O/Fe] <cit.>. Due to the wide age range and the preservation of initial chemical abundances, the main-sequence star could be a good tracer of stellar populations. Therefore, we select the main-sequence stars with available measurements for [Fe/H], [α/Fe], and [O/Fe] from the catalog. Firstly, we use some recommended labels (T_ eff_flag = 1, log g_flag = 1, [Fe/H]_flag = 1, [X/Fe]_flag[[X/Fe]_flag = 1 for 14 elements (C, N, O, Na, Mg, Al, Si, Ca, Ti, Cr, Mn, Fe, Co, Ni).] = 1, qflag_chi2 = good) to select stars with reliable measurements. Afterward, we remove stars with T_ eff smaller than 5000 K or signal-to-noise ratio (S/N) less than 50 because their [O/Fe] determinations are not robust. <cit.> also provided a tag named “qflag_singlestar” to infer whether a star is single or belongs to a binary system. The tag is determined by the deviation significance of the spectroscopic parallax from the Gaia astrometric parallax. 
When the deviation is less than 3σ, it suggests an object is a single star. We use this tag to remove all candidate binaries from our sample. Finally, we choose stars with log g> 4.1. We lastly select a total of 187,455 unique stars. GALAH (Galactic Archaeology with HERMES) DR3 <cit.> presents stellar parameters (T_ eff, log g, [Fe/H], V_mic, V_broad, V_rad) and up to 30 elemental abundances for 588,571 stars, derived from optical spectra at a typical resolution of R ∼ 28,000. The oxygen abundance from GALAH DR3 was calculated using the O_ I 777 nm triplet <cit.>, based on a non-LTE method (LTE: local thermodynamic equilibrium)<cit.>. This NLTE method has also been employed for the measurement of [Fe/H] in GALAH. Following the recommendations in GALAH DR3, we require a SNR > 30, and a quality flag = 0 for reliable stellar parameter determination including iron, α-elements, and oxygen abundances (flag_sp = 0, flag_fe_h = 0, flag_alpha_fe = 0, and flag_o_fe = 0). Additionally, the sample is limited to the stars with e_alpha_fe < 0.1 and e_o_fe < 0.1. We exclude the binary systems identified by <cit.> (which is a catalog of FGK binary stars in GALAH). These cuts give us a sample of 19,512 dwarf stars (log g> 4.1). §.§ Astrometric Data We cross-match our selected LAMOST and GALAH samples with Gaia DR3 <cit.> catalog to obtain the luminosity for each star. Given that luminosity is utilized as a key observational constraint for estimating stellar age, we select stars with luminosity uncertainty less than 10%. Additionally, we select single stars by making a cut based on the Gaia re-normalized unit weight error (RUWE) being less than 1.2 (RUWE values are from the Gaia DR3). Our final sample consists of 149,906 stars from LAMOST (5000 K < T_ eff < 5725 K, -1 < [Fe/H] < 0.5, log g> 4.1) and 15,591 stars from GALAH (4500 K < T_ eff < 7000 K, -1 < [Fe/H] < 0.5, log g> 4.1). We calculate the Galactic Cartesian coordinates (X, Y, Z) and velocities (U, V, W) for the LAMOST sample using the Python package Galpy <cit.>. The distances are estimated by <cit.>. The Sun is located at (X, Y, Z) = (-8.3, 0, 0) kpc, and the solar motion with respect to the local standard of rest is (U_⊙, V_⊙, W_⊙) = (11.1, 12.24, 7.25) km s^-1 <cit.>. We use the Galactic Cartesian coordinates and velocities from the GALAH DR3 value-added catalog (VAC), which is based on astrometry from Gaia EDR3 and radial velocities determined from the GALAH spectra <cit.>. In Figure <ref>, we demonstrate dwarfs from LAMOST and GALAH in the Kiel diagram, and the [α/Fe][The [α/Fe] from both the LAMOST and GALAH catalog are defined as an error-weighted mean of [Mg/Fe], [Si/Fe], [Ca/Fe] and [Ti/Fe].]-[O/Fe] space to inspect their general distributions. The Kiel diagram in Figure <ref>(a) shows that most of the LAMOST dwarfs are cooler than 5700 K, while the GALAH dwarfs in Figure <ref>(b) covers a wider range of T_ eff (4500 - 7000 K). It should be noted that we do not apply any cut-off value at the high temperature side for the LAMOST sample. This upper limit is where reliable oxygen abundance can be measured by <cit.>. The [α/Fe]-[O/Fe] diagrams in Figure <ref>(c-d) show that the [O/Fe] generally increases with increasing [α/Fe], however, [O/Fe] widely spread at given α-enhanced values. The spreading is relatively large for low-α stars (especially for the GALAH sample), ranging from -0.4 to +0.6. c r c c[ht!] Metal Mixtures for the GS98 Solar Mixture, the α-Enhanced Mixture, and the O-Enhanced Mixture. 
Element   log N_⊙        log N_αEM        log N_OEM
C         8.52           8.52             8.52
N         7.92           7.92             7.92
O         8.83           8.83+[α/Fe]      8.83+[O/Fe]
F         4.56           4.56             4.56
Ne        8.08           8.08+[α/Fe]      8.08+[α/Fe]
Na        6.33           6.33             6.33
Mg        7.58           7.58+[α/Fe]      7.58+[α/Fe]
Al        6.47           6.47             6.47
Si        7.55           7.55+[α/Fe]      7.55+[α/Fe]
P         5.45           5.45             5.45
S         7.33           7.33+[α/Fe]      7.33+[α/Fe]
Cl        5.50           5.50             5.50
Ar        6.40           6.40             6.40
K         5.12           5.12             5.12
Ca        6.36           6.36+[α/Fe]      6.36+[α/Fe]
Sc        3.17           3.17             3.17
Ti        5.02           5.02+[α/Fe]      5.02+[α/Fe]
V         4.00           4.00             4.00
Cr        5.67           5.67             5.67
Mn        5.39           5.39             5.39
Fe        7.50           7.50             7.50
Co        4.92           4.92             4.92
Ni        6.25           6.25             6.25

Grid of Evolutionary Models with Two Metal Mixture Patterns.
Metal-mixture         [O/Fe] (dex)   [α/Fe] (dex)
O-enhanced mixture    -0.2           0
                      0.2            0
                      0.4            0
                      -0.1           0.1
                      0.3            0.1
                      0.5            0.1
                      0              0.2
                      0.4            0.2
                      0.2            0.3
                      0.4            0.3
                      0.5            0.3
                      0.6            0.3
α-enhanced mixture    0              0
                      0.1            0.1
                      0.2            0.2
                      0.3            0.3

Z Values of Fixed [Fe/H] with Two Metal Mixture Patterns.
[Fe/H] (dex)   [α/Fe] (dex)   [O/Fe] (dex)   Z
-1.0           0.1            0.1            0.0020
-1.0           0.1            0.5            0.0036
-0.8           0.1            0.1            0.0032
-0.8           0.1            0.5            0.0056
-0.6           0.1            0.1            0.0051
-0.6           0.1            0.5            0.0089
-0.4           0.1            0.1            0.0080
-0.4           0.1            0.5            0.0139
-0.2           0.1            0.1            0.0126
-0.2           0.1            0.5            0.0217
0              0.1            0.1            0.0197
0              0.1            0.5            0.0337

Atmosphere Parameters and Chemical Abundance for the Example Stars from LAMOST
Star (sobject_id)                    T_eff (K)   [Fe/H] (dex)   Luminosity (L_⊙)   [α/Fe] (dex)   [O/Fe] (dex)
20140313-HD145243N315530B-01-084     5619±22     -0.30±0.04     0.74±0.02          0.06±0.02      0.46±0.09
20141112-HD083415N451147V01-03-165   5652±24     -0.15±0.04     1.57±0.03          0.15±0.02      -0.02±0.08

§ STELLAR MODELS
§.§ Input Physics
We construct a stellar model grid using the Modules for Experiments in Stellar Astrophysics (MESA) code <cit.>. The versions of MESA and MESA SDK we used are Revision 12115 and Version 20.3.1, respectively. The EOS (Equation of State) tables in MESA are a blend of OPAL <cit.>, SCVH <cit.>, PTEH <cit.>, HELM <cit.>, and PC <cit.> EOS tables. Nuclear reaction rates are a combination of rates from NACRE <cit.> and JINA REACLIB <cit.>, plus additional tabulated weak reaction rates <cit.>. Screening is included via the prescription of <cit.>. Thermal neutrino loss rates are from <cit.>. The helium enrichment law is calibrated with the initial abundances of helium and heavy elements of the solar model given by <cit.>, and it results in Y = 0.248 + 1.3324 Z. The mixing-length parameter α_MLT is fixed to 1.82. Microscopic diffusion and gravitational settling of elements are necessary for stellar models of low-mass stars, as they modify the surface abundances and the main-sequence (MS) lifetimes <cit.>. Therefore, we include diffusion and gravitational settling using the formulation of <cit.>. We use the solar mixture GS98 from <cit.>. The opacity tables are OPAL high-temperature opacities [<http://opalopacity.llnl.gov/new.html>] supplemented by the low-temperature opacities <cit.>. We customize metal mixtures by introducing two enhancement factors, one for oxygen and one for all other α-elements (i.e., Ne, Mg, Si, S, Ca, and Ti). The two factors are applied in the same way as <cit.> to vary the number density (log N) of each element based on the GS98 solar mixture, as presented in Table <ref>. We make a number of opacity tables by varying the two enhancement factors according to the ranges of [α/Fe] and [O/Fe] values of the star sample. The enhancement values are shown in Table <ref>. For the mixtures with the same oxygen and α-element enhancement factors, we refer to them as the α-enhanced mixture (αEM); otherwise, as the O-enhanced mixture (OEM).
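To make the bookkeeping behind these mixtures concrete, the short Python sketch below applies the two enhancement factors to the GS98 log N values from the metal-mixture table and converts the result into a rough bulk metallicity Z through the helium enrichment law Y = 0.248 + 1.3324 Z quoted above. It is only an illustration of the procedure, not the code used to build the opacity tables or the MESA grid: the atomic weights, the function names, and the simple Z/X-to-Z conversion are our own assumptions, so the printed values land near, but not exactly on, the tabulated Z values.

```python
# Illustrative sketch (not the production grid code): build an O-enhanced
# metal mixture from the GS98 log N values and estimate the bulk metallicity
# Z via the helium enrichment law Y = 0.248 + 1.3324 Z used in the text.

# GS98 photospheric abundances, log N (H = 12.00), as listed in the table above.
GS98_LOGN = {
    "C": 8.52, "N": 7.92, "O": 8.83, "F": 4.56, "Ne": 8.08, "Na": 6.33,
    "Mg": 7.58, "Al": 6.47, "Si": 7.55, "P": 5.45, "S": 7.33, "Cl": 5.50,
    "Ar": 6.40, "K": 5.12, "Ca": 6.36, "Sc": 3.17, "Ti": 5.02, "V": 4.00,
    "Cr": 5.67, "Mn": 5.39, "Fe": 7.50, "Co": 4.92, "Ni": 6.25,
}
ALPHA = {"Ne", "Mg", "Si", "S", "Ca", "Ti"}  # alpha elements other than O

# Approximate atomic weights (assumed values, for illustration only).
WEIGHT = {
    "C": 12.0, "N": 14.0, "O": 16.0, "F": 19.0, "Ne": 20.2, "Na": 23.0,
    "Mg": 24.3, "Al": 27.0, "Si": 28.1, "P": 31.0, "S": 32.1, "Cl": 35.5,
    "Ar": 39.9, "K": 39.1, "Ca": 40.1, "Sc": 45.0, "Ti": 47.9, "V": 50.9,
    "Cr": 52.0, "Mn": 54.9, "Fe": 55.8, "Co": 58.9, "Ni": 58.7,
}

def mixture_logn(feh, alpha_fe, o_fe):
    """Shift the GS98 mixture by [Fe/H], then enhance O and the other alphas."""
    logn = {}
    for element, solar in GS98_LOGN.items():
        value = solar + feh               # scale all metals with [Fe/H]
        if element == "O":
            value += o_fe                 # oxygen gets its own factor (OEM)
        elif element in ALPHA:
            value += alpha_fe             # remaining alpha elements
        logn[element] = value
    return logn

def bulk_z(feh, alpha_fe, o_fe):
    """Rough Z from number densities, with hydrogen normalized to log N = 12."""
    logn = mixture_logn(feh, alpha_fe, o_fe)
    # Z/X = sum(A_i * N_i) / (A_H * N_H), with N_i = 10**(logN_i - 12).
    z_over_x = sum(WEIGHT[el] * 10 ** (logn[el] - 12.0) for el in logn) / 1.008
    # Combine with X + Y + Z = 1 and Y = 0.248 + 1.3324 Z.
    return 0.752 / (1.0 / z_over_x + 2.3324)

if __name__ == "__main__":
    # Roughly tracks the tabulated Z at [Fe/H] = 0, [alpha/Fe] = 0.1.
    for o_fe in (0.1, 0.5):
        print(f"[Fe/H]=0, [alpha/Fe]=0.1, [O/Fe]={o_fe}: Z ~ {bulk_z(0.0, 0.1, o_fe):.4f}")
```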
Fundamental Parameters and Chemical Abundance for the Example Stars from GALAH Star T_ eff [Fe/H] Luminosity [α/Fe] [O/Fe] Mass_α EM Mass_ Buder2021 Age_α EM Age_ Buder2021 sobject_id (K) (dex) (L_⊙) (dex) (dex) (M_⊙) (M_⊙) (Gyr) (Gyr) 171230005802396 6096±76 -0.23±0.06 2.26±0.07 0±0.02 0.02±0.08 1.06±0.03 1.03±0.04 6.08±1.01 6.46±1.17 160529003401378 5846±76 -0.42±0.06 1.67±0.03 0.31±0.03 0.34±0.09 0.97±0.03 0.96±0.03 9.53±1.26 10.04±1.39 * The masses (Mass_ Buder2021) and ages (Age_ Buder2021) of the two example stars from the GALAH value-added catalog <cit.> are calculated based on PARSEC stellar isochrones (the PAdova and TRieste Stellar Evolution Code) <cit.>. §.§ Grid Computations We establish stellar model grids that include various metal-mixture patterns as indicated in Table <ref>. The mass range is from 0.6 to 1.2 M_⊙ with a grid step of 0.02 M_⊙. Input [Fe/H] values range from -1.20 to +0.46 dex with a grid step of 0.02 dex. The computation starts at the Hayashi line and terminates at the end of main-sequence when core Hydrogen exhausts (mass fraction of center hydrogen goes below 10^-12). The inlist file (for MESA) utilized in the computation of our stellar models is available on Zenodo: [doi:10.5281/zenodo.7866625]https://doi.org/10.5281/zenodo.7866625 To explicate the effect of oxygen enhancement on the evolutionary tracks, we provide an exposition of representative evolutionary tracks in Figure <ref>. The corresponding values of Z are listed in Table <ref>. At fixed [Fe/H], the variation of [O/Fe] would influence opacity, which could influence the energy transfer efficiency and the thermal structure. We find that the larger [O/Fe] leads to higher opacity at input [Fe/H] ≤ -0.2, and shifts the evolutionary tracks to lower T_ eff. As seen in Figure <ref>, at [Fe/H] ≤ -0.2, O-rich models are generally cooler than the α-enhanced models at given input [Fe/H], leading to higher modeling-determined masses (smaller ages) for a given position on the HR diagram (left panel of Figure <ref>). However, at input [Fe/H] = 0, larger [O/Fe] leads to lower opacity, and shifts the evolutionary tracks to higher T_ eff. The O-rich models are slightly hotter than the α-enhanced models. Overall, at fixed mass, the T_ eff difference between the two models becomes significant with smaller [Fe/H]. In addition, we note that the 1.1 M_⊙ and 1.2 M_⊙ tracks of O-rich models show different behavior compared with the tracks of 0.7 ∼ 1.0 M_⊙. The O-rich models with 1.1 M_⊙ show a blue hook morphology at [Fe/H] ≤ -0.8, which enlarges the T_ eff difference between two models at this evolutionary phase. At 1.2 M_⊙, both models show a blue hook morphology at the end of main-sequence, and the T_ eff difference keeps approximately constant at [Fe/H] ≤ -0.6. Figure <ref> presents the stellar evolution tracks of two example stars calculated with αEM and OEM models. Figure <ref>(a) presents the tracks of a star with observed [α/Fe] ∼ 0.1, [O/Fe] ∼ 0.5. Based on the αEM models (input [α/Fe] = 0.1, [O/Fe] = 0.1), we obtain the best-fit values of fundamental parameters for this star: mass = 0.87 ± 0.02 M_⊙, age = 8.69 ± 1.49 Gyr (the fitting method is described in detail in Section <ref>). Using the OEM models (input [α/Fe] = 0.1, [O/Fe] = 0.5), we estimate it to be a young star with mass = 0.90 ± 0.02 M_⊙, age = 5.68 ± 1.44 Gyr. 
The mean mass of the OEM models ([O/Fe] = 0.5) inside the observational error box is larger than that of the αEM models ([O/Fe] = 0.1), leading to a smaller modeling-determined age for this star. Figure <ref>(b) shows the tracks of a star with observed [α/Fe] ∼ 0.2 and [O/Fe] ∼ 0. We obtain a mass of 0.99 ± 0.01 M_⊙ and an age of 10.51 ± 0.60 Gyr for this star with the αEM models (input [α/Fe] = 0.2, [O/Fe] = 0.2), and a mass of 0.98 ± 0.02 M_⊙ and an age of 11.34 ± 0.51 Gyr with the OEM models (input [α/Fe] = 0.2, [O/Fe] = 0). As seen, the OEM models with input [O/Fe] = 0 are generally hotter than the αEM models ([O/Fe] = 0.2) at fixed mass and [Fe/H], leading to a smaller modeling-determined mass and a larger age for this star.

§.§ Fitting Method

We constrain stellar masses and ages using five observed quantities, i.e., T_eff, luminosity, [Fe/H], [α/Fe], and [O/Fe]. Note that [O/Fe] is not used when estimating parameters with the αEM models. We follow the fitting method of <cit.>. According to Bayes' theorem, we compare the model predictions with the corresponding observational properties D to calculate the posterior probability of model M_i given prior information I,

p(M_i | D, I) = p(M_i | I) p(D | M_i, I) / p(D | I),

where p(M_i | I) is the uniform prior probability for a specific model and p(D | M_i, I) is the likelihood function:

p(D | M_i, I) = L(T_eff, [Fe/H], lum) = L_T_eff × L_[Fe/H] × L_lum.

The term p(D | I) in Equation <ref> is a normalization factor for the specific model probability:

p(D | I) = ∑_j=1^N_m p(M_j | I) p(D | M_j, I),

where N_m is the total number of selected models. Because the uniform priors p(M_i | I) cancel, Equation <ref> simplifies to

p(M_i | D, I) = p(D | M_i, I) / ∑_j=1^N_m p(D | M_j, I).

Equation <ref> then gives the probability distribution over the selected models, from which the most probable fundamental parameters are derived. As demonstrated in Figure <ref>, we fit a Gaussian function to the likelihood distribution of mass and of age for each star. The mean and standard deviation of the resulting Gaussian profile are adopted as the value and uncertainty of the corresponding fundamental parameter (mass or age). To identify stars located near the edge of the model grid, we consider a 3σ error box (i.e., three times the observational error, depicted as a blue square in Figure <ref>) on the HR diagram and divide the error box into 100 bins. For a given star, if more than 5 bins contain no theoretical model (sampling rate < 95%), we flag the star with "edge effect".
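As a concrete illustration of this grid-based fitting, the sketch below combines Gaussian likelihoods in T_eff, [Fe/H], and luminosity, normalizes them over the selected models, and summarizes the resulting mass and age distributions with a Gaussian fit. It is an illustrative reimplementation under our own assumptions (array layout, use of NumPy/SciPy), not the code used in this work.

import numpy as np
from scipy.optimize import curve_fit

def fit_star(obs, err, grid):
    """Grid-based Bayesian fit for one star (illustrative sketch).

    obs/err : dicts with observed values and 1-sigma errors for
              'teff', 'feh', 'lum'.
    grid    : dict of equal-length arrays 'teff', 'feh', 'lum',
              'mass', 'age' for the selected stellar models.
    """
    # Gaussian likelihood for each observable (uniform priors cancel).
    def gauss_like(model, value, sigma):
        return np.exp(-0.5 * ((model - value) / sigma) ** 2)

    like = (gauss_like(grid["teff"], obs["teff"], err["teff"])
            * gauss_like(grid["feh"], obs["feh"], err["feh"])
            * gauss_like(grid["lum"], obs["lum"], err["lum"]))
    prob = like / like.sum()  # normalized model probabilities

    # Summarize mass and age by fitting a Gaussian to their weighted histograms.
    def gaussian(x, amp, mu, sigma):
        return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    results = {}
    for key in ("mass", "age"):
        hist, edges = np.histogram(grid[key], bins=50, weights=prob)
        centers = 0.5 * (edges[:-1] + edges[1:])
        p0 = [hist.max(), centers[np.argmax(hist)], np.std(grid[key]) or 1.0]
        (_, mu, sigma), _ = curve_fit(gaussian, centers, hist, p0=p0)
        results[key] = (mu, abs(sigma))  # adopted value and uncertainty
    return results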
To assess the accuracy of our models and investigate the potential model dependency of the age and mass determinations, we compare the results obtained from our αEM models, our OEM models, and the GALAH DR3 value-added catalog <cit.>. Figure <ref> shows the comparison of age and mass estimates for ∼4,000 GALAH stars with age uncertainties of less than 30%, based on the αEM models, the OEM models, and the GALAH DR3 VAC <cit.>. The ages and masses of stars in the GALAH DR3 VAC are calculated using the PARSEC (the PAdova and TRieste Stellar Evolution Code) release v1.2S + COLIBRI stellar isochrones <cit.>, which adopt a solar-scaled metal mixture, i.e., input [α/Fe] = 0. Figure <ref> illustrates that the results follow the one-to-one relation quite well for most stars. It is noteworthy that the adopted approach uses a flat prior on age with an age cap of 13.2 Gyr <cit.>. Consequently, the ages of the majority of stars in the GALAH DR3 VAC are younger than 12 Gyr (with masses larger than 0.8 M_⊙), which results in a relatively large dispersion of the age differences, amounting to 12.4% for the αEM models and 13.0% for the OEM models. Significant systematic differences are apparent between PARSEC and the αEM models in Figure <ref>(a-b), with the former yielding ages that are 2.3% older and masses that are 1.5% smaller than the latter. These discrepancies could be attributed to differences in the input physics of the two models, such as the input [α/Fe] value, the helium abundance, and the mixing-length parameter. In Figure <ref>(c-d), PARSEC yields ages that are 5.5% older and masses that are 1.9% smaller than the OEM models. Compared with the αEM models, the OEM models thus show more pronounced systematic differences from PARSEC. These distinctions primarily arise from the consideration of O-enhancement in the OEM models, which leads to younger ages and higher masses. In addition, a comparison of the results obtained from our αEM models and the Yonsei–Yale <cit.> stellar isochrones is shown in Figure <ref> in the Appendix.

§ RESULTS

This work aims to determine the ages of dwarfs taking the oxygen abundance into account and to study the chemical and kinematic properties of the high-α and low-α populations in the Galactic disk. We derive the masses and ages of 149,906 LAMOST dwarfs and 15,591 GALAH dwarfs with both the αEM and OEM models. We remove ∼30% of the stars with sampling rates < 95%, which are located near the edge of the model grid. In addition, we remove ∼3% of the stars whose inferred ages are more than 2σ larger than the age of the universe (i.e., age - 2 × age uncertainty > 13.8 Gyr) <cit.> owing to their significant model systematic bias. Finally, we remove ∼35% of the stars with relative age uncertainties larger than 30 percent. After these cuts, we obtain the ages of 67,503 dwarfs from LAMOST with a median age uncertainty of ∼16%, and 4,006 dwarfs from GALAH with a median age uncertainty of ∼18%.

The age estimation of dwarf stars is inherently accompanied by considerable uncertainty, which can reach up to 30% within our sample. Furthermore, uncertainties (especially systematic errors) in the atmospheric parameters can introduce biases in the age estimation. Consequently, a minority of stars in our sample exhibit ages that exceed the age of the universe. This occurrence is not uncommon, as even samples of subgiants with more precise age determinations show analogous cases <cit.>.

§.§ Oxygen Effect on Age Determinations

§.§.§ Mock Data Test

Most of the stars in both the LAMOST and GALAH samples are distributed in a relatively narrow range of [Fe/H] (-0.5 to +0.5 dex). To systematically investigate the effect of O-enhancement on age determinations over a wide range of T_eff and [Fe/H], we apply a mock data test based on our grid of stellar models. For each set of stellar model grids with fixed [Fe/H], [α/Fe], and [O/Fe] values, we draw random samples from the distributions of stellar evolution tracks in the H-R diagram. We adopt 0.05 dex and 30 K as the observational errors for [Fe/H] and T_eff, respectively, and a fractional error of 2% for luminosity. Finally, we generate mock data for 0.15 million stars with age uncertainties of less than 30 percent.
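The following sketch outlines one way such a mock catalogue can be drawn, by perturbing points sampled from the evolutionary tracks with the adopted observational errors. It is a schematic outline only; the track-sampling step and the data layout are placeholders, not the actual pipeline.

import numpy as np

rng = np.random.default_rng(42)

# Adopted observational errors (from the text above).
ERR_FEH, ERR_TEFF, FRAC_ERR_LUM = 0.05, 30.0, 0.02

def make_mock_star(track_point):
    """Perturb a point sampled from an evolutionary track with Gaussian errors.

    track_point: dict with the 'true' model values 'teff' (K), 'feh' (dex),
                 'lum' (L_sun), plus 'mass' and 'age' kept as ground truth.
    """
    return {
        "teff": track_point["teff"] + rng.normal(0.0, ERR_TEFF),
        "feh":  track_point["feh"] + rng.normal(0.0, ERR_FEH),
        "lum":  track_point["lum"] * (1.0 + rng.normal(0.0, FRAC_ERR_LUM)),
        "true_mass": track_point["mass"],
        "true_age": track_point["age"],
    }

# Example with a single fictitious track point; in the real test, points are
# drawn from every grid of fixed [Fe/H], [alpha/Fe], and [O/Fe].
point = {"teff": 5800.0, "feh": -0.4, "lum": 1.2, "mass": 0.92, "age": 7.5}
mock = make_mock_star(point)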
Figure <ref>(a) shows the distribution of the mock stars on the HR diagram, and Figure <ref>(b-c) compares the T_eff and [Fe/H] distributions of the mock data with those of the observational data. Compared with the LAMOST and GALAH dwarfs, the mock stars cover wider ranges of T_eff (5000–7000 K) and [Fe/H] (-1.0 to +0.4 dex). The mock data are therefore useful for a statistical study of the oxygen effect on age determinations.

Figure <ref> shows a comparison between the ages determined with the αEM models (τ_αEM) and the OEM models (τ_OEM). The mock stars are grouped by their [Fe/H] and [O/α] values. Stars with [O/α] > 0 are hereafter referred to as high-O stars and stars with [O/α] < 0 as low-O stars. Generally, high-O stars have younger ages based on the OEM models, while low-O stars become older. The effect of oxygen enhancement on the age determination is relatively significant for stars with [Fe/H] < -0.2. At [O/α] = -0.2, the mean fractional age difference ((τ_OEM - τ_αEM)/τ_αEM) is 10.5% for metal-rich stars (-0.2 < [Fe/H] < 0.2) and 15.5% for relatively metal-poor stars (-1 < [Fe/H] < -0.2). The mean fractional age difference at [O/α] = 0.2 is -9.2% for metal-rich stars and -16.5% for relatively metal-poor stars. The largest fractional age difference comes from high-O stars with [O/α] = 0.4, which have a mean fractional age difference of -20.2% at -0.2 < [Fe/H] < 0.2 and -30.6% at -1 < [Fe/H] < -0.2. We find clear age offsets that correlate with the [Fe/H] and [O/α] values: increasing [O/α] by 0.2 dex reduces the age estimates of metal-rich stars by ∼10% and of metal-poor stars by ∼15%. The mock data provide far more stars at the metal-poor edge than the observational data, which allows the age differences at different [O/α] and [Fe/H] values to be presented clearly.

§.§.§ Observational Data

Figure <ref> presents the fractional age differences between the αEM and OEM models for the observational (LAMOST and GALAH) and mock data. The overall average age offset (absolute value of the age difference) of stars from LAMOST and GALAH is 8.9% and 8.6%, respectively. Many of the low-O stars with [Fe/H] < 0.1 dex and [O/α] ∼ -0.2 dex have fractional age differences of ≥ 10%, reaching up to 27%. The mean fractional age difference of high-O stars with [O/α] ∼ 0.4 dex is ∼ -25%. The age offsets are relatively significant for metal-poor stars: the largest age differences are -33% to -42% for stars with [Fe/H] ≲ -0.6 dex and [O/α] ∼ 0.4 dex. For the mock data, we note that the trend of age offsets versus [Fe/H] is consistent with that of the observational data. The age offsets of both samples increase significantly with decreasing metallicity at [Fe/H] ≳ -0.6. Interestingly, there is a slight increase in the age offsets with decreasing metallicity at [Fe/H] < -0.6. This trend of the age offsets is consistent with the change of the T_eff difference as a function of [Fe/H] (shown in Figure <ref>), as discussed in Section <ref>.

§.§ Age-Abundance Relations

To trace the chemical evolution history of the Galactic disk, we present the age-abundance relations of the LAMOST sample (consisting of 67,511 stars) and the GALAH sample (consisting of 4,006 stars) using the ages from the OEM models. For each sample, we employ local nonparametric regression fitting (a LOESS model) to characterize the trends in these relations more clearly.
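As an illustration of this smoothing step, the snippet below applies a LOWESS fit to an age-[O/Fe] sequence. The use of statsmodels and the chosen smoothing fraction are our assumptions for demonstration, not a description of the analysis code used here.

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def loess_trend(age, abundance, frac=0.3):
    """Return the LOESS-smoothed trend of an abundance ratio versus age.

    age, abundance : 1-D arrays (e.g., age in Gyr and [O/Fe] in dex).
    frac           : fraction of the sample used for each local fit
                     (an illustrative choice, not the value used in the paper).
    """
    smoothed = lowess(abundance, age, frac=frac, return_sorted=True)
    return smoothed[:, 0], smoothed[:, 1]  # sorted ages, fitted abundance trend

# Toy usage with random numbers standing in for the stellar sample.
rng = np.random.default_rng(0)
age = rng.uniform(1.0, 13.0, 2000)
o_fe = 0.02 * (9.0 - age) + rng.normal(0.0, 0.05, age.size)
age_grid, trend = loess_trend(age, o_fe)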
Figure <ref> illustrates the results for the LAMOST sample. In Figure <ref>(a), a gradual decline in [Fe/H] is observed across the age range of ∼9 Gyr to ∼6.5 Gyr. This trend is similar to the metal-rich branch observed in young stars (age < 8 Gyr) by <cit.>, whose metal-rich branch stars span a metallicity range of approximately -0.2 to +0.4. Notably, <cit.> also identifies a trend comparable to our findings, whereby their sample exhibits a [Fe/H] value of 0.4 at 8 Gyr, diminishing to around -0.2 at 6 Gyr. The "two-infall" chemical evolution model <cit.> predicts an infall of metal-poor gas commencing roughly 9.4 Gyr ago <cit.>. The trend of decreasing metallicity from 9 Gyr to 6.5 Gyr in our results may be related to this infalling metal-poor gas. Intriguingly, the "two-infall" model not only anticipates a decline in metallicity but also predicts an increase in the oxygen abundance, which is consistent with the trend observed in Figure <ref>(b). In Figure <ref>(b), the LAMOST sample stars exhibit an increase in [O/Fe] as the age decreases from 9 Gyr to 4 Gyr, indicating a slight enrichment of oxygen in the younger stellar population.

Figure <ref> presents the results for the GALAH sample. It is noteworthy that the GALAH stars display a decrease in [Fe/H] from ∼7.5 Gyr to 5 Gyr. Furthermore, the [O/Fe] of the GALAH stars exhibits a slight decrease with age over the range of ∼7.5 Gyr to 3 Gyr. The GALAH sample exhibits age-[Fe/H] and age-[O/Fe] trends similar to those observed in LAMOST; however, an overall slight temporal discrepancy can be observed. This incongruity may be ascribed to dissimilarities in sample composition or to systematic differences in the atmospheric parameters between the two survey data sets. The GALAH sample, on the whole, exhibits higher temperatures than the LAMOST sample (5000–5700 K), indicating a relatively younger population. Furthermore, the determinations of [Fe/H] and [O/Fe] from GALAH are based on a non-LTE method <cit.>, which can also affect the observed trends.

In conclusion, the analysis of the LAMOST and GALAH samples reveals a decreasing trend of [Fe/H] with age from 7.5–9 Gyr to 5–6.5 Gyr, and a notable upward trend in [O/Fe] as the age decreases from 7.5–9 Gyr to 3–4 Gyr. These results agree with the prediction of the "two-infall" scenario and suggest that metal-poor, O-rich gas gradually came to dominate star formation from 7.5–9 Gyr ago. As discussed in Section <ref>, oxygen has a unique origin, being primarily produced by CCSNe <cit.>. The observed age-[O/Fe] trend therefore plays a distinct role in characterizing the chemical evolution history of the Milky Way and in constraining chemical evolution models. Neglecting the independent enhancement of the oxygen abundance in the age determination would result in significant age biases, as discussed in Section <ref>. Such biases would obscure the age-[O/Fe] relation, as depicted in Figure <ref> in the Appendix, where the rising trend of [O/Fe] with decreasing age remains imperceptible at age < 9 Gyr. We therefore suggest that treating the oxygen abundance independently in stellar models is crucial: it aids in accurately characterizing the age-[O/Fe] relation and provides better constraints for Galactic chemical evolution models.

§ CONCLUSIONS

To determine the ages of dwarfs taking the observed oxygen abundance into account, we construct a grid of stellar models that treats the oxygen abundance as an independent model input. We generate mock data for 0.15 million stars to systematically study the effect of the oxygen abundance on age determination.
Based on the α-enhanced and O-enhanced models, we obtain the masses and ages of 67,503 stars from LAMOST and 4,006 stars from GALAH, and we analyze the chemical and kinematic properties of these stars in combination with the ages from the O-enhanced models. Our main conclusions are summarized as follows:

(1) The ages of high-O stars based on the O-enhanced models are smaller than those determined with the α-enhanced models, while low-O stars become older. We find clear age offsets that correlate with the [Fe/H] and [O/α] values. Varying [O/α] by 0.2 dex alters the age estimates of metal-rich (-0.2 < [Fe/H] < 0.2) stars by ∼10% and of relatively metal-poor (-1 < [Fe/H] < -0.2) stars by ∼15%.

(2) The overall average age offset (absolute value of the age difference) between the α-enhanced and O-enhanced models is 8.9% for the LAMOST stars and 8.6% for the GALAH stars. Many of the low-O stars with [Fe/H] < 0.1 dex and [O/α] ∼ -0.2 dex have fractional age differences of ≥ 10%, reaching up to 27%. The mean fractional age difference of high-O stars with [O/α] ∼ 0.4 dex is ∼ -25%, reaching -33% to -42% at [Fe/H] ≲ -0.6 dex.

(3) Based on the LAMOST and GALAH samples, we observe a decreasing trend of [Fe/H] with age from 7.5–9 Gyr to 5–6.5 Gyr. Furthermore, the [O/Fe] of both samples increases with decreasing age from 7.5–9 Gyr to 3–4 Gyr, which indicates that the younger population of these stars is more O-rich. Our results agree with the prediction of the "two-infall" scenario and suggest that metal-poor, O-rich gas gradually came to dominate star formation from 7.5–9 Gyr ago.

We thank the anonymous referee for valuable comments and suggestions that have significantly improved the presentation of the manuscript. This work is based on data acquired through the Guoshoujing Telescope. The Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope; LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. This work used data from the GALAH survey, which is based on observations made at the Anglo-Australian Telescope under programs A/2013B/13, A/2014A/25, A/2015A/19, A/2017A/18, and 2020B/23. This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This work is supported by the National Key R&D Program of China No. 2019YFA0405503, the Joint Research Fund in Astronomy (U2031203) under a cooperative agreement between the National Natural Science Foundation of China (NSFC) and the Chinese Academy of Sciences (CAS), and NSFC grants (12090040, 12090042). This work is partially supported by the CSST project and the Scholar Program of Beijing Academy of Science and Technology (DZ:BS202002). This paper has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (CartographY GA. 804752).
Figure <ref> depicts the age and mass determinations for ∼15,000 LAMOST stars (with [α/Fe] ∼ 0.1) and reveals a satisfactory correspondence between the αEM models and the YY isochrones <cit.>: the dispersions of the relative age and mass differences between the two models are only 6.4% and 1.1%, respectively. However, slight systematic differences remain, with the YY isochrones yielding ages that are 3.6% older and masses that are -0.4% smaller than those from the αEM models.